Virgin Coconut Oil for HIV-Positive People

The objective of the study was to determine the effect of Virgin Coconut Oil (VCO) supplementation of 3 x 15 ml/day for 6 weeks on CD4+ T lymphocyte concentration. The study was conducted at the special health centre of Dharmais Cancer Hospital, Jakarta. The methods involved an experimental study with parallel design on 40 HIV-positive subjects with CD4+ T lymphocyte counts > 200 cells/μL, divided into two groups: a VCO group, whose subjects received VCO supplementation of 3 x 15 ml/day for 6 weeks, and a non-VCO group (without VCO supplementation). Data collected included demographic characteristics (age and sex), anthropometry (weight, height and body mass index), daily intake by 1 x 24-hour food recall, and laboratory values (CD4+ T lymphocyte count). Statistical analysis was performed with the independent t test and the Mann-Whitney U test. The results can be summarised as follows. The average BMI was 20.8 ± 2.29 kg/m² (VCO group) and 20.7 ± 3.38 kg/m² (non-VCO group). Energy and fat intakes were 1459 ± 327.4 Cal/day and 81.8 ± 19.35 g/day in the VCO group versus 1101 ± 319.8 Cal/day and 37.1 ± 19.35 g/day in the non-VCO group. Carbohydrate and protein intakes were 143.8 ± 44.58 g/day and 41.6 ± 14.04 g/day in the VCO group versus 151.6 ± 14.04 g/day and 39.5 ± 18.31 g/day in the non-VCO group. There was a significant difference (p = 0.047) in average CD4+ T lymphocyte count after 6 weeks of intervention between the VCO group (481 ± 210.0 cells/μL) and the non-VCO group (343 ± 129.1 cells/μL). The conclusion is that Virgin Coconut Oil supplementation of 3 x 15 ml/day for 6 weeks increases CD4+ T lymphocyte concentration in HIV patients.

Introduction

HIV/AIDS is a global crisis, affecting many aspects of life. Social stigma and the economic cost of HIV/AIDS have been haunting many patients, societies and governments. The cost of treatment and prevention has been a serious burden not only for developing countries, but for industrialized ones as well (Walker, 2003). Since the first report of HIV infection in 1981, more than 40 million people have been infected and more than 20 million of them have died from AIDS (UNAIDS, 2004). The prevalence of AIDS varies among countries; the highest is reported in Sub-Saharan Africa, which has a 30% rate of infection. It is estimated that, with advances in early diagnosis, the number of HIV/AIDS patients will rise significantly (Kamps and Hoffmann, 2005). In Indonesia, the first case of AIDS was reported in a foreign tourist in Bali in 1987. HIV/AIDS has now spread to all the provinces of Indonesia. No reliable data exist on how many people suffer from the disease, but experts estimate that about 80,000 to 120,000 Indonesians live with HIV (Sujudi, 2002). HIV mainly destroys the immune system, decreasing the quantity and quality of T lymphocytes, especially CD4+ cells. Progression of the disease depends mainly on the host immune response, which is measured by the amount of CD4+ cells in the body (the CD4 count). Therefore, the CD4 count is the basis of HIV infection classification (Kamps and Hoffmann, 2005). Nutrition is well known for its immune-stimulating effects. Malnutrition can aggravate the disease by upregulating viral replication (Scrimshaw and San Giovanni, 1997). On the other hand, optimal nutritional intake helps ensure an adequate immune response in HIV patients (Kotler, 1992). Coconut oil has long been used not only as food, but also as a traditional remedy.
The indigenous populations of the Asia Pacific, which consume coconut and coconut oil, have long been known for healthy, long lives. Despite the many benefits, many publications have focused on the negative effects of coconut oil, namely its high content of saturated fatty acids (SAFA). Saturated fatty acid is believed to be a main cause of coronary artery disease. This belief has led people to turn to other sources of plant oil, which are low in SAFA, for their daily food consumption. The belief is not entirely accurate, because the SAFA in coconut oil consist mainly of medium-chain triglycerides (MCT), which have many beneficial health effects. Coconut oil has unique features: it is not only a source of medium-chain fatty acids, which are more easily absorbed and utilized by cells, it also contains lauric acid and capric acid, which have antimicrobial effects (Odle, 1997; Klein et al, 1999). These substances can destroy bacteria and viruses that have a lipid layer on their membranes (Enig, 1998). Because of its fatty acid and other nutrient contents, coconut oil is thought to be beneficial to HIV patients.

Methods

This trial was an experimental study conducted at the Dharmais Cancer Hospital Special Clinic, Jakarta, for six weeks (June to August 2006). Written informed consent was obtained from subjects or legal guardians. Inclusion criteria were age between 18 and 59 years, HIV-positive status with CD4+ count > 200 cells/µL, and no antiretroviral (ARV) treatment. Exclusion criteria were chronic protein-energy malnutrition (body mass index < 17 kg/m²), history of cardiovascular disease or diabetes mellitus (from anamnesis), and pregnancy or breastfeeding. Subjects were removed from the study in case of death, refusal to continue the trial, or difficulty following the protocol. Forty subjects who met the inclusion criteria were admitted to the study. The subjects were assigned by block randomization into two groups designated VCO and non-VCO, with 20 subjects in each group. The only difference in treatment was that all subjects in the VCO group received VCO 3 x 15 ml/day for six weeks, while the non-VCO group did not. Demographic data (age and sex) were recorded; anthropometric measurements included height and weight to determine body mass index (BMI); nutritional intake was assessed with a 1 x 24-hour food recall to establish daily energy and macronutrient intake; and laboratory assessment (CD4+ count) was performed on all subjects. Statistical analysis used the independent t test for group differences when data were normally distributed, and the Mann-Whitney test otherwise.

Results

In the VCO group, 57% of subjects were between 18 and 29 years old, and in the non-VCO group 71% were between 18 and 29 years old. Women formed the majority of the VCO group (71%), whereas men formed the majority of the non-VCO group (64%). Twelve subjects (6 from each group) dropped out of the study: eight were lost to follow-up (loss of contact), three moved outside Jakarta, and one enrolled in another trial. 92% of subjects had acquired HIV from intravenous drug use (IDU), 5% from heterosexual intercourse, and 2% from homosexual intercourse. There was no significant difference in anthropometric measurements before and after treatment (Table 1).
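As a concrete illustration of the statistical analysis described in Methods, the following is a minimal sketch in Python; the CD4 arrays are hypothetical placeholders rather than the study data, and scipy's standard implementations of the tests are assumed.

```python
# Minimal sketch of the group comparison described in Methods; the CD4 counts
# below are hypothetical placeholders, not the study data.
import numpy as np
from scipy import stats

cd4_vco = np.array([520, 610, 430, 480, 390, 455, 512, 470, 445, 601, 380, 410, 495, 530])
cd4_non = np.array([350, 290, 410, 330, 275, 360, 315, 400, 385, 295, 340, 310, 365, 320])

# The paper uses the independent t test for normally distributed variables
# and the Mann-Whitney U test otherwise, so check normality first.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (cd4_vco, cd4_non))

if normal:
    result = stats.ttest_ind(cd4_vco, cd4_non)
else:
    result = stats.mannwhitneyu(cd4_vco, cd4_non, alternative="two-sided")

print(f"p = {result.pvalue:.3f}")  # p < 0.05 indicates a significant group difference
```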
Nutritional assessment

There was no significant difference in energy intake before treatment (p = 0.37); by contrast, there was a significant difference in energy intake after treatment (p = 0.007) between VCO and non-VCO (Figure 1). There were no significant differences in carbohydrate (Figure 2) or protein intake (Figure 3), either before or after treatment. There was a significant difference in fat intake after treatment (p < 0.001) between VCO and non-VCO (Figure 4).

Energy Intake Proportion

There was no significant difference in energy requirement (p = 0.084) between VCO and non-VCO. However, there was a significant difference in energy intake proportion (energy intake per energy requirement) (p = 0.004) between VCO and non-VCO (Table 2).

CD4+ T Lymphocyte Count

There was no significant difference in CD4+ T lymphocyte count (p = 0.37) before treatment between VCO and non-VCO. However, there was a significant difference in CD4+ T lymphocyte count (p = 0.047) after treatment (Figure 5).

Discussion

The 30% dropout rate in this study reduced the statistical power from 80% to 60-70%. The high number of dropouts resulted from the fact that most of the subjects (92%) were injecting drug users, who are prone to relapse because of emotional factors or peer pressure; this made it difficult to keep subjects on protocol. 92% of subjects had acquired HIV from IDU. In developing countries, HIV spreads mostly through IDU (WHO, 2005). A study by Badan Narkotika Nasional (BNN, 2004) in ten big cities in Indonesia (Medan, Jakarta, Bandung, Semarang, Yogyakarta, Surabaya, Makasar, Denpasar, Manado and Batam) found that 56% of 572 thousand people were intravenous drug users and 40%, or 229 thousand, were HIV positive. These figures indicate that HIV cases rose very sharply over the preceding four years. Body mass index was not significantly different between VCO and non-VCO, either before or after treatment; the short treatment period and the similar proportions of energy intake were the main reasons for that result. An interesting finding of this study concerned the energy intake proportion, for which there are several explanations. First, inaccuracy or bias may have occurred in the food recall interviews. Second, the reference for energy requirement from the AKG (Angka Kebutuhan Gizi, the Indonesian RDA) may not be suitable for the subjects (too high). Several studies have reported the same result when using the AKG as the energy requirement reference (Muhilal et al., 1998; Hatma, 2001; and Nugraha, 2005).

The CD4+ T lymphocyte count is used to indicate HIV disease progression, because HIV binds to this receptor, resulting in destruction and decline of CD4+ T lymphocytes. This study showed a significant difference in CD4+ T lymphocyte count between VCO and non-VCO, indicating that VCO supplementation had a positive influence on CD4+ T lymphocytes. The same result was obtained by Dayrit (2000). One reason for the positive influence of VCO supplementation on CD4+ T lymphocytes is the high content of lauric and capric acid in VCO; these fatty acids have antiviral and antibacterial capability. Takatsuki et al. (1969) showed that fatty acids are toxic to cultivated viruses, and that the longer the carbon chain of the fatty acid, the weaker the toxicity.
Thormar et al. (1987) showed that lauric and capric acid have antiviral activity: they can disrupt the lipid envelope of viruses at concentrations ten times lower than long-chain fatty acids such as oleic and linoleic acid. It can be concluded that Virgin Coconut Oil supplementation of 3 x 15 ml/day for 6 weeks significantly increases the CD4+ T lymphocyte concentration in HIV patients.
Washback Effect of University Entrance Exams in Applied Mathematics to Social Sciences

Curricular issues of the subject Applied Mathematics to Social Sciences are studied in relation to university entrance exams performed in several Spanish regions between 2009 and 2014. Using quantitative and qualitative analyses, we study how these exams align with the curriculum and how they produce a washback on the curriculum and on teachers' work. Additionally, a questionnaire about teachers' practices has been administered, in order to find out how the exams influence the development of teaching methodology. The main results show that the evaluation is producing a bias on the official curriculum, substantially simplifying the specific orientation that should guide applied mathematics. Furthermore, teachers' practices are influenced by the exams, and teachers usually adapt their teaching methodology to the most frequent types of exams. Slight differences among the teachers also lead to distinguishing two behavioural subgroups. The results can also be useful in an international context, given the importance of standardized exit exams in OECD countries.

Introduction

Entrance exams to university (from now on, the Spanish acronym PAU will be used, from Pruebas de Acceso a la Universidad) are the main route of access to higher education, accounting for over 70% of all new freshmen at Spanish universities [1]. They have been in effect, with slight changes, since 1974. In the most recent model, under the Organic Law of Education [2], the Ministry fixes some minimum requirements in the Baccalaureate curriculum [3], which is later completed at the regional level. Subsequently, the current structure of the PAU [4] was applied for the first time in the academic year 2009/2010. In every Autonomous Community (i.e., region), a committee composed of secondary teachers and university professors designs and implements the exams, choosing the most influential topics in the assessment, the orientation of the questions, the level of mastery required, and other specific characteristics of the exams. Several years after the implementation of the new model, an important number of assessment units have been released, so it is possible to assess what concrete variations they have undergone in the subject Applied Mathematics to Social Sciences 2 (from now on, AMSS2; taught in the second year of Bachillerato, i.e., the last year of upper secondary education), the degree to which the exams fit the official curriculum, and how all this affects teachers' practices.

Theoretical framework

The literature describes the effects of high-stakes testing programs, defined as "tests whose results are used to trigger actions or decisions such as passing or failing a grade, graduating or not, determining teacher or principal merit or assuming responsibility for a failing district by a state agency" ([6]; cited by [7]). Apple [8,9] was one of the first authors to identify how centralized curriculum and assessment deskill teachers. Runté [10] enumerates these deskilling processes, emphasizing that a centralized curriculum limits the range of skills required in making curricular decisions and implies a shift from student-centred to curriculum-centred instruction. Smith [7] classified the effects of high-stakes testing into six categories, of which the present paper is focused only on types 4 (reducing the time available for instruction) and 5 (narrowing curriculum and reducing teachers' ability to adapt, create, or diverge). Jones et al.
[11] conducted a study with North Carolina teachers, concluding that high-stakes testing increases the amount of time that teachers dedicate to practice tests in their lectures. Besides, they confirm that "material that involves higher-order thinking and problem solving often falls by the wayside" [11]. Some authors support the benefits of this type of curriculum-based external exit evaluation. Bishop [12] demonstrates how this type of exam improves students' achievement in international assessments. Häkkinen [13] shows how the Finnish university system would enlarge the proportion of university students if the admission system were based on entrance exams instead of other admission criteria. Ou [14] demonstrates how students barely passing maths exams are more likely to drop out at university in the USA. On the other hand, Jacob [15] showed that the impact is not positive in general, using a quantitative model relating scores on exit exams and university dropout. More specifically, other studies explore the field of mathematics [16][17][18] and also study differences between the official and the real curriculum (in the sense of Perrenoud [19]). The Spanish literature also includes comparative studies about PAU and curriculum-based external exit evaluations or university entrance exams in different countries [20], and some analyses of factors influencing performance in Spanish exams and their predictive capacity [21][22][23]. Several studies have pointed out the influence of PAU mathematics exams on the definition of the real curriculum, on methodologies and, to a lesser extent, on teachers' and students' attitudes [24][25][26][27]. No references are found in the literature considering the curricular particularities of the subject AMSS2. Some recent research has analysed probability and statistical inference problems in PAU exams in Andalusia [28,29], using the onto-semiotic approach. Regarding teaching practices, we follow the Mathematical Knowledge for Teaching (MKT) framework for characterizing teachers' knowledge, developed in the group led by Ball [30]. According to this model, teachers' practices belong to the Pedagogical Content Knowledge (PCK) domain, and, particularly when examining practices in relation to high-stakes exams, we are studying the Knowledge of Content and Curriculum (KCC) and Knowledge of Content and Teaching (KCT). On the other hand, several authors have pointed out the influence of beliefs on practices, but also the difficulty, even impossibility, of measuring beliefs [31] by themselves; that is, "beliefs are referred to as constructed in the same sense that knowledge is constructed" ([31], p.128). Assuming this theoretical perspective, it is also necessary to pay attention to the literature on teachers' practices related to high-stakes exams. Bishop [12] showed how Canadian teachers tend to develop more complex tasks in the classroom in order to prepare for graduation exams. Many studies have tried to predict students' performance based on teachers' characteristics such as experience or qualification (see, for instance, [32][33][34]). Recently, [35] analysed how the type of classroom task and the amount of homework predict the outcomes in the Russian equivalent of the PAU exam (USE). In this research we centre on the washback effect instead of focusing on students' performance. The term washback has been coined in the Educational Sciences to denote the influence of testing on teaching and learning ([36], p.259).
It was coined, and is especially used, in language learning and assessment, but it can also be used in the mathematical context of the present work, since it likewise affects "curriculum materials, teaching methods, feelings and attitudes, learning" ([37], p.7). The concept itself can be used with different scopes (see, for instance, [38]), but in this paper it will be taken as the effect on curriculum and teachers' practices. When studying the effect of the German Abitur exams, similar to PAU, [39] determine that: "It can thus be expected that, in the final years of schooling in particular, teachers align the standards of their in-class assessments with those of the upcoming central examinations." Therefore, our theoretical framework is built by integrating these previous approaches, and it is focused on the curricular analysis of AMSS2 in relation to the PAU exams, and on analysing the washback from PAU on teaching and learning methodologies. In summary: "It is testing, not the official stated curriculum, that is increasingly determining what is taught, how it is taught, what is learned, and how it is learned" ([40], p.88). In addition to these previous sources, from the methodological point of view this work contributes a novel approach to curricular analysis, paying attention to contents and their frequency of appearance in PAU exams, and using a categorization inspired by the conceptual focuses developed in [41]. Since the present work needs a broader classification than that in [41], the notion of curricular unit has been developed; it will be explained in depth in the methodological section.

Methodology

Since two different studies are developed, it is necessary to deal with two different samples.

Sample I: PAU exams

Data from PAU exams have been collected in Andalusia, Asturias, the Basque Country and Madrid, from 2009-2010 to 2013-2014. Data were obtained from the websites of universities or regional education authorities [42][43][44][45]. Asturias was considered because it is the authors' home region, the Basque Country was interesting because of its higher degree of autonomy, whereas Andalusia and Madrid were considered because they have a great number of students taking the exams. Therefore, the sample covers over 40% of the total number of students passing PAU exams every year, which illustrates its significance [46]. Each region has some degrees of freedom in the organization of PAU exams, which implies that the number of exams differs among them. For instance, some regions, such as the Basque Country, use the same exam for the two phases of PAU (both compulsory and optional), whereas others, such as Asturias, use different exams for the two phases. Additionally, some regions release only the exams actually used, while others also release emergency exams (those reserved for extraordinary situations such as blackouts, student accidents, etc.). Nevertheless, every exam has two different options, each option consisting of a set of exercises, and the student has to choose one option and solve it completely; thus, for statistical purposes, we can consider each option as an exam. Summing up, the number of analysed options, and the number of exercises included in each option (usually 4 or 5, though this can vary from one region to another and even from one year to another), are shown in Table 1. Actually, the whole population of exams in these 4 regions is studied, not a sample, since all the released exams have been analysed.
Sample II: teachers

In order to analyse whether teachers' practice in AMSS2 is conditioned by PAU exams, a sample of 51 mathematics teachers in secondary schools has been used. The questionnaire was administered in two different periods: January-April 2013 and September-December 2014. Due to budget constraints, the sample consisted of teachers from Asturias. They were selected by convenience sampling, by contacting those high schools in which future mathematics teachers were doing their internships. The 51 teachers belong to 21 different high schools (including public, private and state-funded). The University of Oviedo and the Regional Ministry of Education, within its program of Educational Research and Innovation, approved this research. This program implies that all results obtained can be used for research purposes, unless there is explicit disagreement from any involved agent; hence, the procedure does not require written consent, since all the research programs are publicized and approved by a research committee and an academic committee. Thus, teachers were informed about the use of the data and gave oral consent to it.

Instruments

Considering the curricular contents and assessment criteria defined by the Ministry and completed by the regional ministries, three observation tables have been designed in order to analyse the content blocks in the AMSS2 curriculum: Algebra, Calculus, and Probability & Statistics. To register the information in an interpretable way, a new tool named Curricular Unit (CU in the following) has been introduced. CU's are defined as basic observation units for contents, procedures and assessment criteria in the curriculum. CU's are constructed as an ad hoc simplification, for our problem, of the so-called conceptual focuses defined in [33]. CU's have been designed by considering both the 'contents' and the 'assessment criteria' paragraphs in the official curriculum. Therefore, CU's arrange contents, procedures and assessment criteria into homogeneous curricular structures, allowing coherent information retrieval and producing meaningful results about the frequency of appearance of each CU. Thus, CU's are not only the tool used to systematize the official curricula; they also allow the observation of appearance frequency in each PAU exam, making it feasible to analyse the whole curriculum. This frequency observation is combined with a qualitative assessment of the type of problems and exercises posed in PAU exams, specifically, whether or not they propose solving contextualised problems related to reality. Tables 2-4 show the defined CU's for each block and their descriptions. This classification of the curriculum into CU's is, from the authors' point of view, an efficient tool for analysing wide curricula, as in the case of AMSS2, and a huge number of exams, as presented here. With respect to the second goal of this paper, a questionnaire has been designed to assess teachers' practices, on the basis of the theoretical framework described above and following Pajares' statement: "beliefs cannot be directly observed or measured but must be inferred from what people say, intend, and do-fundamental prerequisites that educational researchers have seldom followed" ([47], p. 207).
Therefore, the questions ask about teachers' practices regarding teaching methodologies and their relation to PAU exams, the existence of practice tests, the influence of PAU exams on the real curriculum, the type of exercises and problems developed in the classroom, etc. The questionnaire consists of three thematic groups (the questions appear in a later section):

• Group 1: Likert-type questions about the influence of PAU exams on working methodology, selection of topics and assessment methods in the subject AMSS2. Answers are coded as 1 = Totally disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree and 5 = Totally agree.

• Group 2: Open-ended questions about contents, competencies, and suggestions and ideas to improve the current PAU exams.

• Group 3: Questions about personal and professional data.

Procedure and analysis

Data on the CU's have been collected from the PAU exams. Each exercise (out of the total of 628) has been assigned to one or several CU's. The assignment procedure consists of identifying the CU's of every exercise in the whole set of exams (a minimal sketch of this tabulation is given in the code below). Regarding the questionnaire about teachers' practice, it was distributed in paper format among mathematics teachers belonging to the Departments of Mathematics at high schools and having experience in teaching AMSS2. The questionnaire was delivered by the students of the Master's Degree in Teacher Training of the University of Oviedo, during their internship period at high schools. The information collected from the questionnaire has been treated with the statistical package R, applying the following analyses:

• Descriptive analyses of each of the three parts of the questionnaire.

• Quantitative studies of the possible relationship between the teachers' data (group 3) and their answers to the rest of the questions (groups 1 and 2). These were performed with different non-parametric tests (depending on the characteristics of the analysed variables), since the data are not normally distributed, as explained in the corresponding results section. Correlations among different answers have also been considered.

Curricular units of AMSS2 in PAU exams

The following tables show the percentage of appearance of each CU in the exams, with respect to the total number of exams in the considered region. It should be noted that several CU's may appear within each exam; therefore the percentages may sum to more than 100. Table 5 shows clear differences in the frequencies of Algebra CU's. For instance, CU A3 (Problems with matrices) appears in no more than 10% of exams in three regions, but in Asturias it reaches 42.5%. CU A4 appears only in Madrid (in 20.6% of cases there), whereas in the rest of the exams obtaining the matrix rank is never explicitly posed, nor are the meaning of the rank or its relationship with the determinant. Although the national AMSS2 curriculum includes the specific assessment criterion 'To transcribe problems expressed in common language into algebraic language' ([3], p.45476), it has been checked that only in the case of Asturias is an important number of problems about linear equation systems set out in real contexts (CU A3), whereas in Madrid, Andalusia and the Basque Country most of the exercises are expressed without any context.
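The CU frequency tabulation referenced above can be illustrated with a minimal sketch; the exam options and exercise-to-CU assignments below are hypothetical placeholders (in the study, each of the 628 exercises was coded by hand and the analysis was done in R).

```python
# Minimal sketch of the CU frequency tabulation described under
# "Procedure and analysis"; the data here are hypothetical placeholders.
from collections import Counter

# Each exam option is a list of exercises; each exercise is assigned
# to one or several CU's.
options = [
    [["A1", "A3"], ["C5", "C6"], ["S3"], ["S11"]],
    [["A7", "A8"], ["C9", "C10"], ["S4"], ["S7"]],
]

counts = Counter()
for option in options:
    # Count each CU at most once per option, since the reported
    # percentages are per exam.
    counts.update({cu for exercise in option for cu in exercise})

for cu, n in sorted(counts.items()):
    # Because one option touches several CU's, percentages may sum to over 100.
    print(f"{cu}: {100 * n / len(options):.1f}%")
```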
This situation is partially replicated in the case of Linear Programming (CU's A7 and A8). In Asturias all exercises deal with solving social, economic and demographic problems; however, in the rest of the regions the exercises provide the inequalities, asking for their representation and for the maximum or minimum of a given function, but in most cases the mathematical formulation is not contextualised within a real problem, nor is the proper terminology of linear programming used (objective function, feasible region, optimum solution, etc.). The Calculus block is less homogeneous, the most frequent CU's being C5, C6 and C8, as can be seen in Table 6. Analysing by region, Andalusia never included any exercise related to integral calculus (CU C9), but differential calculus is present in all its exams (CU's C5 and C6). Moreover, only 17 out of the 60 exercises have a formulation related to real-life situations (CU C2); the rest are expressed mathematically without any context, and the student is required to study some characteristics of the function (continuity, differentiability, trends, etc.). In almost half of the exercises (24 out of 60), the function is defined piecewise. In Asturias, every exam alternates between an exercise dealing with differential calculus (CU's C5 and C6) and another dealing with integral calculus (CU's C9 and C10). The first is formulated in a real-life context (C7), whereas the integral calculus exercise always follows the same structure: calculating the primitive function and calculating the area under the curve through Barrow's rule (the fundamental theorem of calculus). The functions are usually polynomial. Regarding Madrid, 22 out of 34 exercises combine differential and integral calculus, but the function is never defined in a real context. Finally, the Basque Country does not include exercises related to integral calculus (CU C9); however, most of the analysed exams present problems about differential calculus (CU's C5, C6 and C7) and the study of functions (CU C8). Analysis of a social phenomenon appears in only 8 of the 20 analysed cases. The Probability & Statistics block is the most heterogeneous, as shown in Table 7, with CU's S3 (Conditional Probability) and S4 (Bayes) being the most frequent units. Several CU's have barely appeared in the exams (CU S6, practical applications of the Central Limit Theorem, never appeared). The reason could be the difficulty of dealing with the Central Limit Theorem except when using it to calculate confidence intervals and hypothesis tests in large samples. All exercises in Asturias and the Basque Country are related to real-life problems that need to be translated into statistical or probabilistic language; frequent themes are alcohol consumption, loan granting, and health and work safety. However, in Madrid and Andalusia these contextualized problems are combined with others without any type of context. In Madrid, exercises focused on CU's S7, S8 and S3 are the most frequent, the rest of the CU's having frequencies under 40%. In few exercises do students have to make decisions about different probabilities or probabilistic scenarios, even though decision-making is one of the basic competencies in AMSS2. On the other hand, hypothesis testing is never posed. In Andalusia confidence intervals are much less frequent than in Madrid but, on the other hand, the exams include exercises about hypotheses and sampling, the sampling distribution of the mean, and sample representativeness.
In the Basque Country, problems related to the normal distribution usually appear in the exams (CU S11), as do problems related to the total probability law and Bayes' Theorem (CU's S1 and S4). It is also easy to find exercises about calculating probabilities through the product rule, without further reflection on the meaning of the operation.

Influence of PAU on AMSS2 teachers' practices

A questionnaire was administered to secondary school mathematics teachers in Asturias. This section presents the most important results.

Teachers' personal and professional data. The questionnaire included a list of questions about social and demographic issues, besides other professional questions. The results are listed below (N = 51):

• Sex: 41.2% of the interviewees were men, 54.9% women, and the remaining 3.9% did not specify their sex.

• Age rank: only one interviewee is between 22 and 35 years old, 19 are between 35 and 50 years old, 29 are over 50, and the remaining two did not answer this question. This clearly reflects that the Asturian teaching force is quite aged.

• PAU experience: only 4 people have participated in PAU exams, as graders (members of the committee), since 2010.

• 84.3% of the teachers have taught AMSS2. In addition, 94.1% of the teachers in the sample have also taught in the first year of Baccalaureate (AMSS1).

• Regarding experience teaching AMSS2, 9 people have never given lessons in AMSS2, 19 have less than 5 years of experience, 12 have between 5 and 15 years, and the remaining 10 have more than 15 years of experience. One teacher did not answer this question.

Data from Likert-type questions. As specified in the description of the questionnaire, a second block contained 17 questions related to teaching methodology and its relationship with PAU, rated on a Likert-type scale from 1 to 5. Some questions are posed in a direct-affirmative way and some others in a negative way, to avoid mechanical answering. Results are shown in Table 8.

Results from open questions. The interviewees gave only four answers in this part of the questionnaire:

• In PAU exams students are not assessed on information and communication technologies.

• Students arrive with a narrow basis in Probability & Statistics from compulsory secondary education (the stage prior to Baccalaureate).

• PAU exercises are very repetitive; more open problems should be included, instead of repeatedly solving procedural exercises.

• There is a lack of time to develop the official curriculum due to its extension, and this fact affects methodology, preventing the development of cooperative learning methods or research techniques.

Results from quantitative data analysis. To complete the analysis, the existence of statistical relationships between the teachers' demographic and professional data, their answers to the Likert-type questions, and the open-ended questions has been examined. For every Likert-type question and its possible relationship with demographic and professional variables, the Kruskal-Wallis non-parametric test was used, since it had previously been checked that the answers were not normally distributed. This test allows determining whether differences in age rank or years of teaching experience influence the answers to the Likert-type questions.
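A minimal sketch of the non-parametric pipeline described here and in the following paragraphs (Kruskal-Wallis by age rank, a rank-sum test by sex, correlations among Likert items, and a two-cluster solution) is given below. The paper used R; this Python equivalent assumes scipy, pandas and scikit-learn, runs on synthetic placeholder data standing in for the 51 questionnaires, and uses k-means for the clustering step, since the clustering algorithm is not specified in the text.

```python
# Minimal sketch of the quantitative analyses described in the text;
# the data frame is a synthetic placeholder for the 51 questionnaires.
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
likert = [f"Q{i}" for i in range(1, 18)]  # the 17 Likert-type questions
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], 51),
    "age_rank": rng.choice(["22-35", "35-50", ">50"], 51),
    **{q: rng.integers(1, 6, 51) for q in likert},
})

for q in likert:
    # Kruskal-Wallis: do answer distributions differ across age ranks?
    groups = [g[q].to_numpy() for _, g in df.groupby("age_rank")]
    kw_p = stats.kruskal(*groups).pvalue
    # Rank-sum test (scipy's mannwhitneyu) for the two sex groups.
    m = df.loc[df.sex == "M", q]
    f = df.loc[df.sex == "F", q]
    ws_p = stats.mannwhitneyu(m, f, alternative="two-sided").pvalue
    print(f"{q}: Kruskal-Wallis p = {kw_p:.3f}, rank-sum p = {ws_p:.3f}")

# Correlations among Likert items (Spearman suits ordinal data).
corr = df[likert].corr(method="spearman")

# Two-cluster solution over the Likert answers.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df[likert])
print("cluster sizes:", np.bincount(labels))
```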
The results of the Kruskal-Wallis test show in all cases that there is no significant relationship between any of the answers to the Likert-type questions and age, or between any of them and years of teaching the subject under consideration. Table 9 shows the respective p-values in its last three columns. Regarding the sex variable and each of the Likert-type questions, the non-parametric Wilcoxon test was used to detect differences between medians by sex; p-values are shown in the first column of Table 9. No significant relationship among the variables was found in any case. Therefore, answers to the Likert-type questions do not depend on the sex, age or professional experience of the teachers. Only one p-value is slightly under 0.05, which will be discussed later. The relationship between the open questions and the demographic and professional variables was analysed through Fisher's test (in the case of sex) and Barnard's test (for the rest of the variables). These non-parametric tests were chosen because the results did not have the minimum counts needed to apply the χ2 test, and regrouping them would make no sense. The p-values obtained are shown in Table 10. Again, the results underline the consistency of the teachers' answers, which do not depend on sex, age or years of professional experience. Additionally, the correlations between the answers to the Likert-type questions were analysed in order to detect common behavioural patterns. Table 11 shows the correlation coefficients between the answers to every pair of Likert-type questions. As can be observed, despite the relatively small sample size, significant direct correlations (above 0.35) appear between questions Q1-Q14, Q2-Q4, Q2-Q7, Q2-Q12, Q4-Q7, Q5-Q6, Q5-Q10, Q5-Q13, Q7-Q8, Q7-Q12, Q12-Q15, and Q15-Q17. Important inverse correlations (below -0.30) also appear between questions Q2-Q13, Q3-Q17, Q4-Q10, Q4-Q13, Q7-Q10, Q9-Q17, Q9-Q15, and Q12-Q13. These values are interpreted in more detail in the discussion section. Analysis of the correlations, together with the answers to the Likert-type questions, leads to classifying the questions into several subgroups:

• Considering Q1 and Q14, it is concluded that nearly 60% of teachers deny leaving parts of the curriculum untaught when those parts are not usually asked about in PAU exams (Q1), but, on the other hand, around 50% of them acknowledge that, if PAU did not exist, their teaching would be closer to the curriculum (Q14).

• However, more than 70% of the teachers do not deny that, if forced to give up some parts of the curriculum, they would suppress the topics that appear less often in PAU exams (Q4). Moreover, Q4 has a strong positive correlation with Q2 ('I pay more attention to questions that are usually asked in PAU') and Q7 ('PAU exercises determine my methodology in the classroom'). Additionally, Q7 is positively correlated with Q2, Q8 ('I usually do not use active methodologies in the second course of Baccalaureate') and Q12 ('In my exams I use exercises similar to PAU exercises'). This shows how teachers try to teach the entire curriculum but, when time is lacking, PAU contents are the ones chosen, keeping a methodology and a kind of exercise similar to PAU exams. Therefore, PAU causes a washback on the official curriculum.

• The inverse correlation of Q2, Q4 and Q12 with Q13 gives consistency to the described model, as this question is posed in a negative way.
Therefore, there is a strong relationship between PAU and the day-to-day work in the classroom, establishing a quasi-logical relationship [48] between its preparation and the activity in the classroom.

• Despite the previous answers, Q10 offers a less clear result: about 25% of the interviewees believe that PAU does not influence students' learning. Moreover, this question is negatively correlated with Q4 and Q7. We interpret this as the reluctance of teachers to admit a clear influence of PAU when asked directly, whereas when asked indirectly (Q2, Q3, Q5, Q7, Q9, Q12 or Q13) they admit it with greater clarity. These answers reveal an internal conflict between practices and beliefs, underlining that teachers are careful in the way they enact their beliefs [49,50]. Teachers also consider that their students usually perform in AMSS2 PAU exams similarly to how they perform in the Baccalaureate, which is reinforced by Q17, showing deep support among teachers for the validity of the PAU exams and their capacity to predict the grades obtained by students in the Baccalaureate [23].

• Something similar occurs with Q15: despite the previous answers related to PAU influence, more than two thirds of the interviewees affirm that PAU does not cause significant distortion in students' training. Thus, teachers consider that the contents that can be omitted are not important in students' training, showing a clear ratification of the coherence of the test.

• Finally, there are five questions (Q6, Q8, Q11, Q14 and Q16) where the answer 'Neither agree nor disagree' prevails. These are questions about active teaching-learning methodologies, competency acquisition or the use of real problems, which do not seem to elicit a clear opinion among teachers. In the case of Q14, a slight prevalence of the answer 'Disagree' is observed, which seems to reinforce what was underlined in the previous point. Q16 shows that most of the teachers do not express an opinion about changing the current PAU exam or maintaining it.

Additionally, a cluster analysis has been performed on the data from the Likert-type questions. The results show that two main clusters can be distinguished. In Fig 1 the two main dimensions of the scores are used as axes to plot the teachers' scores, showing both clusters. Nine individuals compose the first cluster, whereas the rest of the sample (42 members) belongs to the second one. The main differences between the two clusters lie in the scores on questions Q1, Q2, Q4, Q7 and Q12, with notably lower scores for cluster 1, and on questions Q9, Q10 and Q13, with higher scores for cluster 1 (the maximum difference being 1.19 points, in Q4). This shows that teachers in cluster 1 have a lower degree of agreement with questions supporting the influence of PAU on their teaching methodology, whereas they have a higher degree of agreement with questions denying that influence. Therefore, we can view cluster 1 as teachers acknowledging less influence of PAU on their practice than those in cluster 2.

Discussion

The first research hypothesis consisted in checking how well PAU exams represent the official curriculum and in detecting possible biases in the questions posed in PAU exams. Thanks to the study of all the exams released in the four considered regions, it can be stated that, although there are significant differences between the exams of the regions, there exist substantial parts of the official curriculum that are omitted or underrepresented in all PAU exams.
It is especially relevant that several CU's have never appeared in the exams of any region, which implies a clear bias. On the other hand, there are some CU's that have a constant presence in all analysed exams. Thus, it is demonstrated that PAU influences AMSS2 by producing a narrowed curriculum. This statement is consistent with the conclusions in [27], but the present study is based on all the released exams in four regions, being the first in the literature on this topic to do so. Another issue observed in the analysis of the exercises is the repetitive structure from one year to another within each region. This fact could have positive effects, as it allows students to organise and plan their learning; on this point, Wall [51] pointed out that "It should not be assumed that a 'good' test will automatically produce good effects in the classroom, or that a 'bad' test will necessarily produce negative ones" ([51], p.505-506). But this repetitive structure also has a negative side, owing to the high predictability of the exam: students may limit their learning process to the solution methodology for this kind of exercise, with scarce deepening, learning only for the test. This works against the understanding and analysis of real-life situations, as stated in the official curriculum. This result is consistent with the analysis in [23] for the particular case of the definite integral in PAU exams, but also with other research in the general field of high-stakes testing. Moreover, this result endorses [10] regarding the deskilling processes of teachers, particularly mathematics teachers with respect to problem solving. On the other hand, the PAU exam design process does not include critical thinking, unlike the Alberta exams ([10], p.173). Consequently, a proper alignment [52] is needed between the new curricular standards oriented to problem solving in real contexts and their assessment in central examinations, even though some tasks, such as critical thinking or decision making in mathematical frameworks, have been singled out as very difficult to evaluate in this type of exam [53]. This result is also consistent with [28], where a lack of contextualized probability problems was pointed out. Additionally, beyond the lack of open problems and the pre-eminence of algorithmic exercises, there are few exercises set in contexts close to the students, or exercises describing phenomena related to the Social Sciences, even though the official AMSS2 curriculum considers these a clear priority. Proposed exercises should stimulate reasoning (Jones et al. [11]), searching for a solution and making a decision about the mathematical problem. Nevertheless, it should be underlined that PAU exams are limited in time to 1.5 hours, which can hinder the inclusion of open problems, which are usually tackled without time limitation. This fact raises the crucial question of whether this type of assessment is the most adequate for such a curriculum [54]. It is also consistent with Kuhn [55], who underlines the difficulties of introducing context-based tasks in central examinations. The management of the mathematical language used to model these situations and to express the solution of an exercise constitutes another important skill for the acquisition of real mathematical competence. In this paper it is shown that, mainly in the Probability & Statistics block, some exercises requiring translation from verbal to mathematical language do appear.
This represents an advance towards greater didactic suitability of the PAU exams, as suggested in [26]. Nevertheless, research on the difficulties and mistakes made by students in certain topics must be taken into account when designing this curriculum-based assessment; for instance, in Statistics, [29] points out the high difficulty of hypothesis testing for Baccalaureate students. Assessment is a crucial point in the teaching-learning process, as it can condition learning processes; moreover, it requires teachers to adapt in order to assess competencies. If extensive and general assessment procedures, such as PAU, do not reflect this paradigm shift from content to competence assessment, the teachers' role will be reduced. The second research hypothesis was to check whether PAU influences AMSS2 teachers' practices and to determine to what extent the real curriculum results from this behaviour. The research confirms that, although there is no explicit acknowledgment of it, PAU exams influence teachers' day-to-day practices and, more importantly for this paper, they confirm the washback on the curriculum. This conclusion is derived from the analysis of the questionnaires. From the authors' point of view, the results derived from the questionnaire answers are noticeably different from those of a similar study in another region [27]. In both cases, teachers state that their work is not conditioned by PAU, but in the questionnaires employed for the present paper a clearly greater utilitarian orientation of the teaching-learning processes in the second year of Baccalaureate towards the preparation of PAU exams can be observed. Besides, the present work contributes the novelty of analysing a curriculum that has a defined orientation towards the use of mathematics in real-life or Social Sciences frameworks; therefore, more attention must be paid to the notion of mathematical competence, understanding mathematics as a tool to solve Social Science problems. Actually, this reinforces the results obtained in [35], which establish that practice tests do not improve performance, whereas reflective homework tasks increase it. Besides, the statistical analysis backs up the homogeneity of answers among teachers, as there are no significant differences due to age, sex or years of professional experience in the subject. Therefore, perceptions of the importance of PAU are strongly consistent among teachers, and the results make clear that PAU produces a washback on the AMSS2 curriculum, not only on contents but also on methodology. Only a few teachers give somewhat different responses, as demonstrated by the cluster analysis performed, so that a small group can be distinguished from the rest by a lower degree of acknowledgement of the washback of PAU. The authors are convinced that this is a field that really needs intervention, in order to reinforce teachers' beliefs about the importance of their practices; as [56] confirmed in the case of Finnish teachers, there is a strong relationship between mathematics teachers' beliefs and their teaching practices. If a correct alignment is attained between the new curriculum and the exams, teachers' practices will produce better effects on students' self-regulated learning, as pointed out in [57].
Moreover, the results open new avenues for future research on teachers' practices regarding the new final Baccalaureate exam that will replace PAU in 2017; in particular, the outcomes of this paper highlight some improvements that can be considered in designing the new exam. This is the moment to attain a proper alignment, being more faithful to the innovations in the AMSS2 curriculum, especially those devoted to solving real (or realistic) problems and to using contextualized mathematics in the field of the Social Sciences. It is also important to take these results into account to design a new, much more balanced exam, not focused mainly on certain curricular units. Looking at the effect on teachers' practices, the results also point out the need to work towards more varied types of exams, enhancing teachers' flexibility to prepare and manage the teaching/learning process, and releasing them from teaching to the test. Finally, it is necessary to note two main limitations of the present study. First, it would be advisable to widen the study of the exams to other regions (although the chosen sample considers four regions that account for an important percentage of the school population in Spain). Second, the sample of teachers who answered the questionnaire could also be widened to other regions and selected by random sampling.
A Fermi Energy-Incorporated Framework for Dealing with the Temperature- and Magnetic Field-Dependent Critical Current Densities of Superconductors and Its Application to Bi-2212

It is well known that the critical current density of a superconductor depends on its size, shape, nature of doping and manner of preparation. It is suggested here that the collective effect of such differences for different samples of the same superconductor is to endow them with different values of the Fermi energy, a single property to which may be attributed the observed variation in their critical current densities. The study reported here extends our earlier work on the generalized BCS equations [Malik, G.P. (2010) Physica B, 405, 3475-3481; Malik, G.P. (2013) 3, 103-110]. We develop here for the first time a framework of microscopic equations that incorporates all of the following parameters of a superconductor: temperature, momentum of Cooper pairs, Fermi energy, applied magnetic field and critical current density. As an application of this framework, we address the different values of the critical current densities of Bi-2212 for non-zero values of temperature and applied magnetic field that have been reported in the literature.

Introduction

The critical current density ($j_c$) of a superconductor (SC) is an important parameter because the greater its value, the greater the practical use to which the SC can be put. Most of the plethora of formulae available in the literature for calculating this parameter may be categorized as following from the framework of either the Londons' equations or the Ginzburg-Landau (GL) equations. The salient features of such approaches are that they are based on diverse criteria, such as the type of SC being dealt with (type I or II) and its geometry [1] [2] [3] [4] [5]. Another limitation of both the Londons' and the GL theories is that they work for the $j_c$ of an SC only when its temperature $T$ is close to its critical temperature $T_c$. In contrast with these phenomenological approaches, there is also available in the literature a smaller body of work dealing with the $j_c$ of an SC on the basis of the microscopic theory of superconductivity. Notable among these is the often-used approach of Kupriyanov and Lukichev [6], based on a simplification of the Eilenberger theory, which in turn is derived from the original Gor'kov theory under the assumption that $\rho_F l \ll 1$, where $\rho_F$ is the electrical resistivity of electrons at the Fermi surface and $l$ their mean free path. An overview of the study reported herein is as follows. Guided by a substantial body of recent work, e.g., [6]-[12], which suggests that low values of the Fermi energy ($E_F$) play a pivotal role in determining the properties of high-$T_c$ SCs, we have been following a course in which $E_F$ is directly incorporated into the generalized BCS equations for the gaps ($\Delta$s) and the $T_c$s of both elemental and composite SCs [13] [14]. This is a departure from the usual practice, since these parameters are conventionally calculated via equations independent of $E_F$ because of the assumption that $E_F \gg k\theta$, where $k$ is the Boltzmann constant and $\theta$ the Debye temperature of the SC. As a supplement to this framework, we reported in [15] and [16] the results of an exercise that includes an $E_F$-incorporated equation for $j_c$, leading to a unified framework for dealing with the $\Delta$s, $T_c$s and $j_c$s of both elemental and composite SCs at $T = 0$ and $H = 0$, where $H$ is the applied external magnetic field.
Since the $j_c$-values of an SC are generally reported for non-zero values of both $T$ and $H$, we present here a framework to deal with such a situation. In essence, the present work is concerned with a generalization of: 1) the work reported in [17], to include $E_F$ in the pairing equations that already incorporate $T$ and $H$, and 2) the work reported in [18], to include $H$ in the equations that already incorporate $T$, $E_F$ and the momentum $P$ of the pairs. We believe that presented herein is the first attempt to bring the $T$- and $H$-dependent $j_c$ of an SC under the purview of an $E_F$-incorporated microscopic theory of superconductivity. The paper is organized as follows. In Section 2, we show how, without making the usual approximation $E_F \gg k\theta$, the dynamical equation for a Cooper pair interacting via the model BCS interaction can be generalized to include $T$ and $H$, via the Matsubara prescription and the Landau quantization scheme, respectively. We thus obtain the pairing equation incorporating $E_F$, $T$ and $H$ corresponding to the one-phonon exchange mechanism (1PEM) and $P = 0$, which is appropriate for dealing with the situation when $j = 0$. In order to deal with the situation when $j \neq 0$, we need to do away with the $P = 0$ constraint of this section. This is done in Section 3, assuming that the dimensionless interaction parameter remains unchanged as we move from the centre-of-mass frame to the lab frame. Based on the equations thus obtained, in Section 4 we derive expressions for the density of the superconducting electrons $n_s$, their critical velocity $v_c$, $j_c$ and $s \equiv m^*/m_e$, where $m^*$ is the effective mass of an electron and $m_e$ the free electron mass. These equations are then applied to the empirical data for Bi-2212. In the pairing equation, $(-V)$, which is non-zero only in a narrow region $\pm k\theta$ around the Fermi surface, is the model BCS interaction parameter; $\beta = 1/kT$, $k$ is the Boltzmann constant, $\theta$ the Debye temperature of the ions, and $W$ is one-half the binding energy of a pair, which is to be identified with $\Delta$. The units employed are eV and natural units ($\hbar = c = 1$), generally; however, for the convenience of the reader, in all the final equations actually employed in our calculations the factors of $\hbar$ and $c$ have been made explicit. If the field $H$ is applied in the z-direction, then Step (2) consists of making the Landau-quantization substitutions in (1), where $\Omega_0(H)$ is the cyclotron frequency corresponding to the free electron mass, and we assume that $m^* = m_e$ when $j_c = 0$. The transverse components of momentum are thus quantized into Landau levels, and we obtain a pairing equation in which the limits $L$ and $n_m$ are usually taken to be $\infty$. Since the energy of an electron in our problem is constrained to lie in a narrow shell around the Fermi surface, we fix $L$ and $n_m$ by appealing to the law of equipartition of energy and split the region in which $V \neq 0$ accordingly. Putting $W = 0$ and rewriting (5) in terms of $T$ and a dimensionless variable, we obtain Equation (7), where $E_F$ has been relabelled $E_{F1}$ in order to distinguish it from the $E_F$ that occurs in the equations when $P \neq 0$. The integrand in (7) is manifestly dimensionless. In Appendix A, we provide the necessary conversion factors, which show that $\lambda_m$ too is dimensionless and enable one to employ in our framework the more familiar BCS units, i.e., eV·cm³ for $V$ and gauss for $H$.
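The explicit forms of the substitutions (2) were lost in extraction; for reference, the standard Landau-quantization prescription for a field $H$ along $z$ takes the following form in Gaussian units. This is a sketch of the textbook expressions, not a verbatim reproduction of the paper's equations.

```latex
% Standard Landau quantization of the transverse momenta (Gaussian units);
% a sketch of textbook expressions, not the paper's Equation (2) verbatim.
\frac{p_x^2 + p_y^2}{2m_e} \;\longrightarrow\; \Bigl(n + \tfrac{1}{2}\Bigr)\hbar\,\Omega_0(H),
\qquad \Omega_0(H) = \frac{eH}{m_e c}, \qquad n = 0, 1, 2, \ldots,

\int \frac{\mathrm{d}^3 p}{(2\pi\hbar)^3} \;\longrightarrow\;
\frac{eH}{(2\pi\hbar)^2\, c} \sum_{n=0}^{n_m} \int \mathrm{d}p_z .
```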
Pairing Equation Incorporating Fermi Energy, Temperature and Applied Field When the Momentum of Cooper Pairs Is Non-Zero

It is a tenet of the BCS theory that the same interaction parameter λ occurs in both the equation for {T = T_c, Δ = 0} and the equation for {T = 0, Δ_0 ≠ 0}, where Δ_0 is the gap at T = 0. Similarly, we assume here that λ_m remains unchanged when we go over from the P = 0 to the P ≠ 0 equations. We now draw attention to the fact that in [15], where we dealt with the Δ_0s and j_cs of various SCs, it was assumed that E_F too has the same value in both the equation for Δ_0 (H = 0) and the equation for P_c/j_c, a plausible justification for which being that we were dealing with the j_cs too at T = 0 and H = 0. For the sake of generality, we now assume that the value of E_F when P ≠ 0 is different from its value when P = 0. Hence, E_F is labelled as E_F2 in the present section.

The T- and P-dependent equation for pairing in the 1PEM scenario is given in [18] as (9), together with the relation (10); the total energy of a pair is E = 2E_F + 2W. When P = 0, (10) fixes the lower and upper limits of (9). When P ≠ 0, the lower limit in (9) follows from (11), where P_c is the critical momentum corresponding to W = 0 and a term in p has been neglected (as justified in [18]). Note that (11) automatically ensures the required bound. Appealing to the law of equipartition of energy as earlier, we now assume a condition which becomes the lower limit in (9). Working out the upper limit similarly, we have the limits of (9) as in (13). Employing substitutions (2) in order to introduce H into (9), we obtain the H-dependent forms of the equation and its limits. We note that the factors of (1/3) in the limits of the integral and (2/3) in the upper limit of the sum occur because, appealing to the equipartition law, we have split the inequalities (13) accordingly. With λ_m(V, H_c) given by (8), we finally obtain the desired equation incorporating T, H and E_F2 when P ≠ 0 in terms of y, namely (19).

Although the multiplier r of y in the above equations is unity for an elemental SC, it has been introduced for later convenience when we deal with a composite SC characterized by multiple θs. Taking the example of Bi-2212, we note that if the Debye temperature of the SC is θ_0 and pairing takes place via the Ca or the Sr ions, then r_1 = θ_Ca/θ_0 = 1 (because θ_Ca = θ_0) for the former case and r_2 = θ_Sr/θ_0 ≠ 1 (because θ_Sr differs from θ_0) for the latter.

Equations for the Density of Superconducting Electrons, their Critical Velocity, Effective Mass, and Critical Current Density

If we assume that (7), corresponding to a set of {E_F1, T_c1, H_c1, j_c1 = 0}-values, and (19), corresponding to a set of {E_F2, T_c2, H_c2, j_c2 ≠ 0}-values, have provided a value of y, then we are enabled to obtain the equations for the density of superconducting electrons n_s, P_c, the critical velocity v_c and j_c, where the requisite factors of ħ and c have been inserted for the convenience of the reader. For the data in (28), we need to employ (7), which corresponds to j_c = 0, whereas for the data in (29) and (30), and in all the four equations following (19), we need to have [15] r = θ_Ca/θ_0, θ_Sr/θ_0 and θ_Bi/θ_0 for pairing via the Ca, Sr and Bi ions, respectively. Following our study dealing with the T_cs, Δ_0s and j_0s of Bi-2212 reported in [20], it seems imperative that we generalize (7) and (19) to the case of pairing via the 2PEM scenario.
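For orientation, the quantities just listed are tied together, in the standard microscopic treatment, by the elementary transport relation below; this generic form is given for the reader's convenience and is not claimed to be identical, factor for factor, to the paper's own Equations (24)-(27):

\[
j_c = n_s \, e \, v_c, \qquad v_c = \frac{P_c}{2m^*} = \frac{P_c}{2\,s\,m_e},
\]

where the factor of 2 reflects the fact that P_c is the momentum of a pair carried by two electrons of effective mass m* = s m_e.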
To this end, it is convenient to define the composite quantities entering (33). For the θ-values in (31), we note that a difficulty with the solution of (33) with inputs from (28) will arise only if either of the so-obtained λ_m s exceeds the Bogoliubov limit of 0.5, because beyond this value the system becomes unstable.

Pairing Equation in the 2PEM Scenario for Non-Zero Values of Temperature, Applied Field and Critical Current Density

If we obtain the generalized version of (19) to deal with pairing in the 2PEM scenario following the same procedure as above, then with the inputs from (29) and (30) we have two equations besides (33). These three equations are not sufficient to deal with at least the six unknowns in our problem: λ_m1, λ_m2, E_F1, E_F2, E_F3 and y. Therefore, ab-initio calculation of the j_cs via this approach will involve making ad hoc assumptions about some of the unknowns. To avoid such a situation, we adopt the strategy of incorporating in the generalized version of (19) the empirical j_c-values noted in (29).

Solutions in the Scenario of 1PEM Due to Either of the Ca, Bi and the Sr Ions

In order to unravel the empirical features of Bi-2212 noted in (28), (29) and (30) in the above framework, we proceed as follows:

1) We first deal with the data in (28) via (33) where, in the latter equation, E_F1 is an independent variable. If we solve this equation for different assumed values of E_F1 = ρkθ_0 and λ_m2 = 0, we obtain the corresponding values of λ_m1 for pairing via the Ca ions. We thus find that for ρ = 10, λ_m1 = 0.31283, and for all values of ρ > 10, λ_m1 has the same value up to four significant digits. For the same value of ρ and λ_m1 = 0, the corresponding values of λ_m2(Bi) and λ_m2(Sr) via (33) are 0.22043 and 0.20778, respectively. Similarly obtained values of λ_m1(Ca), λ_m2(Bi) and λ_m2(Sr) for some select values of ρ ≤ 10 are given in Table 1.

2) Since none of the values of λ_m in Table 1 exceeds the Bogoliubov limit of 0.5, we now employ (39) also in the 1PEM scenario in order to determine E_F2 corresponding to the data in (29). To this end, for ρ = 10 and for pairing via the Ca ions, we use in (39) the values fixed above, which leaves out y, which is yet to be specified. Since y too is an independent variable, we need to solve (39) for a range of values of ry > 1; that ry must be greater than unity follows from (34). We find that for each such value of y, the solution of the transcendental Equation (39) yields, in general, multiple roots, each of which corresponds not only to j_c2 (which is an in-built feature of our formalism), but also, to the accuracy with which they are quoted in Table 2 and Table 3, to the same values for n_s and v_c via (24) and (25), respectively. For this reason, corresponding to each y, given in Table 2 are only the greatest roots of (39), which are found by first plotting (39) against q to determine the range outside which the equation does not have any roots. In the context of the critical current densities, such a plot, given in Figure 1, seems to be unusual and will be discussed below. The largest value of q corresponding to any value of y at which the function being plotted crosses zero is then found more accurately using numerical root-finding methods and is given in Table 2, where similar results for ρ = 5 and 1 for both pairings via the Ca and the Sr ions are also given. The parameters whose values are different for different roots corresponding to the same value of y are n_u4, s and E_F2, where n_u4 and s are determined via (35) and (27), respectively, and E_F2 = qE_F1.
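The plot-then-refine procedure just described can be sketched in a few lines of code. In the sketch below, g(q) is a hypothetical stand-in for the left-hand side of (39) at fixed y (it is not supplied here), and bracketed_roots returns all roots found on a scan interval, the largest of which plays the role of the greatest root quoted in Table 2:

from typing import Callable, List

def bracketed_roots(g: Callable[[float], float],
                    q_min: float, q_max: float,
                    n_scan: int = 10_000,
                    tol: float = 1e-12) -> List[float]:
    """Scan g on [q_min, q_max], then bisect every sign change."""
    roots = []
    h = (q_max - q_min) / n_scan
    q_lo, g_lo = q_min, g(q_min)
    for i in range(1, n_scan + 1):
        q_hi = q_min + i * h
        g_hi = g(q_hi)
        if g_lo * g_hi < 0:  # sign change: candidate root inside [q_lo, q_hi]
            a, b, ga = q_lo, q_hi, g_lo
            while b - a > tol:
                m = 0.5 * (a + b)
                gm = g(m)
                if ga * gm <= 0:   # root lies in the left half
                    b = m
                else:              # root lies in the right half
                    a, ga = m, gm
            m = 0.5 * (a + b)
            if abs(g(m)) < 1e-6:   # keep genuine zeros only
                roots.append(m)
        q_lo, g_lo = q_hi, g_hi
    return roots

# Hypothetical usage: q_roots = bracketed_roots(g, 1.0, 50.0);
# max(q_roots) then gives the greatest root, i.e. E_F2 = q * E_F1.

The explicit rejection test on |g(m)| matters here because, as discussed below, the plotted function also crosses zero discontinuously at the vertical limbs of its saw-tooth pattern, and such crossings are not roots.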
Of these, the values of s dictate the range of y relevant for the data under consideration. To elaborate, for y ≤ 2.40, it is found that s ≥ 37. Since, contrary to the known features of the SC under consideration, such values of s would put it in the category of heavy-fermion SCs [13], they must be excluded, and so we set the lower limit of y at a value which leads to s ≈ 10. The value of s decreases as y is progressively increased. This feature enables us to set the upper limit on the value of y as one for which s ≈ 1.

3) For the data in (30), we also give in Table 3 the results that follow when the operative 1PEM is due to the Sr ions.

4) Pairing in Bi-2212 can of course also take place via the Bi ions in the 1PEM scenario. From the results obtained via the Ca and the Sr ions and the value of θ_Bi, which lies between θ_Ca and θ_Sr [vide (31)], one would expect that λ_m(Bi) for any value of ρ in either of the two Tables should lie between the corresponding values of λ_m(Ca) and λ_m(Sr) and that, but for this change, the overall results should be substantially similar to those already obtained. This is indeed found to be so, as can be seen from the results corresponding to T = 4.2 K, H_c = 12 × 10⁴ G and j_c = 2.4 × 10⁵ A/cm² as an example, where the units for n_s, v_c and E_F are cm⁻³, cm/s and eV, respectively.

Discussion

To give a perspective of the approach followed in this paper, it is pertinent to point out that, conventionally, the j_c of an SC is determined via one or the other critical state models; for characteristic equations of nine critical state models, see [19], p. 61. It is postulated in such models that for low applied fields or currents the outer part of the sample is in the so-called critical state, which is characterized by particular values of j_c and H, and that the interior of the SC is shielded from these fields and currents. For Bi-2212, the most commonly employed model is Bean's model, where its j_c is determined via the geometry of the sample and the magnetization width ΔM of the M(H) hysteresis loop; see, e.g., [21], which gives values of j_c(H) of the melt-quenched and the non-melted samples of the SC at 5 K. Following the conventional approach, the significantly different j_c(H)-values of the samples in [21] are attributed to material properties of the samples such as their cell parameters, alignment and inter-connectivity of the grains and the grain boundaries. It is hence seen that the approach followed in our paper differs radically from the conventional approach.

A remark about the operator Re in (33): the function G_1(θ_1, T_c1, H_c1, E_F1) in this equation becomes pure imaginary over part of its region of integration. This is a situation which also occurs in several other problems, e.g., while dealing with heavy-fermion SCs [13] and the BCS-BEC crossover [22]. In order then to obtain real solutions, one alternative is to manually shift the lower limit of integration. This becomes cumbersome if one is simultaneously dealing with more than one such equation. The operator Re provides a much simpler, one-step alternative, as was also noted in the context of Fe-based SCs [23]. This remark also applies to (39).

As was noted above, the solution of (39) leads to multiple roots for any value of y for which s falls in the range of our interest. These are shown in Figure 1 for a particular value of y.
A notable feature of this figure is its saw-tooth appearance (a series of "Vs"), which is attributable to a combination of the fluctuations in Fermi energy and the floor function employed in our formalism. The unmarked vertical limbs of the Vs are discontinuities which occur when the summation index n changes discretely from one integral value to another due to the floor function. We note that no root is found even when such limbs cross zero, as also that the saw-tooth behavior is not seen when (33) is solved despite the fact that it too employs the floor function, which is so because it is solved for a fixed value of E_F.

Another feature of (39) is that as y is progressively increased, the number of its roots keeps decreasing; as an example, we note that as one goes from y = 2.60 to 3.10 in the upper half of Table 2, the number of roots changes from 7 to 2. This is reminiscent of Melde's experiment with stationary waves on a taut string clamped at both ends, which are therefore nodal points. In between these, the string displays a varying number of nodes depending on the frequency of the tuning fork with which it is induced to vibrate. One is thus led to surmise that: 1) (39) embodies content that leads to behavior akin to a Melde's string, 2) the variation of y in the present case is equivalent to changing the frequency of the tuning fork, and 3) the values of q at which (39) vanishes (i.e., the roots of the equation) are equivalent to the nodes of the Melde's string.

Regarding the employment of Ω_0 (which corresponds to the free electron mass) in obtaining solutions of (33), we note that there is no loss of generality in making this assumption in so far as the solutions of this equation are employed to shed light on why the same SC at the same values of T and H sustains different values of j_c, which is the chief objective of this study. This assertion follows from the results for the different pairs of {ρ, λ_m} in both Table 2 and Table 3.

In so far as the n_u values given in Table 2 and Table 3 are concerned, we recall from [17] that the radius of the largest Landau orbit, r_n(H), for the elemental SCs Hg, In, Tl and Sn, for example, corresponds to the values 1062, 1900, 2201 and 3172, respectively. Since r_n(H), and therefore n_u, may also be regarded as a measure of the coherence length of the SC, the low values of the latter in the two Tables signify that Bi-2212 has a much smaller coherence length than the elemental SCs, which is in accord with the known empirical facts.

If, for both the values of j_c, i.e., j_c2 = 2.4 × 10⁵ (for an Ag-sheathed tape) and j_c3 = 1.0 × 10⁶ A/cm² (for a multilayer tape), the SC is assumed to have nearly the same value of s, which is unlikely, then the results in Table 2 and Table 3 fix definite relations between the corresponding parameters of the two tapes, in particular E_F(j_c2) < E_F(j_c3). All of these results remain valid even if s(j_c2) ≈ 10 and s(j_c3) ≈ 1. However, if s(j_c2) ≈ 1 and s(j_c3) ≈ 10 then, while the first two results still hold, we have E_F(j_c2) > E_F(j_c3). In so far as the absolute values of E_F in Table 2 and Table 3 are concerned, the lowest among them are of the order of 0.3 meV or less; these are interpretable as being near the nodes or the node lines on the Fermi surface.

Conclusions

1) Based on the microscopic BSE customized to deal with superconductivity, we have given here the derivation of equations that incorporate E_F (equivalently, the chemical potential μ), T, H, and P.

2) Among the main results of this paper are (33) and (39).
The former generalizes the equation given in [17] that incorporated T and H, but not E_F; the latter generalizes the equation given in [18] that incorporated T, E_F and P, but not H.

3) Another notable result of the approach followed here is that it sheds light on why the cuprate that we have dealt with has a much smaller coherence length than elemental SCs.

4) A novel aspect of our work is that it incorporates j_c, which is a sample-specific property, into the dynamical equations that govern pairing in the SC.

5) As is well known, the j_c of an SC depends on several factors such as its size, shape, and how it is doped and prepared. Based on the premise that the E_F of an SC subsumes all of these features, we have given here a framework for testing it, and applied it for a detailed study of Bi-2212.

6) The upshot of the present work is that for greater substantiation of the above premise, there is a need not only to monitor via experiment, insofar as it may be feasible, the following parameters for Bi-2212: n_u4, n_s, s and v_c, but also to carry out similar studies for other SCs. Unfortunately, none of the conventionally employed critical state models sheds light on any of these parameters.

We conclude by noting that a detailed exposition of most of the concepts of the BSE-based approach employed in this paper can be found in [24].
Determination of Priority Areas for Dengue Control Actions

OBJECTIVE: To identify areas at risk of dengue transmission by means of cluster analysis.

METHODS: A cluster analysis in which the primary analysis units were the 48 districts of the municipality of Niterói, Southeastern Brazil, was conducted. The districts were grouped into six strata according to sociodemographic conditions, using the k-means cluster analysis method. After defining the strata, the incidence of dengue was calculated for each stratum in relation to the periods studied.

RESULTS: The analysis on the incidence showed that the rates for the last three study periods were greatest in stratum 2.1, which had the worst sanitation infrastructure conditions and high population increases, and in stratum 3.1, which had the highest percentage of shantytowns. Stratum 1.2 presented the lowest incidence and the best sanitation and income indicators, along with small increases in population and a low proportion of shantytowns. The incidence rates in 2001 and 2002 were high in most strata except for stratum 1.2, which had the districts with the least heterogeneity in relation to the indicators used. In 2001, the strata presented high rates of incidence when group immunity had supposedly become established for serotype 1, thus expressing the transmission strength of this agent.

CONCLUSIONS: The cluster analysis technique made it possible to recognize priority areas. It indicated areas where the dengue control and surveillance actions needed to be improved, along with structural improvements that influenced the living conditions and health of the municipality's population.

INTRODUCTION

A variety of methodologies have been used to characterize the epidemiology of endemic diseases, with the aim of formulating control strategies. For prevention and control measures to be effective, it is very important that the methodology that best highlights the environmental and social processes influencing disease transmission patterns should be used. Thus, stratification of the space according to socioenvironmental indicators, with the addition of information relating to the level of endemicity of the area, is an important instrument for supporting the planning of control actions.4,7

The distribution of the risk of exposure to the dengue virus, in relation to different social and economic situations, is still an issue presenting contradictions. It has been correlated both with areas in which populations live under precarious conditions,6 and with populations living in more favorable situations.13,14

The demographic and socioeconomic characteristics of territorial units need to be known in order to analyze different health situations, along with the characteristics of their population groups.a All of these elements characterize territories and form the basis for the territorial stratification that is applied for health surveillance.3 This proposal, which is contained within the new model for health surveillance, is justified by the worsening of the social inequalities that are associated with spatial segregation. Such segregation restricts these populations' access to better living conditions.9
The conditions in the city of Niterói, state of Rio de Janeiro, Southeastern Brazil, have favored transmission of the dengue virus. Simultaneous circulation of serotypes 1 and 2 caused a major epidemic in 1990-1991. Two other large epidemics have occurred in this municipality: one in 2001, with the reintroduction of serotype 1, and the other in 2002, with the introduction of serotype 3.b

A large proportion of ecological studies within the field of epidemiology have used political-administrative areas representing slices of geographical space, in order to detect transmission patterns.2 However, these areas do not always represent the reality involved in the epidemiological dynamics of the disease. In this light, territorial stratification makes it possible to determine the spatial size of events by means of aggregation according to the homogeneity of the characteristics, with disaggregation of territories presenting heterogeneity.3 For this, studies have used cluster analysis to seek spatial patterns of events and characterize homogeneous areas.3,6

Analysis on the role of human populations and infestations by Aedes aegypti in each territory, taking into consideration the socioeconomic conditions and the environment within which they interact, may contribute towards identifying the role of each agent in maintaining the circulation of the virus. This may add elements to the debate on prevention strategies.12 Therefore, the aim of the present study was to characterize areas at risk of dengue transmission by means of cluster analysis according to socioeconomic and demographic indicators.

METHODS

This study was developed in the municipality of Niterói, in the metropolitan region of the state of Rio de Janeiro. Niterói is considered to be a medium-sized municipality, covering an area of 131.5 km² and with a demographic density of 3487.43 inhabitants/km². The population was estimated by the Instituto Brasileiro de Geografia e Estatísticac (IBGE - Brazilian Institute for Geography and Statistics) to be 475,000 inhabitants in 2007. Around 78% of households were connected to the general water supply network; around 70% were connected to the sewage system; and 81% were served by garbage collection. The main economic activity was in the tertiary service provision sector.d The municipality is in third place in the national ranking of the human development index (HDI) and in first place in the state ranking.

An ecological study was conducted on clustered data, taking the 48 districts of the municipality as the primary units. Subsequently, these districts were grouped according to social and demographic conditions into six areas (strata).

The non-hierarchical k-means method was used. The aim of this method is to classify the units into a number of clusters that is defined previously. The technique starts from these k clusters, and moves the units between the clusters in order to maximize the variability between the clusters, while minimizing it within the clusters. Through this, results of greater significance can be obtained in analysis of variance.1
A correlation matrix on the 13 indicators constructed based on the variables in the demographic census of 2000e was generated. Two variables that presented strong correlations (Pearson correlation coefficient greater than 0.9) with other variables were removed from the analysis. These were the proportion of heads of permanent private households with up to three years of schooling, which presented a strong correlation with the variable of the proportion of heads of permanent private households with up to two minimum monthly salaries; and the variable of demographic density, which presented a strong correlation with the population density in terms of the internal area.

The stratification was based on analysis of 11 indicators: proportion of permanent private households connected to the water supply network (WATER); proportion of permanent private households with garbage collection carried out by the cleansing services (GARBAGE); proportion of permanent private households connected to the sewage system (SEWAGE); proportion of heads of permanent private households with income of up to two minimum monthly salaries (UPTO2SAL); proportion of permanent private households of apartment type (APART); population density in terms of internal area per km² (2001) (DENSINTERN); proportion of households in shantytowns (SHANTY); proportion of permanent private households with more than eight residents (8+RES); proportion of residents over 70 years of age (70+YEARS); percentage of internal area greater than the level of 40 m² (2001) (INTERN+40), obtained through satellite image classification for the years 1986 and 2001; and population increase (INCR). All the variables were normalized.

The variables were chosen with the aim of covering factors that have been described as social macrodeterminants of dengue.a The proportion of permanent private households with more than eight residents and the proportion of residents over 70 years of age were used as indicators of living conditions. The percentage of the internal area greater than the level of 40 m² and the population density in terms of internal area were used by Silveirab (2005) as indicators of altitude and urbanization, respectively.

The stratification was done in two stages. In the first stage, cluster analyses with three strata were performed, in which all the variables contributed importantly to the general model. In the second stage, analysis using two strata was performed for each stratum obtained in the first stage, thus totaling a set of six strata. The reason for performing the second stage of cluster analysis was that differences were observed within important variables in each stratum of the first stage (Table 1). In addition, some districts belonging to the same stratum still appeared to be very heterogeneous in relation to the variables analyzed. A sketch of this two-stage procedure is given below.

The strata were chosen based on tests and retests, and the final six strata were the ones that best represented the event under evaluation. After defining the strata, the dengue incidence rate was calculated per stratum for four periods: I, 1998 to 2000, i.e. the endemic period after the introduction of serotype 2 and before the epidemic of serotype 1; II, 2001, i.e. the epidemic period when serotype 1 was reintroduced; III, 2002, i.e. the epidemic period when serotype 3 was introduced; and IV, 2003 to 2006, i.e. the endemic period with circulation of serotype 3 after the epidemic of its introduction.
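A minimal sketch of this two-stage procedure, covering the removal of strongly correlated indicators (Pearson coefficient greater than 0.9), the normalization of the variables and the two passes of k-means, is given below. The data frame of district indicators is a hypothetical stand-in, and the sketch uses scikit-learn rather than the Statistica 6.0 package actually employed in the study:

import pandas as pd
from sklearn.cluster import KMeans

def two_stage_strata(df: pd.DataFrame, corr_cut: float = 0.9,
                     seed: int = 0) -> pd.Series:
    """Two-stage k-means stratification of district indicators.

    df: one row per district, one column per indicator
        (e.g. WATER, GARBAGE, SEWAGE, ... as in the text).
    Returns labels such as '2.1' for each district.
    """
    # Drop one variable from each strongly correlated pair.
    corr = df.corr().abs()
    drop = set()
    cols = list(df.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            if a not in drop and b not in drop and corr.loc[a, b] > corr_cut:
                drop.add(b)
    x = df.drop(columns=sorted(drop))

    # Normalize all remaining indicators (z-scores).
    z = (x - x.mean()) / x.std(ddof=0)

    # Stage 1: three strata over all districts.
    stage1 = KMeans(n_clusters=3, n_init=10, random_state=seed).fit_predict(z)

    # Stage 2: split each first-stage stratum into two sub-strata.
    labels = pd.Series(index=df.index, dtype=object)
    for s in range(3):
        mask = stage1 == s
        sub = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(z.loc[mask])
        labels.loc[mask] = [f"{s + 1}.{t + 1}" for t in sub]
    return labels

# Hypothetical usage: strata = two_stage_strata(indicators), after which
# incidence per stratum and period = 100,000 * cases / population.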
The population data were obtained from the demographic census of 2000 and from population estimates for the years between censuses.c To obtain the number of dengue cases, duplicate records were excluded and only cases with clinical-epidemiological confirmation were considered. These data were obtained from the National Notifiable Diseases System (Sistema Nacional de Agravos de Notificação - SINAN) at the Municipal Health Department of Niterói. Analysis of variance was performed to investigate the statistical significance of any differences in incidence found between the strata and analysis periods.

The Statistica 6.0 and MapInfo 6.0 software were used in the analysis.

This study was approved by the Research Ethics Committee of the Escola Nacional de Saúde Pública (ENSP).

RESULTS

The main characteristics of the three strata of districts according to social and demographic conditions that were generated in the first stage showed that stratum 1 was formed by districts with low population growth. Its residents had the best income levels, greatest longevity, best conditions of sanitation service infrastructure, highest proportion of housing of apartment type (located in areas of high population density) and smallest proportion of substandard clusters. Stratum 2 was characterized by districts with high population growth in which the residents had intermediate income levels and the lowest conditions of sanitation service infrastructure. The housing ranged in type from simple houses to the most sophisticated housing in condominiums with low population density, and with a small proportion of shantytowns. Stratum 3 was composed of districts with low population growth in which the residents had low income, lower longevity, intermediate conditions of sanitation service infrastructure and the lowest percentage of garbage collection by the cleansing services. The housing consisted of simple houses located in areas of medium population density and with the presence of substandard housing (Figure 1a and Table 2).

The highest coefficients of dengue incidence for the four periods were found in stratum 2, and these were respectively 1.34, 2.27, 1.24 and 1.72 times greater than the values calculated for the whole municipality of Niterói (Table 3).

Among the main characteristics of the six strata of districts according to social and demographic conditions that were generated in the second stage, stratum 1.2 was composed of districts presenting indicators of income, proportion of homes of apartment type, proportion of garbage collection and population density in terms of internal area that were greater than those of stratum 1.1; stratum 2.1 was composed of districts with population growth and income that were greater than those of stratum 2.2, along with lower infrastructure levels of general water supply and lower proportions of internal area greater than the level of 40 m²; stratum 3.1 was composed of districts with indicators of the proportion of shantytowns and sanitary service infrastructure conditions that were greater than those of stratum 3.2, along with a lower proportion of internal area greater than the level of 40 m² (Figure 1b and Table 2).
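The analysis of variance on the stratum-by-period incidence rates, whose results are reported in the next paragraphs, can be sketched along the following lines; the 24 incidence values below are invented placeholders, not the study's data, and statsmodels stands in for the Statistica 6.0 package actually used:

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical stratum-by-period incidence table (cases per
# 100,000 inhabitants); all 24 values invented for illustration.
strata = ["1.1", "1.2", "2.1", "2.2", "3.1", "3.2"]
periods = ["I", "II", "III", "IV"]
incidence = [120, 40, 310, 150, 260, 180,      # period I
             800, 150, 2100, 700, 1500, 900,   # period II
             900, 200, 1700, 800, 1600, 1000,  # period III
             300, 60, 650, 280, 520, 350]      # period IV

df = pd.DataFrame({
    "stratum": strata * len(periods),
    "period": [p for p in periods for _ in strata],
    "incidence": incidence,
})

# Two-way ANOVA without replication: incidence explained by
# stratum and period effects.
model = smf.ols("incidence ~ C(stratum) + C(period)", data=df).fit()
print(anova_lm(model))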
Comparing the coefficients of dengue incidence calculated in the first stage (three strata) with those calculated in the second stage (six strata), it was observed that the rates in the second stage were more differentiated and presented greater variation between strata (higher coefficient of variation). The results from the analysis of variance showed that the differences in incidence between the periods (p = 0.00) and between the strata (p = 0.06) were significant (p < 0.10). Analysis on the incidence according to the indicators used showed that the rates for the last three study periods were greater in the strata with the lowest conditions of sanitary service infrastructure and high population growth (stratum 2.1) and a high percentage of shantytowns (stratum 3.1). Furthermore, the greater rate of dengue incidence found in stratum 2 in the first stage was due mainly to this expanding area of the urban periphery (stratum 2.1). Throughout the study period, stratum 1.2 presented the lowest incidence rate and the best sanitation and income indicators, along with small population increases and the smallest proportion of shantytowns. Stratum 2.2 presented the lowest risk of dengue transmission during the epidemic years. The incidence rates in 2002 were high in most strata, except for stratum 1.2. In addition, there was a substantial increase in incidence in 2002 in most strata, except in 2.1 (Figure 2 and Table 3).

DISCUSSION

In the present study, the cluster level used was the stratum, and the most homogeneous of these was stratum 1.2 (which was formed by only three districts). All the other strata included districts with a certain degree of heterogeneity of socioeconomic and demographic characteristics.

According to Machado et al8 (2007), dengue cases occur mainly in heterogeneous areas, defined as specific geographical spaces in which a diversity of socioeconomic strata coexist in the same region. Such areas would thus favor diffusion and maintenance of dengue (see also Sabroza et al,10 1992).

The analysis on the incidence rate according to the living condition strata showed that the rates for the last three periods studied were greatest in stratum 2.1 (with the worst conditions of sanitation service infrastructure and high population growth) and in stratum 3.1 (with the highest percentage of shantytowns). Thus, strata 2.1 and 3.1 were highlighted as priority areas for dengue control actions. On the other hand, the districts in stratum 1.2 presented the lowest heterogeneity in relation to the indicators used in this study, the lowest incidence rate and the best sanitation and income indicators, as well as low population growth and a low proportion of shantytowns, thus corroborating Machado et al8 (2007).

The epidemic of serotype 1 in 2001 was concentrated in stratum 2.1. This is an area of intense real estate speculation, with great population growth, increasing land value and homes for the upper middle class population, with income and schooling levels above the average for the municipality.a

The high incidence rates observed in different strata in 2001 were an unexpected finding, given that group immunity to serotype 1 had supposedly been established during the preceding period. It is likely that both the socioenvironmental characteristics of the municipality and the still-low degree of immunity among the population were relevant for the epidemic behavior encountered during this year. For better understanding of how the degree of immunity among the population modulates transmission in urban areas, serological studies specific for each of the types of dengue virus are necessary. However, no such studies have yet been conducted in this region.

During the epidemic of serotype 3 in 2002, one important factor that may have favored the explosion in numbers of cases was the population's susceptibility to this type of recently introduced virus, given that individual or collective immunity is not permanent. The incidence levels increase if a new virus is introduced or if there is a decline in collective immunity to the circulating virus.11 This would explain the magnitude and diffusion of the epidemic in Niterói, with incidence much greater than previously and distribution between the strata that was more homogeneous: with predominance in stratum 2.1, a high coefficient in stratum 3.1 and slightly lower coefficients in strata 1.1 and 3.2. Stratum 2.2 was less affected, despite its characteristics favoring transmission, and this suggests that its vulnerability was lower, possibly explained by relative protection due to the persistence of significant plant coveragea and few inhabitants in terms of internal area. Such characteristics are unfavorable for increases in population density among vectors that have adapted to urban environments, such as Aedes aegypti.

The association between risk of dengue transmission and the socioeconomic and environmental conditions is a question requiring deeper analysis, while taking into consideration the realities in each municipality. It is important to analyze the spatial relationships between dengue transmission and other variables, such as the population's degree of immunity, the effectiveness of the control measures, the degree of infestation by the vector and the population's habits and behavioral patterns, among others.

The different associations found in different studies on dengue occurrence and socioeconomic and environmental conditions may be related to the types of clusters used (census tracts, districts, zones and/or municipalities) and to the type of data used (primary or secondary data). Regarding the type of cluster used, the different results obtained using spatial slices are called problems of the modifiable areal unit. By clustering the epidemiological and demographic data into larger units, the effect of rate instability is reduced. However, this clustering may falsify the information, through construction of large means that cover up internal differences.5

a Prefeitura Municipal de Niterói. Secretaria Municipal de Desenvolvimento, Ciência e Tecnologia. Perfil de uma cidade. Rio de Janeiro; 2000.
Regarding the secondary data obtained from official notification systems, these systems generally record cases for which medical attendance within the public system was sought. The public system is used more by the low-income population, and thus the data from these systems do not include a large proportion of the cases that occur in areas of the city with better living conditions. This may lead to distortions in the knowledge of dengue virus circulation.12

Transformation of geographical space and social dynamics appear to be fundamental factors in dengue production in Niterói. Historical-social processes and transformation of geographical space, among other factors, determine local living conditions. Unplanned urban development, high population growth, intermittent water supplies, irregular garbage collection, intense movement of people and the lack of effectiveness of the control measures are factors that favor maintenance of endemic disease and occurrences of important epidemics in Niterói.

The spatial units usually used in epidemiological studies, such as districts, municipalities and states, result from the form of data aggregation in the information systems. However, neither the environmental nor the social processes that promote or restrict situations of risk to health are limited to these political-administrative boundaries. With regard to ecosystem approaches used in public health studies, there is still a need to develop methodologies that are capable of identifying and acting on social and environmental determinants. By choosing spatial units for data clustering that best highlight the social and environmental processes, processes that occur at scales differing from political-administrative divisions can be better understood.2

Most ecological studies within epidemiology use political-administrative divisions as the unit for analysis and investigation of disease transmission patterns a posteriori. On the other hand, in the present study, it was sought to identify areas of greater transmission of dengue based on clustering in areas constructed a priori, through environmental, socioeconomic and demographic criteria.

According to Silveirab (2005), instability in indicators for disease frequency in territorial units with small populations (census tracts, urban districts, rural localities and even municipalities with fewer than 10,000 inhabitants) has brought problems for statistical analyses on data consolidated at these levels of clustering, thereby leading to the use of Bayesian statistics. Another alternative, which was used in the present study, was to consolidate data into discontinuous strata that are commonly defined by socioeconomic and/or environmental indicators.

Thus, the methodology used was shown to be useful for surveillance and for epidemiological investigations. Identification of disease occurrence patterns, according to the distribution of factors that favor the appearance, distribution and behavior of diseases affecting the population's health, facilitates the planning and development of interventions of greater effectiveness. However, environmental variables and other variables portraying the population's immunological profile should also be used, along with complementary methodological procedures (construction of summarized risk indicators and geoprocessing methodology, among others), thereby enabling deeper analysis.

The present study has certain limitations. Socioeconomic information was only available for the years of the demographic census, and the population estimates were calculated by taking the growth to be geometric, at a constant rate equal to what was observed for the period 1996-2000. Furthermore, the intense mobility of the population, for work, study or leisure purposes, made it difficult to analyze the areas with greatest dengue transmission, since individuals might become infected in neighboring or distant districts. One possibility for dealing with this problem would be to analyze cases occurring among children up to the age of ten years: it is accepted that the level of mobility among this age group is lower. In addition, the results from epidemiological studies using secondary data from disease notifications may be greatly influenced by under- or overestimation of cases caused by diagnostic errors, problems of access to healthcare services and the frequency of asymptomatic infections.12

Through recognizing priority areas in Niterói, the present study has indicated where improvements in surveillance and control actions should be directed, along with structural improvements that would influence the living conditions and health of the municipality's population.

Figure 1. Consolidated strata of districts according to indicators of social and demographic conditions. Municipality of Niterói, Southeastern Brazil, 1998-2006. Source: Brazilian Institute for Geography and Statistics. Brazilian census 2000: preliminary results. Rio de Janeiro; 2000.

Table 2. Intra-stratum indicators of social and demographic conditions. Municipality of Niterói, Southeastern Brazil, 2000. Source: Brazilian Institute for Geography and Statistics. Brazilian census 2000: preliminary results. Rio de Janeiro; 2000. WATER: proportion of private households connected to the water supply network; GARBAGE: proportion of private households with garbage collection; SEWAGE: proportion of private households connected to the sewage system; UPTO2SAL: proportion of heads of private households with income of up to two minimum monthly salaries; APART: proportion of households of apartment type; DENSINTERN: population density in terms of internal area per km²; SHANTY: proportion of households in shantytowns; 8+RES: proportion of private households with more than eight residents; 70+YEARS: proportion of residents over 70 years of age; INTERN+40: percentage of internal area greater than 40 m²; INCR: population increase.

Table 3 (note). a Mean incidence rate. Source: National System for Notifiable Diseases.
Using practical wisdom to facilitate ethical decision-making: a major empirical study of phronesis in the decision narratives of doctors

Medical ethics has recently seen a drive away from multiple prescriptive approaches, where physicians are inundated with guidelines and principles, towards alternative, less deontological perspectives. This represents a clear call for theory building that does not produce more guidelines. Phronesis (practical wisdom) offers an alternative approach for ethical decision-making based on an application of accumulated wisdom gained through previous practice dilemmas and decisions experienced by practitioners. Phronesis, as an 'executive virtue', offers a way to navigate the practice virtues for any given case to reach a final decision on the way forward. However, very limited empirical data exist to support the theory of phronesis-based medical decision-making, and what does exist tends to focus on individual practitioners rather than practice-based communities of physicians. The primary research question was: What does it mean to medical practitioners to make ethically wise decisions for patients and their communities? A three-year ethnographic study explored the practical wisdom of doctors (n = 131) and used their narratives to develop theoretical understanding of the concepts of ethical decision-making. Data collection included narrative interviews and observations with hospital doctors and General Practitioners at all stages in career progression. The analysis draws on neo-Aristotelian, MacIntyrean concepts of practice-based virtue ethics and was supported by an arts-based film production process. We found that individually doctors conveyed many different practice virtues, and those were consolidated into fifteen virtue continua that convey the participants' 'collective practical wisdom', including the phronesis virtue. This study advances the existing theory and practice on phronesis as a decision-making approach due to the availability of these continua. Given the arguments that doctors feel professionally and personally vulnerable in the context of ethical decision-making, the continua in the form of a video series and app-based moral debating resource can support before, during and after decision-making reflection. The potential implications are that these theoretical findings can be used by educators and practitioners as a non-prescriptive alternative to improve ethical decision-making, thereby addressing the call in the literature, and benefit patients and their communities, as well.

This drive has arisen partly because prescriptive approaches do not consider the particularities of any given case and context. Although Gallagher et al. [2] argue that doctors are motivated primarily by deontological (guideline- or rule-based) approaches to decision-making in the context of medical practice, the quantity of such guidelines means that this prescriptive approach has become unmanageable. One estimate suggests that there may be more than 7000 deontological guidelines for clinicians to follow, with many being added every year [3]. Practitioners note that the growing use of ever-closer codification of medical practice is not able to take into account the complexity of caring for patients with multiple comorbidities and within difficult contexts [4]. The abstractness of principle-based approaches raises concerns about their utility in decision-making in complex clinical contexts [5,6]. Further, Torjuul et al.
[7] suggest that meeting the expectations of patients, colleagues and society makes doctors professionally and personally vulnerable. For instance, although doctors' main concern is their patients, in an environment of resource and regulatory constraints, meeting patients' expectations and distributing health care is challenging [7]. We argue that the theory presented in this paper offers a partial 'antidote' that makes the process of ethical decision-making easier, potentially reducing feelings of vulnerability, and can build physician resilience. Our research responds to the call for overcoming the limitations of the dominant deontology-based approach and meets the need for a new approach with a conceptual framework used as a moral debating resource for cultivating phronesis (practical wisdom) in the process of making ethical decisions.

Phronesis is a conceptual approach to ethical decision-making grounded in an accumulated wisdom gained through previous practice dilemmas and decisions. In effect, phronesis is an "executive virtue" that keeps stakeholders central to the decision-making process, allowing ethical choices to be executed in practice [8]. In their account of ethical decision-making, Jonsen et al. [9] report that the fundamental difficulty in all clinical settings is uncertainty, noting that the biological side of the uncertainty problem is severe because it is often very hard to be sure of a diagnosis or be certain that a specific treatment will work. Additional layers of uncertainty further compound this situation. For instance: What do patients prefer? Do they value length or quality of life? Do they prefer treatment (or no treatment) in line with religious or moral beliefs? Moreover, understanding the views of interested parties is essential: Does the patient's family have wishes? Does the hospital or health system have certain priorities of what can and cannot be treated due to financial constraints? Regardless of such preferences, what is the right or wrong thing to do? What if patients, relatives and hospitals prefer courses of treatment (such as antibiotics) that are not in the collective interest of the community to give? When different moral and social problems begin to interact with one another, medical decision-makers may not be certain about what is best for patients and society, and how best to achieve optimal outcomes for both simultaneously. Decision-makers must ensure that what is in the best interest of patients and society is also in line with good clinical practice and has a good chance of working in practice.

Toulmin [10] and MacIntyre [11] align with each other in arguing for a recovery of the virtue of practical judgement, in the form of Aristotle's phronesis, to overcome disagreements and conflicts [12]. In Aristotle's work, phronesis is the intellectual virtue that helps turn one's moral instincts into practical moral action [13] by providing the practical know-how needed to turn virtue into successful action; it enables the phronimos to weigh up the importance of different virtues and competing goals in a given moral situation [14]. While moral virtues enable us to achieve the end, phronesis makes us adopt the right means to that end [13], p. 161. Both moral virtues and phronesis work in tandem. In the absence of the former, phronesis degenerates into a "certain cunning capacity for linking means to any end rather than to those ends that are genuine goods for man" [11], p. 154. Whereas, in the absence of phronesis, we may be lost in the moral maze.
Building on Jonsen and Toulmin [15], a number of authors [16]-[20] have developed theoretical accounts of judgement in medicine based on Aristotle's account of phronesis being the kind of practical wisdom that is needed to promote the good in morally difficult situations. Pellegrino and Thomasma [16] provide the most forceful defence of the virtue-theoretic approach to medical ethics. They write that rule- or principle-based approaches to medical ethics are "too abstract" and "too formularized and far removed from the concrete human particulars of moral choice" [16], p. 19. Alternatively, phronesis, being medicine's "indispensable virtue", plays a crucial role in providing an essential connection between seeing or understanding what is right or good and knowing how to do good [16], p. 84. Because different virtues can pull doctors in different directions (e.g. compassion drives a doctor to give an optimistic assessment to a patient and honesty drives her to tell the truth), phronesis helps put virtues in the "proper order of priority and to make the right and good decision in the most difficult situation" [21], p. 382.

In the 'people professions', the culture of mere compliance to rules and guidelines [14,22] tends to oversimplify the complex clinical situation, making patients single-pathology entities rather than the multifaceted (medically and socially) humans they are, who require a holistic approach. Focussing on the patient as a person is imperative; science alone is not enough to understand the complexity of the case [21,23,24]. The importance of practice virtues is a recurrent theme in Carr et al. [25] and Kristjansson [14], and is further supported by Jonsen and Toulmin [15], Pellegrino and Thomasma [16], Pellegrino [21], Montgomery [18], Toon [19] and Kaldjian [17].

Although there has been a drive to theorise about the importance of empirically-informed ethics [26]-[28], attempts to study clinical judgement as phronesis using empirical data are rare. Notable exceptions are Jordens and Little [29], Little et al. [30], Jones et al. [31] and Conroy et al. [32]. However, all of these studies have been limited by small sample sizes. A clear gap existed for a large empirical study to inform the development of a virtue-based phronetic approach to medical and public health decision-making.

We use phronesis as a theoretical frame to analyse the narratives and the observations on medical decisions that our participants conveyed both as wise and not so wise for patients and their community. Being within the neo-Aristotelian practice virtue ethics theoretical framework [11], the concept of phronesis includes the telos of wellbeing for all in society. We acknowledge other alternatives to deontological reasoning, such as casuistry, narrative ethics and discourse ethics [e.g. 33]. The choice to focus on a practice virtue ethics approach came from a combination of the gap and emphasis in the literature, as outlined above, and directly from feedback from policy makers, academics and practitioners at the first workshop for this study [8].

The first step in practice virtue ethics is to understand the virtues that are considered by communities of practitioners, in this case the medical practice, to be important for their practice. Real cases communicated through stories or narratives, according to Montgomery [18], offer the best approach to eliciting practice virtues. The ontology of narrative conveying and transmitting practice virtues is supported by MacIntyre [11].
Therefore, to fill the gap in understanding outlined above, we started our research with the following questions:

1. What does it mean to the medical community to make ethically wise decisions?
2. What virtues do the medical community convey as important for decision-making?
3. What approach do medical practitioners use to navigate the virtues they consider important when making decisions?

These questions formed the basis of a three-year study (2015-2018), 'Phronesis and the Medical Community' (PMC), which used a narrative and arts-based methodology to understand what it means to a community of UK medical practitioners to make ethically wise decisions for patients and their communities. Narratives from all medical career stages were collected. Following Kaldjian [17] and MacIntyre [11], we critically examined the practice narratives with particular attention to the virtues conveyed. This paper describes the methodology, summarises our findings, discusses their contribution to ethically wise decision-making and suggests implications for medical education, policy and research.

Methods

Our research methods are based upon three main sources. From the humanities, we draw on the practice virtues philosophy of MacIntyre [11] and his argument that practitioner narratives convey the virtues of the practice of interest: narratives "humanize" situations and convey "what matters to us" [34,35]. From the social sciences, we use Flyvbjerg et al.'s [36] phronesis-based ethnography to contextualize the collected stories. From the arts, we draw on a participatory video production approach [37], where the research team and some of the participants are involved in iteratively producing a film series and app that convey the collective set of virtues derived from the narratives [38].

Our data sources were in-depth, semi-structured interviews and direct observation of medical decision-making at multi-disciplinary team (MDT) meetings. This approach allowed us to collect narratives and consider contexts, which indicated what participants considered to be ethically wise decisions and those which they considered to be unwise for their patients and communities. Our methods and data sources are appropriate to answering our three research questions and build on those used in other studies with an interest in understanding practice virtues [e.g. 39]. We draw on the language used by the participants when they refer to their practice and then convert that into a practice virtue. For example, practitioners may refer to 'negotiates', and that is interpreted as an action referring to practitioners negotiating with patients; the equivalent practice virtue would be 'negotiation'.

Data collection

Three communities of doctors were interviewed from mid-2015 to early 2017, with interviews taking place during three time periods. Eligible doctors and medical students were identified with the help of the academic and administration coordinators at the three participating medical schools (of the University of Birmingham, University of Warwick and University of Nottingham) and their local NHS Trust hospitals. Thereafter, using a snowball sampling technique, more doctors were approached and invited to participate via emails. All communication between interested potential participants and researchers was treated as confidential. Emails did not reveal who else was contacted. All medical students, FY1 and FY2 doctors and experienced doctors who were interviewed were asked if they would consent to observations.
Observations were carried out from August 2017 to November 2017. Of the 48-strong FY (1 and 2) cohort, 13 were interviewed twice: first when they were in FY1, and again when they moved to FY2, to understand to what extent their phronesis develops as they move from FY1 to FY2. Non-participatory observations were conducted when four experienced practitioners were working with multidisciplinary teams or peer groups to make decisions for their patients.

Participants were sent participant information sheets. Those who consented were invited for an interview. Semi-structured interviews (SSIs) were conducted that developed from story-gathering methods [39]. The interviewer started by explaining that we were interested in exploring: (i) the participant's experience of involvement in making ethical/wise decisions; (ii) whether their own or those of others they work with; and (iii) whether they perceived them to be good/wise or not so good/unwise decisions. Although an aide memoire was prepared, to be used as prompts only, participants started their accounts wherever and however they wanted. If they did not respond to this open start, the interviewer might prompt with 'It seems difficult for you to start on this subject, would you like to start by talking about the difficulty?' They were then asked about the instances or stories that they subsequently alluded to, and so the interview built on participants' responses [8]. Most interviews were conducted face to face: at the medical schools for the medical students, and, for the practicing doctors, at the hospitals where they worked, after they had completed their clinical duties; a few interviews were conducted via telephone. The interviews, which lasted between 40 and 60 min, were audio-recorded and transcribed verbatim.

Four FY participants and two experienced doctors from across the three sites provided a diary focusing on experiences of meaningfulness and puzzlement that they indicated represented practical wisdom (wise or unwise decisions). The narratives were used to generate discussion and, along with the observations, provided the basis for storyboards and scripts that led to the production of a video series. Prior to video production, the scripts were fed back to some of the doctors (two consultants, two GPs and an FY doctor) who attended a steering group meeting, to ensure they were a fair and balanced representation of the kind of stories that circulate in their practice environment. Subsequently, the alpha version of the videos produced was reviewed by nineteen medical practitioners (consultants and GPs), medical ethics tutors and CPD providers at a workshop held in March 2018. The feedback from workshop participants was collated and sent to the film production team for an update to a beta version of the video series [38].

Data analysis

Interview data were analysed thematically by reading and rereading the transcripts to make the transition from literal meaning to practice virtue themes. Data were organized and coded using NVivo 11 Plus. Coding and analysis were conducted simultaneously. The research team met regularly, initially to discuss coding strategies and categories and then to check inter-coder reliability and to consolidate virtue themes. The nature of the practice virtues, including phronesis, was examined through two theoretical lenses: MacIntyre's [11] practice-based virtue ethics and Kaldjian's [40] medical phronesis. Findings from the latter are presented elsewhere [41].
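Inter-coder reliability checks of the kind mentioned above are commonly summarized with Cohen's kappa, which corrects the raw agreement between two coders for agreement expected by chance. The short illustration below is hypothetical: the virtue labels assigned by the two coders are invented, and the study itself reports consolidation through team discussion rather than any particular statistic:

from sklearn.metrics import cohen_kappa_score

# Hypothetical virtue codes assigned by two coders to the same
# ten narrative excerpts (labels invented for illustration).
coder_a = ["courage", "honesty", "courage", "phronesis", "honesty",
           "collaboration", "courage", "phronesis", "honesty", "courage"]
coder_b = ["courage", "honesty", "honesty", "phronesis", "honesty",
           "collaboration", "courage", "phronesis", "courage", "courage"]

# Values above roughly 0.6 are conventionally read as substantial agreement.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")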
After four iterations, a set of 15 virtues was agreed. These 15 virtues were put into virtue continua (Table 2). This consolidated set of virtues, along with the narratives and observations, was used as the basis of the video series. The combined narrative and film production methodology enabled MacIntyre's [11] practice virtue ethics to emerge from our data as the 'collective practical wisdom': a non-prescriptive moral debating resource that reflects what it means to participants to make ethical decisions that contribute to the good of patients and their communities. The close fit between our data and practice virtue ethics concepts in video and app format, including phronesis, means they offer a new and accessible resource for doctors at all stages of their professional careers. Practitioners may find the resource aids ethical reflection about their decision-making, serves as a guide to ethical action, and provides a foundation for resolving discrepancies between virtues.

Results

Participants conveyed many different practice virtues, but no one participant conveyed all 15 virtues. The more experienced participants conveyed more of the virtues than the less experienced participants. This finding connects to the neo-Aristotelian framework of practice virtues constructed by MacIntyre [11] and advocated by Carr [42]. MacIntyre suggests determining the practice virtues across a peer group of practitioners rather than from individual moral characters. This fits with the notion of diversity in practitioner contributions bringing a more robust set of virtues to the practice [25,42]. The consolidated 15 virtues we present here represent a "starter set" of virtues for medical practice that are open for debate and challenge from others. As a non-prescriptive debating resource, this combined phronesis offers a powerful way for those in medical education and practice to debate forms of decision-making that serve the best interest of patients and their communities. The arts-based element of the analysis was used to produce a seven-part video series, which is an enacted form of our participants' 'collective practical wisdom'. The videos offer a highly accessible form of moral debating resource for reflection before, during and after medical decision-making (for an example, see episode 3 at https://www.youtube.com/watch?v=Azkxeddnlpg).

Virtue continua

We present our findings as "virtue continua" (Table 2) before presenting them in text form. The virtue continua show the virtues conveyed by our interviewees in their narratives. Each virtue extends from pole to pole via a mean. Here we present four examples from Table 2 to demonstrate how the data supported the analysis; Additional file 1 shows all 15 virtues gleaned from the narratives. Experienced doctors spoke about the importance of dialogue and how exchanging information resolves conflicts and enables patients to make an informed choice: [A] constructive conversation both ways. I've got something to say but let's not jump to a decision now, because that would be wrong. (BX02). However, decisiveness was respected both by doctors-in-training and by some patients they encountered.
It was reported that some patients implicitly sought paternalistic guidance, as they may find decision-making burdensome, relying on the doctor's expertise and knowledge to guide them. Some participants said that persuading patients in their best interest is sometimes necessary because: [A] patient doesn't understand the severity of the decision they're making, and perhaps only when they've seen people who don't have the procedure done or don't have an operation might they learn… the actual nature of the decision they're making, because we see it, whereas they don't. (WX02). Sometimes doctors, led by patient autonomy, assume the role of information providers, enabling patients' decisions to be implemented: But, for me, a good decision is one where the patient is the one who essentially makes the decision, or puts forward their wishes, and we then, as the clinicians, allow that decision to come to fruition. (B107).

Collaboration (being collaborative/seeking guidance) (V.5)

Many participants narrated stories conveying that, in the present-day clinical paradigm, professionally isolated decision-making is often neither advisable nor possible. Seeking to involve all those entrusted with a particular patient's care allowed holistic, tailored decisions. Counsel from multiple parties and from professional guidelines was reported to be valuable. This was corroborated by our observations of different MDT meetings. When making decisions for complex cases, team members frequently found that the progressive decisions reached and displayed on the whiteboards were useful, as "they help prioritise and review decisions" (Obs. 1). Guidelines, though useful, require contextual awareness that can be provided by those who know the patient well, such as: [T]he nursing staff who cared for the patient throughout, I relied on hugely …and even the night sister … just made it more logical, and decision-making more logical. I do rely on my consultants for the ultimate decision quite a lot of the time. (BX01). In observation 2, the roles of the occupational therapist (OT), physiotherapist (PT) and speech therapist were seen to be central to certain patients' treatment because they had the most up-to-date information. The registrars and consultant relied on the OT and PT to provide almost the whole information summary. These collaborative discussions become critically important when making "deprivation of liberty" decisions. This observation made it clear that: The lead consultant would ask questions and appeared to be kind of taking it all in, cross-referencing information he got with his records on his computer. More often than not, he would defer to the decisions of the PT and OT… The nurse had a lot of say as well about how patients were progressing towards their goals. (Obs. 2). This approach was not universal, though. At another MDT observation (Obs. 3), the discussions were mostly contained amongst the doctors, with barely any input from other staff. Most medical students were of the view that it is far better for "not-so-experienced doctors" to defer to people with more experience: [Y]ou know, bigger decisions, you're not going to want to take that onto yourself, you're going to defer to people that have got the experience. (W203). Several newly trained doctors conveyed that they appreciated the opportunities to seek guidance from, and feel reassured by, more experienced doctors.
This was observed (Obs. 3) in an Emergency Department environment, when the junior doctor requested that the consultant discuss "an older patient with complex health and social problems", because: …they've probably made that [decision] before and they can tell you with experience the outcome and why. And they might come up with ideas as to why your idea might not be the best for that patient. (W101). Experienced doctors also conveyed that they seek guidance in challenging cases, as it helps them clarify the situation and make better decisions: …sometimes that consensus is really useful because you're basically going through the arguments … and again clarifying some of the aspects of it, I think. (BX11). Some participants referred to guidelines being interpreted contextually; this could result in referring to more experienced doctors to gain insights into wider interpretations of the guidelines.

Cultural competence (V.6)

Some stories conveyed that respecting patients' values and beliefs is important, and many of the participants said that they consult their colleagues to understand cultural issues. However, some participants narrated experiences where the doctor chose to follow their own beliefs and values rather than the patient's. One doctor experienced a situation where a doctor refused an intervention that challenged their personal beliefs, leading to treatment delay. They said: [I]t is important to park your own values. You should not allow those values to affect the decision. (BX04). Similarly, reflecting in their diary, an FY doctor wrote: However, one point I am completely certain of is the importance of professionalism. I feel one should never impose their views on to a patient (W103). A 5th-year medical student told of a consultation in a sexual health clinic, where the doctor seemed judgemental towards a patient: [H]e said something like, 'Are you gay or straight?' or something. Just, like, which is incorrectly phrased? There's far more, like, tactful ways to do it. But he, kind of, shouted at them, so, 'Are you gay?' kind of thing. (B501). Cross-cultural sensitivity was stated to be important in building trust; rehabilitation, for example, was conveyed by one experienced doctor as following a "white Anglo-Saxon" model.

Emotional intelligence (including interpersonal communication) (V.7)

Good interpersonal communication was conveyed as commendable by our participants, because: [Y]ou can be the greatest doctor in the world but if you can't communicate, nobody will do what you say, will they? (BX103). However, some conveyed that having the clinical knowledge regarding the disease is also essential.

Phronesis (V.15)

The development of practical wisdom was conveyed by most interviewees as sequentially experiential. One medical student termed it "learned experience" (N203), while an FY doctor spoke of it as a "mix of nurture and nature" (B104, follow-up). For one experienced doctor: …some people are inherently wiser, they are really wise people…now, whether that wisdom is inherent or … is simply because that person has walked past that journey ahead of me. (NX05). Experience can, however, lead to an assumption about personal knowledge. Another FY doctor recounted how a consultant seemed to show a lack of compassion, and so: "… experience makes you better at making clinical decisions… but not necessarily in terms of ethical decisions… a lot of people get stuck in their ways." (B504).
Assuming that they "know it all" and following a textbook approach can, according to another participant, cause a doctor to be caught out: You can't make a decision based on what the textbooks say… because if the textbooks say it, you can only say that that's right 99% of the time. There will always be the one case that will catch you out if you treat everybody the same… there's things that are really rare, but they still happen. (WX02). An experienced doctor reported another risk that arises with experience and seniority as being "arrogant or foolhardy" (BX04). Similarly, another experienced doctor reported on a senior consultant who regularly over-ruled others based on experience rather than book knowledge: Because evidence-based medicine tells you something else, but the experience of this doctor was something different, so there is, kind of, a clash between the two, rather than both going forward in a symbiotic relationship…. Which is why I'm wary of saying that wisdom is the most important thing. (NX04). Phronesis was variously described as the collation of holistic information, both clinical and social, from different sources, as well as being able to weigh that up against protocols, guidelines and various situations encountered in the past, and then getting other "opinions, other approval, putting the situation to a new pair of eyes, and saying okay this is what I have got here" (B106). But for some medical students, phronesis seemed to be narrowly defined as diagnostic skill (i.e. techne), as opposed to the broader process of holistic decision-making described by FY and experienced doctors: You know you learn by example, by following what someone else is doing… the art and the science of medicine… you need the clinical knowledge and then you need the experience to know how to apply it and when to apply it. (W207). Another medical student spoke of consultants with a "repertoire of patterns" (W209). However, experience and "time served" were not enough to guarantee wise decision-making, and certain other virtues were seen as key to phronetic decisions: for instance, being reflective, "open to insight" (WX04) and being consultative, "always questioning what the right thing to do is" (B110). There were those who considered it intuition, "a sixth sense" (NX02). Phronetic decisions were often seen as the avoidance of the rigid application of rules and guidelines, what one experienced doctor termed the "protocolisation of medicine" (WX03). In medicine, "it is this difficulty of managing an illness rather than treatment of an illness which is the more difficult bit and there are never going to be mathematically accurate answers" (WX06).

Discussion

The virtue continua in Table 2 show fifteen virtues, including phronesis, and the spectrum of actions for each, from deficiency to excess via a mean. This table represents the consolidated 'collective practical wisdom' conveyed in the decision narratives of the study participants. These fifteen virtues were conveyed in small sets or individually in the narratives collected from medical practitioners. Thus, they combine diversity of thinking and experience across the medical community interviewed and observed. These practice virtues represent a collective account of what is required to come to wise decisions that are best for patients and their communities. However, we are not claiming that they represent a final list of virtues that all practitioners should follow.
Instead, they are formulated as a response to MacIntyre's [43] identification of a vital but missing component in professional education (including that of medics and other health professionals): moral debating resources. The shared 15 virtue continua act as exactly that, a stimulus for reflection and moral debate. We therefore argue that it is now possible to cultivate phronesis through that reflection and debate, to support the process of arriving at decisions that are right for the range of cases and contexts that practitioners face. These findings provide empirical evidence to support the theory that good practice emerges from agreeing the virtues across a group of people who conduct that practice [20,44,45]. Pellegrino and Thomasma write about the virtues of compassion, prudence, justice, trust, fortitude, temperance, integrity, respect and benevolence [16]. Recounting her encounters with empathetic doctors, Kristiansen narrates how the humanity of doctors helped her make sense of the suffering and loss she experienced, long after the clinical encounter had ended [46]. The flexibility of these new virtue continua allows practitioners with a preference for such virtues to include them or merge them with one of the fifteen presented in Table 2. Interestingly, something similar was reported by some of the research participants as part of the 'collaboration' virtue (V.5 in Table 2): collaboration needs to exist not just with other practitioners but with any other guidance from professional bodies or other sources, such as the academic literature on the topic. This may lead to apparent disjunctions between the fifteen virtues put forward here and those from other sources. However, since these new continua are not prescriptive, i.e. not 'this is how it should be done', but a stimulus for moral debate, such disjunctions mean the resource is fulfilling its role as a moral debating resource. It is then up to the community of practitioners to decide which virtues and continua are relevant to the case, or cases, at hand. Thus, this is not a case of the empirical trumping the theoretical, or any other guideline for that matter, but simply a way of cultivating practical wisdom within and across practices through the moral debate that ensues. The flexibility in the continua allows the two to be integrated, which is the role they are designed to fulfil, ensuring that the diversity in the community is fully synthesised. This also applies to community-based moral reasoning, which may on occasion generate tension with individual reasoning. Participants consider that ethically wise decisions are guided by their medical knowledge (techne) and virtues, including the ability to understand patients' values [47]. Participants conveyed that negotiating treatment plans was important, although some reported that relational interdependence, grounded in relational ethics, played an important role in making practically wise decisions [24,48]. Participants also suggested it is helpful to collaborate with those who know the context and can advise whether the decisions made are applicable in the real world [1]. They indicated that seeking guidance on decisions, especially in complex situations, from colleagues (senior or peers) who have experiential knowledge provides reassurance. This notion is corroborated by the nursing profession [49,50]. Resource constraints (time and finances) were reported as affecting communication and the decisions made, and "provide the conditions in which unsafe acts occur" [51].
Worries about litigation ("covering myself") also had a tangible effect on decisions [52]. Resolving that type of tension through moral debate could be facilitated by the resources.

Implications for practice

The findings answer a call in the literature for a moral debating resource to support a practice virtue ethics/phronesis-based approach to ethical decision-making in medical practice. This is despite the detractors of such an approach, who argue that it is an unreachable ideal and that phronesis is conceptualised in different forms [e.g. 14]. The form of conceptualisation used here is the practice virtue ethics version advocated by MacIntyre [11] and supported by others [e.g. 17-19] in medically contextualised studies. Furthermore, given the argument that the role of doctors is changing from being the sole guardians of medical knowledge [53,54] to being facilitators of practically wise decisions, weight is given to this approach being conceptually and practically relevant right now. We argue these fifteen virtues represent an intra-medical practice collective for the doctors interviewed and observed (n = 131). One practitioner possessing all these virtues in their character is an unrealistic ideal, or as Curzer states: "one person can have some but not all the virtues" [55, p. 70]. Therefore, rather than them being a set of individual character virtues in the original Aristotelian tradition, we argue that the table provides their collective ethos and so aligns with the neo-Aristotelian concept of practice virtue ethics. In this collective form, and when conveyed in the highly accessible video series and online app, the 15 virtues provide a form of practical wisdom that can be used in both professional education and practice for moral debate related to ethical decision-making [56]. This supports the notion that such an approach leads towards professional fulfilment and practice excellence [11]. The flexibility of the continua allows phronesis to be cultivated within a practice and across related practices in the wider community, which according to MacIntyre [11] is also part of the schema for establishing virtuous practices and a telos (purpose) of well-being for all in the community. A virtuous act "hits the target" by deriving an understanding of the situation and acknowledging all its pertinent features [22, p. 11], and in so doing requires moral judgement to discern how to act wisely. Our findings mean practice virtue ethics and phronesis can be used to complement other ethical approaches and clinical knowledge, leading to treatment plans/decisions that consider the particularities of each case. This is integral to reaching a diagnosis and proposing a plan of care that gives primacy to the best interest of the patient and the community. Our findings emphasize that even when virtues are recognised for a particular practice (e.g. negotiation, reflection, cultural competence, collaboration, recognising limits to treatment), knowing where to act on the continuum requires discernment, which is provided by the intellectual virtue of phronesis [11]. This framework offers practitioners a helpful stimulus for making moral reasoning and relevance work simultaneously [57]. On this view, we agree with others about introducing practical wisdom during the "formative development of medical students' ethical reasoning" [58; 59, p. 241], which also enriches teaching methods for ethics [60].
Cribb argues that delivering independent, rigorous (moral) thinking that is relevant to the situation at hand, without sacrificing rigour for relevance, is a challenge for translational ethics [57]. By consolidating the 15 virtue continua as 'collective practical wisdom' and conveying them in an enacted seven-part video series, we created a contemporary moral debating resource that responds well to the challenge of translational ethics identified by Cribb [57] and is applicable in varied contexts. As stated earlier, rather than a deontological prescription as the definitive 'this is how it should be done', the series provides a stimulus for reflection before, during and after medical decision-making. This is achieved by using the videos in undergraduate and/or postgraduate (UG/PG) medical education or continuous professional development (CPD) programmes to support the cultivation and development of practical wisdom in groups of practitioners. For instance, in a CPD programme, practitioners could view the series and then debate in groups the virtues they observe and how these relate to the current cases and dilemmas they are experiencing. The flexibility and adaptability of the resources mean that practitioners can add a virtue continuum of relevance to the situation. Thus, practitioners can add and remove virtues, move along the continua and integrate them with the particularities of the decision-making process for the individual case.

Limitations

Our study does not claim to have captured all the virtues required for good and wise clinical decision-making, or to be offering yet another deontological guide to be followed by medical doctors. However, it does offer a contemporary moral debating resource for medical educators and practitioners, in their peer groups, to decide on the relevant virtues for their context and the case under consideration. This study focussed on one healthcare profession: doctors. Further research is therefore needed with inter-professional/disciplinary teams and integrated care groups to understand what it means for them, individually and as a collective, to make ethically wise and good decisions for patients and their communities. This is especially relevant to patient groups where a wide range of different professions must come together to make the right decision at the right time and in the right place, e.g. older people with frailty and people suffering from mental health issues. To this end, the research team have already initiated new proposals that address this call and the gap in current understanding of ethical healthcare decision-making.

Conclusions

Phronesis and practice virtues are interdependent. Phronesis helps adapt the practice virtues that enable doctors to make the right decisions for this patient and the wider community. We offer here a starting point of "collective practical wisdom" from a group of medics at all stages of their careers. In this regard, the moral debating resource described is a credible tool for introducing and cultivating the concept of phronesis. As well as complementing the knowledge possessed by qualified professionals, it means medical trainees do not have to wait until they are older to be purveyors of wisdom or wise decisions; instead, they can start to learn some phronesis from the wider medical community at the start of their careers. The videos, and accompanying resources, can be used both as an in-action and a post-action debriefing tool.
Future practice, research and policy on medical decision-making should benefit from applying this non-prescriptive approach to addressing the health and well-being of patients and wider society. The tool and the virtue continua support the General Medical Council's (GMC) 'Generic Professional Capabilities Framework' [61] and, from a policy perspective, the GMC's 'Outcomes for Graduates', which lays down the professional and ethical responsibilities of its doctors [62, pp. 9-10]. There is an understanding that "the aim of medical education is to develop doctors who are reflective, empathetic, trustworthy, committed to patient welfare and able to deal with complexity and uncertainty" [60, p. 431]. The 15 virtues in Table 2, interpreted from the narratives of our participants, are the in-situ virtues that these UK practitioners conveyed as important to their practice. In medical ethics undergraduate, postgraduate and CPD programmes, the resources described here can be used to examine and debate ethical dilemmas and challenges faced by actors in low-, medium- or high-resource healthcare contexts, and are therefore of international relevance.
Microchip electrophoresis with laser-induced fluorescence detection for the determination of the ratio of nitric oxide to superoxide production in macrophages during inflammation

It is well known that excessive production of reactive oxygen and nitrogen species is linked to the development of oxidative stress-driven disorders. In particular, nitric oxide (NO) and superoxide (O2•−) play critical roles in many physiological and pathological processes. This article reports the use of 4-amino-5-methylamino-2′,7′-difluorofluorescein diacetate and MitoSOX Red in conjunction with microchip electrophoresis and laser-induced fluorescence detection for the simultaneous detection of NO and O2•− in RAW 264.7 macrophage cell lysates following different stimulation procedures. Cell stimulations were performed in the presence and absence of cytosolic (diethyldithiocarbamate) and mitochondrial (2-methoxyestradiol) superoxide dismutase (SOD) inhibitors. The NO/O2•− ratios in macrophage cell lysates under physiological and proinflammatory conditions were determined. The NO/O2•− ratios were 0.60 ± 0.07 for unstimulated cells pretreated with SOD inhibitors, 1.08 ± 0.06 for unstimulated cells in the absence of SOD inhibitors, and 3.14 ± 0.13 for stimulated cells. The effect of carnosine (antioxidant) or Ca2+ (intracellular messenger) on the NO/O2•− ratio was also investigated.

Graphical abstract: Simultaneous detection of nitric oxide and superoxide in macrophage cell lysates.

Electronic supplementary material: The online version of this article (doi:10.1007/s00216-017-0401-z) contains supplementary material, which is available to authorized users.

Introduction

Reactive nitrogen and oxygen species (RNOS) play a variety of roles in biological systems [1,2]. In mammals, specialized enzymes, such as NADPH oxidase and nitric oxide synthase (NOS), are involved in the production of RNOS as part of the immune system. Excessive production of these reactive species can lead to oxidative and nitrosative stress, resulting in damage to important biomolecules [3]. Moreover, this damage has been linked to neurodegenerative diseases, cardiovascular disorders, and cancer [4-6]. Nitric oxide (NO) is a water-soluble, free-radical gas that acts as an intracellular and intercellular messenger in all vertebrates [7]. NO is able to modulate cytokine and chemokine release [8] during the immune response [9], and plays important roles in both the cardiovascular system [10] and the nervous system [11]. NO has a half-life of 3-6 s in vivo and is synthesized by a complex family of NOS enzymes through the conversion of L-arginine to L-citrulline [7]. The superoxide anion (O2•−) is a reactive oxygen species naturally produced in the human body when oxygen (O2) gains an excess electron during various enzymatic reactions in mitochondria; it is involved in many physiological and pathological signaling processes [12]. Overproduction of O2•− can lead to cell death due to oxidative damage to DNA, lipids, carbohydrates, and proteins [13]. In living organisms, the intracellular enzyme superoxide dismutase (SOD) protects the cell from the deleterious effects of O2•− by catalyzing the conversion of O2•− to O2 and hydrogen peroxide [14]. Simultaneous production of intracellular NO and O2•− can lead to the formation of peroxynitrite [15].
This dangerous molecule has the ability to nitrate, nitrosylate, and oxidize proteins, DNA, and lipids, inhibiting their functions and causing cytotoxicity within the cell [16]. Additionally, peroxynitrite has been linked to neurodegenerative disorders, cardiovascular disease, and cancer [7]. Therefore, the simultaneous detection of NO and O2•− is necessary to obtain a thorough understanding of intracellular nitrosative and oxidative stress. Macrophages are cells involved in the primary immune defense mechanism that, when activated in vivo under proinflammatory conditions, can show higher expression of inducible NOS (iNOS) coupled with the production of a large amount of NO and, thus, RNOS [17,18]. It is well known that a combination of lipopolysaccharides (LPS) and interferon gamma (IFN-γ) stimulates macrophages to produce a large amount of NO via iNOS [19]. In addition, high amounts of intracellular O2•− can be generated by incubation of macrophages with phorbol 12-myristate 13-acetate (PMA) [20]. A method for the simultaneous detection of O2•− and hydrogen peroxide in stimulated macrophages using microchip electrophoresis (ME) with laser-induced fluorescence (LIF) detection was reported by Li et al. [21]. Our group has also used ME-LIF for the determination of the intracellular production of NO in lymphocytes [22], as well as for the detection of O2•− in macrophages [23]. ME has several advantages over conventional methods for the analysis of cultured cells, especially for the detection of RNOS. These advantages include short separation times, isolation of the intended product from interfering substances, high throughput, and the ability to easily integrate multiple detection platforms. Additionally, ME systems are ideal for single-cell analysis because they can be automated and permit on-chip cell manipulation and lysis [24-27]. In the present work, ME-LIF was used for the simultaneous detection of intracellular NO and O2•− in macrophage cell lysates.

Cell culture and preparation

RAW 264.7 macrophages were cultured in DMEM containing 10% (v/v) fetal bovine serum, L-glutamine (2 mM), penicillin (50 IU/mL), and streptomycin (0.3 mg/mL). The cells were cultured in 25-cm² polystyrene culture flasks at a density of 5 × 10⁶ cells per flask, maintained in a humidified environment at 37 °C, 5% CO2, and 95% air, and passaged every 2 to 3 days depending on cell confluence to avoid overgrowth.

Stimulation protocol for the simultaneous detection of NO and O2•−

The protocol used for cell sample preparation is shown in Fig. 1. On the day of the experiment, cells were harvested with a cell scraper, counted with a C-Chip disposable hemocytometer, and plated at a density of 1.2 × 10⁷ cells per flask. Stock solutions of LPS (1 mg/mL) and IFN-γ (200,000 U/mL) were prepared in 10 mM PBS and in 10 mM PBS with 0.1% BSA, respectively. Once the cells had adhered to the flask surface, they were stimulated to increase the production of NO by diluting the LPS stock solution to 100 ng/mL and the IFN-γ stock solution to 600 U/mL in 5 mL of cell culture medium. Immediately after the stimulation, the macrophages were incubated for 20 h in a humidified environment at 37 °C, 5% CO2, and 95% air. A stock solution of 5 mM DAF-FM DA was prepared in 99% sterile DMSO. After the 20-h incubation with LPS plus IFN-γ, the medium was replaced with 5 mL of DMEM free of phenol red containing 10 μL of the DAF-FM DA stock for 60 min (10 μM DAF-FM DA final concentration) [22].
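The working concentrations quoted above all follow the standard dilution relation C1·V1 = C2·V2. The following minimal Python sketch is ours, not part of the published method, and serves only as a sanity check; the stock and final concentrations and the 5 mL working volume are taken from the protocol above.

def stock_volume_needed(stock_conc, final_conc, final_volume):
    """Volume of stock to add so that C_stock * V_stock = C_final * V_final.
    Both concentrations must share units; the result is in the same unit
    as final_volume."""
    return final_conc * final_volume / stock_conc

v_final_ml = 5.0  # working volume of culture medium per flask (mL)

# LPS: 1 mg/mL stock (= 1e6 ng/mL) -> 100 ng/mL final
print(stock_volume_needed(1e6, 100, v_final_ml))      # 0.0005 mL = 0.5 uL

# IFN-gamma: 200,000 U/mL stock -> 600 U/mL final
print(stock_volume_needed(200_000, 600, v_final_ml))  # 0.015 mL = 15 uL

# DAF-FM DA: 5 mM stock (= 5000 uM) -> 10 uM final
print(stock_volume_needed(5000, 10, v_final_ml))      # 0.01 mL = 10 uL

The last result reproduces the 10 μL of DAF-FM DA stock per 5 mL of medium stated above.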
A 5 mM MitoSOX Red stock solution was prepared in 99% sterile DMSO [23]. Additionally, a stock solution of 100 mM DDC was prepared in 10 mM PBS, and stock solutions of 16.5 mM 2-ME and PMA (1 mg/mL) were prepared in DMSO. Each flask of cells containing 5 mL of culture medium was then incubated for 1 h with a combination of the cytosolic SOD inhibitor DDC (1 mM final concentration), the mitochondrial SOD inhibitor 2-ME (50 μM final concentration), and MitoSOX Red (10 μM final concentration). Finally, the cells were stimulated with PMA (1 μg/mL final concentration) for 30 min. During the incubation of the macrophages with DAF-FM DA and MitoSOX Red, the flasks were covered with aluminum foil to minimize photobleaching of the dyes. Native untreated cells from the same population were used as a control. These were incubated under the same conditions as the cells described earlier, except that no stimulant was added. Another set of untreated cells was incubated in the presence of both SOD inhibitors (to increase the detectable concentration of O2•−). At the end of the stimulation process, the cells were harvested, and 100 μL of the cell suspension was removed for cell counting. The suspension was then centrifuged at 1137 × g for 4 min. The supernatant was removed, and the cell pellet was washed twice with 1 mL of cold 10 mM PBS at pH 7.4. Cells were lysed with 50 μL of pure ethanol. The lysate solution was filtered with a 3 kDa molecular weight cutoff filter by centrifugation at 18,690 × g for 10 min. Then 10 μL of the filtered cell lysate was added to 90 μL of a solution of 10 mM boric acid and 7.5 mM SDS at pH 9.2 (10% ethanol final concentration) and immediately analyzed with the microfluidic device.

Cell density and viability

Cell density and viability were measured with a C-Chip disposable hemocytometer and the Trypan blue exclusion assay, respectively. The cell suspension was diluted either 1:3 or 1:5 (stimulated and untreated, respectively) with 0.4% Trypan blue solution.

Microchip fabrication and instrumental setup

The fabrication of hybrid PDMS-glass microfluidic devices has been described previously [22]. Briefly, a silicon master containing the design of the microchip was fabricated with SU-8 photoresist and soft lithography. A 10:1 (w/w) PDMS prepolymer to curing agent mixture was degassed in a vacuum desiccator and poured onto the master. The PDMS was cured overnight in an oven at 70 °C. The cured PDMS was then peeled off the master, and 3-mm reservoirs were punched in the substrate with a biopsy punch (Harris Uni-Core, Ted Pella, Redding, CA, USA). To make the final microfluidic device, the PDMS substrate was reversibly bonded to a borosilicate glass substrate (Precision Glass and Optics, Santa Ana, CA, USA). For these experiments, a microchip with a simple-T design was used, with a 5 cm separation channel, 0.75 cm side arms, and 40 μm by 15 μm channels throughout. Before operation, the microchip was conditioned with 0.1 M sodium hydroxide and run buffer. The run buffer consisted of 10 mM boric acid and 7.5 mM SDS at pH 9.2. A separation field was generated with a high-voltage power supply (UltraVolt, Ronkonkoma, NY, USA). A 1 s gated injection was used for sample introduction. A gate was established by application of +2400 V and +2200 V to the buffer and sample reservoirs, respectively. For LIF detection, the microchip was placed on the stage of an Eclipse Ti-U inverted microscope (Nikon Instruments, Melville, NY, USA).
A 488-nm diode laser (Spectra-Physics, Irvine, CA, USA), aligned 3.75 cm below the sample-buffer intersection, served as the excitation source. To ensure that the ratio accurately depicts the relative concentration of NO to O2•− in the cell, the response factors for the products of the two probes were determined as described previously [22,23]. The ratio of the slope of the response curve for the NO-specific product (DAF-FM T) to that for the O2•−-specific product (2-OH-MitoE+) was determined to be 1.2. All NO/O2•− ratios were corrected for the difference in sensitivity (quantum yield) between the two products. This provided a more accurate assessment of the relative amounts produced (more details can be found in Fig. S1).

Optimization of stimulation protocol and electrophoretic separation

Before the NO/O2•− ratio was monitored, it was crucial to ensure that the method would generate separable, measurable, and reproducible signals for both NO and O2•− in complex matrices such as RAW 264.7 macrophage cell lysates. Several factors needed to be considered because of their possible influence on the ability to detect both analytes by ME-LIF. Initially, the cell stimulation protocol was optimized for the generation of NO and O2•−. In the first studies, cells were stimulated for 20 h with LPS plus IFN-γ, followed by the addition of PMA (500 ng/mL final concentration) and incubation for an additional 60 min. However, this protocol was not optimal in terms of cell viability and NO and O2•− production. The PMA stimulation time was then decreased from 60 to 30 min, and the concentration was doubled to 1 μg/mL. The MitoSOX Red probe was incubated with the cells for 1 h before stimulation with PMA. This new protocol was found to be the best for cell viability and NO and O2•− production. Once the optimal cell stimulation protocol had been established, attention was focused on the separation and detection conditions. Our group previously reported the detection of NO using DAF-FM DA and ME-LIF with a run buffer consisting of 10 mM boric acid and 7.5 mM SDS at pH 9.2 [22]. A separate method was developed for O2•− that used a similar run buffer but with a lower SDS concentration (3.5 mM) [23]. To determine the optimal concentration of SDS needed for the simultaneous detection of NO and O2•−, several SDS concentrations (3.5, 5.5, and 7.5 mM) in combination with 10 mM boric acid at pH 9.2 were investigated as background electrolytes. It was found that 7.5 mM SDS provided the best resolution for DAF-FM T, 2-OH-MitoE+, and potential interferences. The previous ME-LIF methods mentioned above used a detection distance of 4.5 cm from the T intersection of the simple-T microchip, and initial experiments in these studies used the same detection distance. However, it was found that a distance of 3.5 cm provided a faster separation and better resolution, so it was used for all further studies. The migration times for DAF-FM T and 2-OH-MitoE+ under the different experimental conditions are reported in Table 1. The relative standard deviation of the migration times was below 5% for both DAF-FM T and 2-OH-MitoE+ within each sample type. However, there was a drift to longer migration times with the stimulated samples, which could be due to changes in sample matrix effects and fouling of the PDMS substrate.
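The response-factor correction described at the start of this section can be expressed in a few lines. One plausible reading, assumed here rather than stated explicitly in the text, is that the DAF-FM T peak area is divided by the 1.2 relative response factor before the NO/O2•− ratio is formed; the peak areas in the example are invented for illustration.

# Relative response factor: slope(DAF-FM T) / slope(2-OH-MitoE+) from the
# calibration curves, as reported in the text.
RESPONSE_FACTOR = 1.2

def corrected_no_o2_ratio(area_daf_fm_t, area_oh_mitoe):
    """NO/O2 ratio from peak areas, correcting for the higher per-mole
    signal of DAF-FM T (assumed interpretation of the published factor)."""
    return (area_daf_fm_t / RESPONSE_FACTOR) / area_oh_mitoe

# Invented peak areas, for illustration only
print(round(corrected_no_o2_ratio(area_daf_fm_t=4.5, area_oh_mitoe=1.2), 2))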
The final optimized method, including sample preparation, is described in detail in "Materials and methods". Although DAF-FM is very selective for NO, it has been shown that it can react with dehydroascorbate (DHA), giving DAF-FM DHA [29-31]. In earlier studies, the DAF-FM DHA peak was effectively separated from the NO-specific peak in cell lysates [22]. In the present studies, a peak for DAF-FM DHA was observed only in the electropherogram of a stimulated cell sample (Fig. 2c). Figure 2a shows a representative electropherogram for untreated macrophages. As can be seen from this electropherogram, the amount of NO and O2•− produced by native macrophages is very small because of the natural occurrence of endogenous intracellular scavenging molecules, such as SOD. Cytosolic and mitochondrial SOD regulate the intracellular concentration of O2•−, which can make O2•− difficult to detect in cell lysate samples [32]. Therefore, to reduce the degradation of intracellular O2•− by SOD, two different SOD inhibitors, 2-ME and DDC, were introduced into the cells along with MitoSOX Red, and the cells were incubated for 1 h before analysis (Fig. 1). Figure 2b shows a representative electropherogram obtained for unstimulated macrophages pretreated with 2-ME and DDC, with a corresponding increase in the 2-OH-MitoE+ peak relative to the DAF-FM T peak. An electropherogram of the cell lysate obtained for cells stimulated with LPS, IFN-γ, and then PMA in the presence of SOD inhibitors is shown in Fig. 2c. In this case, the signals for both the NO- and O2•−-specific products were increased, indicating the expected enhancement in the intracellular production of both species, with a greater amount of NO being produced. This yielded a higher NO/O2•− ratio than for the SOD inhibitors alone. A bar graph comparing the NO/O2•− ratios obtained under the different experimental conditions is provided in Fig. 2d. The ratios obtained for unstimulated cells pretreated with SOD inhibitors (0.60 ± 0.07), unstimulated cells without SOD inhibitors (1.08 ± 0.06), and stimulated macrophages (3.14 ± 0.13) show that, along with the peak areas, the NO/O2•− ratio changes as a function of the stimulation conditions. In these experiments, the number of viable cells was determined before analysis because the stimulation process can reduce the amount of cell division [33], increase cell differentiation [34], and also cause cell death [35]. Figure S2 shows the variation in cell numbers as a function of the different stimulation protocols used in the present study. Carnosine is an endogenous dipeptide that exhibits antioxidant properties and protects cells against free radicals. It has been clearly demonstrated that carnosine is able to scavenge RNOS [42]. Caruso et al. [43] recently reported that carnosine can catalyze the conversion of NO to nitrite, thereby causing a decrease in the apparent intracellular NO concentration. We have also shown that significant amounts of carnosine are taken up by macrophages when it is incorporated in the cell culture medium [44]. In these studies, the effect of Ca2+ on the NO/O2•− ratio was also investigated. Ca2+ is an intracellular second messenger involved in signal transduction and many pathological processes [45]. Ca2+, along with RNOS, participates in the regulation and integration of many cellular functions [46]. Increases in cytoplasmic Ca2+ concentration have been correlated with increased amounts of O2•− [46-48].
The effect of pretreating the cells with either carnosine or Ca2+ on the NO/O2•− ratio in macrophage cell lysates was investigated under native and proinflammatory conditions. Figure 4 depicts the change in the NO/O2•− ratio due to pretreatment of the cells with carnosine or Ca2+ in unstimulated (Fig. 4a) and stimulated (Fig. 4b) macrophages.

Conclusions

In this investigation, ME-LIF was used for the simultaneous detection of NO and O2•− using the fluorescent probes DAF-FM DA and MitoSOX Red. This method was also used to study the variations of the NO/O2•− ratio in RAW 264.7 macrophage cell lysates under physiological and proinflammatory conditions. Additionally, the effect of the natural antioxidant carnosine and the second messenger Ca2+ in modulating this ratio was investigated. These results highlight the roles played by different stimulation protocols in influencing the release and bioavailability of NO with respect to O2•−. It is well known that NO and O2•− production is related to many nitrosative and oxidative stress-driven disorders; thus, the development of new cell stimulation protocols, along with the application of this method in single-cell analysis formats, will provide new perspectives that can be used for a better understanding of the role of RNOS in neurodegenerative and cardiovascular disease.

Manjula Wijesinghe is a fourth-year Ph.D. candidate in the Department of Chemistry at the University of Kansas, USA. His research concerns the development of a unique detection system for microchip electrophoresis based on electrochemically generated fluorescence.

Susan Lunte is the Ralph N. Adams Professor of Chemistry and Pharmaceutical Chemistry at the University of Kansas, USA. She has more than 35 years of experience in bioanalytical chemistry and pharmaceutical analysis. Her research interests include the development of separation-based methods for the determination of biological markers of oxidative stress and neurodegenerative disease. Most recently her group has focused on the development of microfluidic devices for single-cell analysis and online monitoring of neurotransmitters using microdialysis sampling coupled with microchip electrophoresis.
Desialylated Platelet Clearance in the Liver is a Novel Mechanism of Systemic Immunosuppression

Platelets are small, versatile blood cells that are critical for hemostasis/thrombosis. Local platelet accumulation is a known contributor to proinflammation in various disease states. However, the anti-inflammatory/immunosuppressive potential of platelets has been poorly explored. Here, we uncovered, unexpectedly, that desialylated platelets (dPLTs) down-regulated immune responses against both platelet-associated and platelet-independent antigen challenges. Utilizing multispectral photoacoustic tomography, we tracked dPLT trafficking to the gut vasculature and an exclusive Kupffer cell-mediated dPLT clearance in the liver, a process that we identified to be synergistically dependent on platelet glycoprotein Ibα and the hepatic Ashwell-Morell receptor. Mechanistically, Kupffer cell clearance of dPLT potentiated a systemic immunosuppressive state with increased anti-inflammatory cytokines and circulating CD4+ regulatory T cells, abolishable by Kupffer cell depletion. Last, in a clinically relevant model of hemophilia A, presensitization with dPLT attenuated anti-factor VIII antibody production after factor VIII (FVIII) infusion. As platelet desialylation commonly occurs daily in aged and activated platelets, these findings open new avenues toward understanding immune homeostasis and potentiate the therapeutic potential of dPLT and engineered dPLT transfusions in controlling autoimmune and alloimmune diseases.

Introduction

Platelets are small blood cells generated from megakaryocytes, are abundant in the blood, and are well-known essential contributors to hemostasis and thrombosis [1-5]. Platelets also contain a plethora of immune modulators and receptors, giving way to a growing appreciation for them as critical sentinels of immunity [6-8]. It has been well reported that platelets act as integral players in various stages of innate immunity and can help shape adaptive immune responses [6,8,9]. Acting through either direct contact with leukocytes [10] or secreted factors including microparticles [11], platelets contribute to the propagation of proinflammation in various diseases [12-14], cementing their status as an essential link within the thrombo-immuno-axis [15]. Despite their well-recognized role in proinflammation, the anti-inflammatory/immunosuppressive role of platelets is not well explored.
Platelets constantly undergo removal from the bloodstream following normal senescence, activation, or in disease states. The most common signals for active platelet clearance include the exposure or binding of key molecules such as phosphatidylserine or P-selectin, or, in a pathological context, antibody binding [16-19]. This leads to canonical clearance pathways within the reticuloendothelial system of the spleen. One unique mechanism by which platelets regulate self-clearance is the endogenous modification of surface glycans [20]. In particular, desialylation, the removal of platelet terminal sialic acid residues, leads to increased trafficking and clearance of desialylated platelets (dPLTs) to the liver [21,22]. This has recently been shown to be a dominant process during platelet aging [23]. We and others have previously demonstrated that premature desialylation can also be induced by activating signals from antibody, CD8+ cytotoxic T lymphocyte or microbial binding [22-27]. Antibody- and cytotoxic T lymphocyte-induced platelet desialylation, and the subsequent clearance in the liver, may play an important role in immune thrombocytopenia, which can be ameliorated by sialidase inhibitors such as oseltamivir [22,23,27-29]. However, the receptors that dictate platelet targeting to the liver remain poorly defined. More importantly, the systemic implications of dPLT clearance within the unique immunotolerant niche of the liver have never been explored.

Here, we observe that dPLT clearance in the liver stimulates an antigen-independent systemic immunosuppressive response whereby a secondary challenge with platelets or sheep red blood cells (sRBCs) results in lower antibody generation. We attribute this phenomenon to dPLT clearance via hepatic Kupffer cells and identify that this process is synergistically mediated by both platelet glycoprotein Ibα (GPIbα) and the Ashwell-Morell receptor (AMR). Immunological sequelae of Kupffer cell dPLT uptake included increased anti-inflammatory cytokine production and circulating regulatory T cells (Tregs), which were attenuated following Kupffer cell depletion. To establish translational potential, we further tested a clinically relevant murine model of hemophilia A [factor VIII (FVIII)-deficient, FVIII-null] and found that presensitization with dPLT significantly attenuated anti-FVIII inhibitory antibody (inhibitor) generation in FVIII-null mice following recombinant human FVIII (rhFVIII) transfusion. These findings introduce a novel anti-inflammatory role of platelets that contrasts with the prevailing proinflammatory view. They elucidate a previously unknown mechanism by which platelets contribute to the maintenance of immune quiescence, and they open new therapeutic avenues for the utilization of dPLTs to control alloimmune and autoimmune diseases [30-32].
S3).This excludes that the observed lower antibody response was due to the generation of a distinct antibody repertoire to novel epitopes following desialylation. To assess whether dPLTs were merely less immunogenic or could immunomodulate a response, we next cotransfused dPLT with same amounts of WT platelets and found that the presence of dPLT decreased antibody response dose dependently (Fig. 1B).To test whether the immunosuppressive effect was specific to platelet antigens or is broadly immune dampening, we presensitized WT mice with a desialylated or WT platelet transfusion, followed by a nonplatelet antigen challenge with sRBCs.We observed a slight, albeit significant decrease in anti-sRBC titers in mice pretransfused with dPLT (Fig. 1C).These data indicate that clearance of dPLT in vivo may lead to an antiinflammatory and immunosuppressive state that dampens a cocurrent immune response. dPLTs target to the gut vasculature at early time points and are exclusively cleared in the liver The observed immunosuppressive effects following dPLTs transfusion may be attributable to the local clearance mechanisms and immunological niche responses.Although we and others have previously demonstrated that dPLTs are cleared in the liver [8,22,[39][40][41], other contributory organs have not been well explored, and we thus investigated the biodistribution of dPLT clearance.dPLTs were fluorescently labeled ex vivo to allow for in vivo tracking via flow cytometry.Following intravenous transfusion, venous blood sampling at various time points indicated dPLTs exhibited rapid clearance kinetics similar to what was previously reported [22,42] (Fig. 2A).Unexpectedly, this process was completely independent of the spleen as we did not observe a significant rescue in dPLT clearance in splenectomized mice (Fig. 2B). To determine the organs of acute dPLT clearance, we developed a novel technique utilizing photoacoustic tomography (PAT) to noninvasively assess dPLT real-time clearance mechanisms in vivo, circumventing potential post-mortem artifacts of nonspecific blood pooling.PAT photonically excites tissue with wavelengths in the near-infrared red region (>700 nm), allowing for deep tissue penetration, minimal photon light scattering, and detection of the signal by an ultrasound transducer with ultrasound resolution [43].Multispectral unmixing, known as multispectral optoacoustic tomography (MSOT), further allows for tracking target biomolecules such as transfused platelets when coupled to specialized contrast agents with distinct absorption spectra [44].We utilized a wellcharacterized Food and Drug Administration-approved indocyanine green (ICG), commonly used as a contrast agent in humans [45], to couple to platelets.To the best of our knowledge, we are the first to utilize PAT to track platelet biodistribution in real time, in vivo.Confocal microscopy revealed platelet uptake of ICG when incubated together (Fig. 2C).Spectrometric measurement of absorbance of ICG-labeled platelets every 10 nm from 600 to 900 nm showed maximum absorbance between 800 and 810 nm, consistent with previous published spectral values of ICG coupled to biomolecules (Fig. S4) [46].Preliminary assessment in MSOT with phantom agar molds was used to test limits of detection and signal strength in relation to the concentration of dPLT.We found at low concentrations of 10 [6] ICG-labeled platelets, the signal was ~3× that of unlabeled platelets at an ICG-maximum absorption wavelength of 810 nm (Fig. 
We next transitioned to in vivo MSOT scans of mice intravenously transfused with ICG-labeled WT platelets or dPLTs. We observed a rapid increase in signal accumulation in the liver and gut vasculature for ICG-labeled dPLT, with a gradual decrease in signal in the spleen over the course of 60 min, whereas control ICG-labeled platelets continued to circulate throughout the mice (Fig. 2E). These kinetics are consistent with the rapid disappearance of dPLT from circulation observed by flow cytometry (Fig. 2A). To the best of our knowledge, dPLT localization to the gut has not been previously reported. To confirm it, we further utilized intravital microscopy [47-49] and observed stable dPLT adherence to the mesenteric vasculature (Fig. S5). Furthermore, small amounts of fluorescently labeled dPLT were detectable via flow cytometry and tissue immunofluorescence of the small intestine, suggesting potential translocation of dPLT to the underlying gut-associated lymphoid tissue (Figs. S6 and S7).

We further harvested different organs to track platelet accumulation by flow cytometry at various time points. We observed significant accumulation of dPLT only in the liver at early time points (<2 h) (Fig. 2F), but not at late time points (>12 h), indicating dPLT clearance. Transfused control platelets were predominantly localized to the spleen and lung at both the early and late (>12 h) time points, which, concomitant with their sustained presence in circulation, suggests nonspecific pooling during organ collection. It is noteworthy that there was scant accumulation of dPLT in the lung (Fig. 2G) despite its anatomical position as the first large vascular bed following intravenous tail-vein transfusion. This indicates that dPLTs are actively sequestered in the liver, rather than passively adhering within the vasculature.

Platelet hepatic targeting requires synergistic action of platelet GPIbα and hepatic AMR

The mechanisms that exclusively target dPLT to the liver are not well understood. Previously, the hepatic AMR was implicated as a critical receptor in the sequestration and uptake of dPLT in the liver [21,22]. We have previously demonstrated that platelet GPIbα is required for platelet-mediated hepatic thrombopoietin production [50,51], suggesting that both receptors actively regulate hepatic dPLT targeting. To elucidate the contributory role of each receptor, we assessed dPLT clearance in AMR-deficient (ASGR2−/−) mice and with GPIbα−/− platelets. We found that in ASGR2−/− mice, transfusion of CellTracker-labeled WT dPLT resulted in a minimal but significant rescue of dPLT clearance at 2 h after injection (mean, 4.6 ± 1.8 versus 0.3 ± 0.1) (Fig. 3A). Similarly, when CellTracker-labeled GPIbα−/− dPLTs were transfused into WT recipients, the rescue of dPLT clearance from circulation was greater than in ASGR2−/− mice and significantly different from WT dPLT transfusion (mean, 8.0 ± 8.7 versus 0.3 ± 0.1). Interestingly, the lack of both AMR and GPIbα (GPIbα−/− dPLTs into ASGR2−/− mice) resulted in a synergistic effect, with clearance of dPLT plateauing at the 15-min mark (mean, 27.0 ± 0.8). Despite the significant rescue, the majority of GPIbα−/− dPLT (~75%) remained cleared from circulation (Fig. 3A), suggesting that other mechanisms might be involved, although the roles of αIIbβ3 and/or other antigens in this process are still unclear.
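Clearance curves such as those in Figs. 2A and 3A are commonly summarized by fitting a single-exponential survival model, from which a clearance rate and half-life can be derived. The sketch below uses invented time points and percent-remaining values purely for illustration; it is not the analysis used in this study.

import numpy as np
from scipy.optimize import curve_fit

def survival(t, span, k, plateau):
    """Percent of transfused platelets remaining: exponential decay to a plateau."""
    return plateau + span * np.exp(-k * t)

# Invented data: minutes after transfusion vs. % labeled dPLT remaining
t = np.array([2, 5, 15, 30, 60, 120], dtype=float)
pct = np.array([80, 55, 28, 12, 4, 1], dtype=float)

(span, k, plateau), _ = curve_fit(survival, t, pct, p0=(100.0, 0.1, 0.0))
print(f"clearance rate k = {k:.3f} /min, half-life ~ {np.log(2) / k:.1f} min")

A nonzero fitted plateau would correspond to the residual circulating fraction seen, for example, when both GPIbα and AMR are absent.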
suggesting that other mechanisms might be involved, although the roles of αIIbβ3 and/or other antigens in this process are still unclear. Further investigation into sequestering organs revealed predominantly hepatic localization in the absence of one of the receptors, although GPIbα−/− platelets exhibited significant targeting to the lung at the 2-h mark (Fig. 3B). This suggests that neither AMR nor GPIbα alone is sufficient for dPLT hepatic targeting. However, when both receptors were absent (GPIbα−/− dPLT into ASGR2−/− mice), although the kinetics of platelet clearance were similar, the organ distribution was significantly altered. Specific targeting to the liver was abolished, and diffuse GPIbα−/− dPLT localization in the ASGR2−/− spleen and lung represented ~30% of dPLT in the liver (mean, 35.4 ± 7.6 and 27.2 ± 8.8, respectively) (Fig. 3C).

Immunofluorescence further confirmed significant GPIbα−/− dPLT localization in the ASGR2−/− spleen and lung (Fig. 3D). Moreover, colocalization with Kupffer cells was decreased in the absence of platelet GPIbα, but not of AMR alone (Fig. 3E). These data suggest that platelet GPIbα acts synergistically with the AMR to mediate dPLT targeting to the liver, with GPIbα mediating the interaction with Kupffer cells.

Kupffer cells are required for dPLT clearance and are functionally distinct from splenic macrophages

As we observed above that dPLTs were predominantly associated with macrophages in the liver, we hypothesized that Kupffer cells may be the central phagocytic cell mediating dPLT clearance. To test this, we depleted macrophages with clodronate liposomes (Fig. S8), followed by injection of neuraminidase (NEU) to induce platelet desialylation in vivo. We found that thrombocytopenia was almost completely rescued in macrophage-depleted mice (Fig. 4A), suggesting almost complete attenuation of dPLT clearance.

We next transitioned to an in vitro culture system to assess the phagocytic potential of Kupffer cells in dPLT uptake. Utilizing single-cell suspensions from harvested livers, we measured via flow cytometry the phagocytosis of fluorescently labeled dPLTs by CD11b-lo F4/80+ Kupffer cells, which express little to low levels of CD11b compared to monocyte-derived macrophages [52,53]. As expected, we observed significant phagocytosis of dPLT by Kupffer cells compared to non-dPLTs (Fig. 4B).

Interestingly, when antibody-opsonized platelets or splenic F4/80+ macrophages were used as a control, we observed that splenic macrophages were more proficient at engulfing antibody-opsonized platelets compared to dPLTs, with the inverse being true for liver macrophages (Fig. 4B). Similarly, both the murine splenic cell line RAW264.7 and the human monocytic THP-1 cell line exhibited preferential uptake of antibody-opsonized platelets compared with dPLTs (Fig. 4C and D). Thus, we are the first to show a functional bias between the 2 resident macrophage populations (i.e., liver versus spleen) for platelet clearance.

As antibody-opsonized platelets are predominantly cleared in the spleen, consistent with the preferential uptake by splenic macrophages in our in vitro culture system, we were curious as to whether antibody opsonization of dPLT would be sufficient to abrogate hepatic targeting. Interestingly, when we injected CellTracker-labeled, antibody-opsonized dPLT, it retained liver targeting (Fig. 4E). However, clearance was shifted to CD11b+ monocyte-derived macrophages and not Kupffer cells (Fig. 4F).
These data suggest that Kupffer cells are specialized to clear dPLT, while monocyte-derived M1-type macrophages are functionally biased toward antibody-mediated clearance.

Increased interleukin-10 and transforming growth factor-β production in the liver following Kupffer cell uptake of dPLTs

Kupffer cells have been reported to contribute to the maintenance of the immunotolerant niche of the liver [54,55]. Since we had established that dPLTs are predominantly cleared by Kupffer cells, we next investigated the downstream immunological responses. In vitro coculture assays of dPLT with primary Kupffer cells revealed a dose-dependent increase in Kupffer cell production of interleukin-10 (IL-10), peaking at 8 h (Fig. 5A and B). Other Kupffer cell-derived cytokines showed small but nonsignificant increases (Fig. S9). We also observed a transient and variable increase in Kupffer cell-associated transforming growth factor-β (TGF-β) at 6 h following dPLT coculture (Fig. 5A), which is likely derived from increased dPLT binding. To assess whether dPLT induction of increased IL-10 production polarizes Kupffer cells to a more immunosuppressive state, we added lipopolysaccharide (LPS) to Kupffer cell cultures following 24 h of coincubation with dPLT. We observed decreased production of proinflammatory tumor necrosis factor-α in Kupffer cells preincubated with dPLT (Fig. 5C), which suggests that dPLT promotes a shift toward anti-inflammatory IL-10 production in Kupffer cells.

We next investigated the direct effect that platelet depletion would have on circulating and liver TGF-β levels. Anti-αIIb monoclonal antibody and NEU were injected to induce predominantly splenic and hepatic clearance of platelets, respectively. Thrombocytopenia was confirmed to be similar for both injections (Fig. S10). On day 1 after injection, when platelet counts are at a nadir, serum TGF-β was found to be drastically decreased in both antibody- and NEU-injected mice (Fig. 5D). This is not unexpected, as platelets in both cases are cleared from circulation, and it underscores the substantial contribution of platelets to circulating TGF-β levels. However, TGF-β levels in liver tissue were found to be significantly increased in NEU-injected mice but significantly decreased in antibody-injected mice (Fig. 5E), as dPLTs accumulate in the liver, whereas antibody-opsonized platelets are routed to the spleen. Thus, dPLT targeting to the liver significantly contributes to local levels of the potent immunosuppressive cytokine TGF-β, a contribution that is abrogated when platelet clearance is diverted to the spleen.

dPLT increases circulating and functional CD4+ T regs

We next investigated whether the increased production of immunosuppressive cytokines in the liver upon dPLT clearance can drive a systemic response. We found that transfusion of dPLTs, or in vivo platelet desialylation with NEU, led to increased CD4+CD25+FOXP3+ T regs in blood circulation 3 days after injection (Fig. 6A and B). Transfusion of non-dPLTs also significantly induced circulating CD4+CD25+FOXP3+ T regs, albeit at a later time point (~6 days). This could potentially be attributable to platelet aging, which increases platelet desialylation and hepatic clearance. We next assessed whether the increased CD4+ T regs possessed functional significance, specifically contributing to the lower immune response observed above (Fig. 1).
We cell-sorted CD4+CD25+ splenic T cells following transfusion with dPLT or non-dPLT. An in vitro T cell proliferation assay revealed an enhanced suppressive capacity against CD4+ T cell proliferation for CD4+ T regs isolated from dPLT-transfused compared to platelet-transfused mice (Fig. 6C). In vivo transfusion of CD4+ T regs from dPLT-sensitized mice, followed by platelet challenge, led to a lower antibody response (Fig. 6D). Last, to assess the direct link between Kupffer cell clearance of dPLT and increased functional CD4+ T reg generation, we depleted Kupffer cells with clodronate liposomes and measured circulating peripheral blood mononuclear cells (PBMCs). Previous reports demonstrate that Kupffer cells do not significantly repopulate within the duration of our experiment [56]. We found decreased circulating CD4+ T regs at day 4 after injection (Fig. 6E), which could not be rescued by injection of NEU. This demonstrates that the Kupffer cell contribution to CD4+ T reg homeostasis is, in part, mediated by clearance of dPLT in the liver. Thus, through both anti-inflammatory/immunosuppressive cytokines and CD4+ T regs, dPLTs play a previously unexplored but important role in the maintenance of immune quiescence.

dPLT infusion attenuates antibody generation in clinically relevant transfusion models

Transfusion-mediated immunogenic antibody reactions can lead to life-threatening adverse events [57]. Development of antibodies against transfused blood products negates their therapeutic use, requiring challenging treatment alternatives [58,59]. To investigate the translational potential of dPLT-mediated immunosuppression in transfusion medicine, we first tested whether desialylated human platelets possess immunosuppressive effects against an independent secondary challenge with sRBCs. We found that high-dose presensitization with human dPLT significantly decreased anti-sRBC titers following challenge with one dose of sRBC (Fig. 7A).

We next honed in on the disease hemophilia A, a common congenital bleeding disorder due to the absence of FVIII. Replacement FVIII therapy frequently leads to detrimental anti-FVIII inhibitory antibody generation [58]. We utilized a murine model of hemophilia A (FVIII null), which is highly prone to developing anti-FVIII immune responses upon rhFVIII infusion, recapitulating the human disease [60-65]. We previously demonstrated that anti-FVIII inhibitory antibodies could be prevented in FVIII null mice when transfused FVIII was carried by FVIII-expressing transgenic platelets (2bF8 Tg) [60,62]. However, transfusion of 2bF8 Tg platelets failed to protect FVIII null mice against a subsequent anti-FVIII response when challenged with rhFVIII [60]. To investigate whether desialylated 2bF8 Tg platelets could attenuate the antibody response following secondary challenges with rhFVIII, we adopted a similar sensitization strategy (Fig. 7B and C). None of the mice developed anti-FVIII inhibitors after 4 doses of dPLT infusion (Fig. 7D). Importantly, anti-FVIII inhibitory titers in the 2bF8 Tg dPLT-presensitized group were significantly lower than in the control group without 2bF8 Tg dPLT (130 ± 154 Bethesda units (BU)/ml versus 343 ± 184 BU/ml; Fig. 7D). Although 4 of 9 presensitized FVIII null mice developed high titers (>100 BU/ml) of anti-FVIII inhibitors following rhFVIII immunization (Fig. 7E),
of the remaining 5 mice, 3 did not develop anti-FVIII inhibitors, and the other 2 had low titers (2.9 and 13 BU/ml, respectively). In contrast, all control FVIII null mice without dPLT presensitization developed anti-FVIII inhibitors when immunized with the same protocol, and 93% of these animals developed greater than 100 BU/ml of anti-FVIII inhibitors, significantly more than in the 2bF8 Tg dPLT-presensitized group (P < 0.01) (Fig. 7E). These results demonstrate that dPLT presensitization can attenuate alloimmune or isoimmune responses during blood transfusions, with potential therapeutic value.

Discussion

The liver is constantly exposed to environmental neoantigens and to microbial and food antigens from the intestine, necessitating an immunotolerant milieu to prevent hyperimmune activation [55,66]. Under normal conditions, antigens presented within this niche generate a blunted immune response locally and have been shown to induce tolerance systemically [67]. We identify near-exclusive clearance of dPLT in the liver, which, given the large vascular beds within the lung and spleen, indicates active platelet capture and not passive endothelial adherence. Previously, it was widely accepted that dPLT clearance mediated by hepatocytes via the AMR predominates [21]. However, the restrictive size limitations of the endothelial fenestrae and the uptake capacity of hepatocytes suggest other, more likely candidates [68,69]. Consistent with recent emerging reports [39,41], we identify macrophages as the primary cells mediating dPLT clearance, as evidenced by the rescue of thrombocytopenia upon clodronate depletion of macrophages, but not upon AMR deficiency. As clodronate liposomes deplete not only Kupffer cells but also splenic macrophages, we cannot exclude a phagocytic contribution from splenic macrophages. However, we did not observe a significant difference in the clearance kinetics of dPLT in splenectomized mice, suggesting a minor role. Furthermore, in our in vitro phagocytic assays we demonstrate, to the best of our knowledge for the first time, that splenic and monocyte-derived macrophages are less proficient at dPLT clearance than Kupffer cells.

The lack of in vivo dPLT targeting to splenic macrophages is intriguing. Marginal zone and red pulp macrophages in the spleen come into direct contact with dPLT in circulation. However, it is unknown whether they express the lectin receptors required to recognize the exposed underlying desialylated galactose residues [70,71]. In addition, expression of the canonical inhibitory phagocytic receptor signal regulatory protein α (SIRPα), which is not expressed on Kupffer cells, may cumulatively contribute to the observed functional dichotomy and the net decrease in dPLT uptake by splenic macrophages [72,73].
We did not observe a significant contribution of either the AMR or platelet GPIbα alone to hepatic targeting, as most dPLT maintained hepatic clearance in Ashwell-Morell-deficient mice or with GPIbα-deficient platelets. Interestingly, we did observe increased lung targeting of desialylated GPIbα-deficient platelets. Given that lung macrophages are located within the subendothelial space, the increased sequestration may be due to increased interactions with and tethering to the endothelium following desialylation in the absence of GPIbα, rather than to macrophage capture [74]. It has been reported that GPIbα blocking increases the seeding and lung metastasis of cancer cells; whether this is due to increased platelet targeting to the lung is unknown, and further studies are thus warranted to investigate the exact role of GPIbα in lung recruitment of platelets [75].

Recent reports have identified some putative ligand-receptor partners for Kupffer cell-mediated dPLT uptake [39,41]. We observed a synergistic requirement of GPIbα and the AMR for hepatic targeting and macrophage association. In the absence of both, the sequestration of dPLT was diffuse, with large amounts found in both the spleen and the liver, where associations with Kupffer cells significantly decreased. We hypothesize that GPIbα contributes to dPLT adherence and tethering in the liver and to Kupffer cell-mediated uptake. This phenotype is similar to that in a recent report of synergistic blocking of the hepatic AMR and the Kupffer cell macrophage galactose-type lectin receptor (MGL) [39]. Thus, it is likely that the ligand for MGL is platelet GPIbα, or that MGL interacts indirectly with GPIbα via von Willebrand factor (VWF), although further investigation is warranted.

Interestingly, we observed that antibody opsonization of dPLT did not alter the predominant hepatic targeting but did alter the macrophage subtype, switching from immunosuppressive M2 Kupffer cells to proinflammatory monocyte-derived M1 macrophages [17]. Phenotypic heterogeneity of macrophages within the liver has been reported to exist along a spatial gradient, with a highly immunosuppressive polarized population localized closest to the periportal triad [76,77]. Whether antibody-opsonized dPLTs are cleared by resident M1-like Kupffer cells or via monocytes recruited from the blood circulation is unknown. It has previously been reported that increased numbers and activity of M1 proinflammatory macrophages within the liver abrogate its immunotolerant milieu; whether immune-mediated thrombocytopenias skew the liver toward proinflammation thus deserves further investigation [78].
Kupffer cells have been reported to contribute to the induction and maintenance of systemic tolerance [54]. At baseline, Kupffer cell antigen uptake produces the anti-inflammatory cytokines IL-10 [78] and TGF-β [79]. We demonstrate that uptake of dPLT further increases these anti-inflammatory cytokines and may contribute to the maintenance of the circulating CD4+ T regs we observed. This may underlie the correction and elevation of CD4+ T reg levels seen in immune thrombocytopenic patients who experience normalization of platelet numbers [80,81]. Similarly, we observed increased numbers and function of CD4+ T regs following increases in circulating dPLTs. Notably, non-NEU-treated WT platelet transfusion also induced increased CD4+ T reg numbers; we attribute this to the eventual senescence and late-stage desialylation of both exogenous and endogenous platelets and their subsequent hepatic clearance [39], which may contribute to a general immunosuppressive state restricting autoimmunity. Clinically, a general correlation between platelet mass and increased immunosuppression has long been observed, for instance in transfusion-mediated immune suppression and the increased incidence of infections and cancer following allogeneic platelet transfusions [82]. More recently, with thrombopoietin (TPO) mimetic therapy in immune thrombocytopenic patients, ~30% of patients experienced long-term remission following tapering of the drug, which has been linked to decreased antibody generation and correction of immune dysregulation [80,83,84].

The mechanism of action of the increased CD4+ T regs is currently unknown. However, it is at least partially contributory to the suppression of the adaptive immune response that we observed upon antigenic challenge, as seen in our adoptive transfer data. The immunosuppression may be antigen nonspecific, as we observed here and in our earlier study with other T regs [85], although the suppressive effect was more robust with platelet-associated antigens. Further studies are warranted to elucidate the limits of the immunosuppression, including the duration of the effector-suppressive function.

The findings here indicate that platelet-associated antigens can induce an immunosuppressive state through Kupffer cells, which, as we demonstrate, carries through to relevant clinical transfusion models. This also introduces the therapeutic potential of dPLT transfusions to promote immunosuppression, or of dPLT as delivery vehicles for antigens requiring cover from immune surveillance, such as replacement FVIII in hemophilia A. Furthermore, in platelet transfusions, dPLT or its derivatives, previously not utilized because of their decreased circulating lifespan, may now have therapeutic utility.

As platelets are known scavengers of the circulatory system, it is conceivable that clearance of senescent platelets requires an immunotolerant response leading to clearance in the liver. Conversely, increased hepatic targeting of dPLT carrying detrimental antigens, such as cancer aggregates or viral particles, may protect these from immune targeting. Further studies are required to explore the full implications of this novel immunosuppressive pathway in different diseases [86-90].
Blood collection and platelet isolation

Procedures were approved by the Research Ethics Board of St Michael's Hospital (Toronto, ON, Canada) and conducted as previously described [22]. Venous blood was obtained from healthy volunteers by venipuncture into 3.2% trisodium citrate. Platelet-rich plasma (PRP) was prepared by centrifugation (10 min, 300g, no brake, 22 °C). Platelets were isolated from PRP, washed (15 min, 1,050g, no brake, 22 °C, with prostaglandin I2 at 10 ng ml−1), and resuspended in buffer B [10 mM Hepes, 140 mM NaCl, 3 mM KCl, 0.5 mM MgCl2, 10 mM glucose, and 0.5 mM NaHCO3 (pH 7.4)]. Mice were bled via retro-orbital bleeding into anticoagulant citrate dextrose. PRP was obtained by centrifugation (250g, 7 min, no brake). Washed platelets were then obtained from PRP by centrifugation (800g, 8 min, no brake) and resuspended in buffer B.

Immunization and antibody titration

Indicated mice were immunized with the indicated platelets or washed sRBCs (Colorado Serum Company) via tail vein (or as otherwise indicated) at the indicated dose, weekly for the indicated number of weeks. Sera were then collected via saphenous vein, serially diluted (1:2) in phosphate-buffered saline (PBS), and incubated with 2 × 10^6 platelets of the genetic background of the most recent immunization for 1 h at room temperature. Samples were then washed (800g, 8 min, no brake), incubated with a 1:1,000 dilution of F(ab′)2 goat anti-mouse IgG (H+L) Alexa Fluor 647 secondary antibody (Thermo Fisher Scientific) for 30 min at room temperature, and read by flow cytometry.

Ex vivo fluorescent labeling and desialylation of platelets

5-Chloromethylfluorescein diacetate (5 μM) or CellTracker Far Red (Thermo Fisher Scientific) was added to washed platelets at 2 × 10^8/ml in Pipes buffer to fluorescently label them. NEU (2.5 mU; EMD4) was added where indicated to generate dPLTs. The platelet preparation was then incubated in a 37 °C water bath for 45 min, followed by a 5× dilution with buffer B, and washed 2× (800g, 8 min, no brake) in the presence of 10 μM prostacyclin (Cayman Chemical). Desialylation was checked with a 1:1,000 dilution of fluorescein-labeled Ricinus Communis Agglutinin I (Vector Laboratories). Samples were read by flow cytometry on a BD LSRFortessa X-20.

In vivo platelet circulation studies

Platelet membranes were fluorescently labeled and, where indicated, desialylated with NEU as described earlier. A total of 10^8 labeled platelets were transfused via tail vein into syngeneic mice. Mice were bled at 1, 15, and 30 min and 16 h after transfusion via saphenous bleed into PBS-EDTA. PRP was analyzed by flow cytometry to assess the percentage of dye+ platelets. To induce in vivo platelet desialylation, mice were injected intraperitoneally with NEU (2.5 mU/g) or an in-house-generated mouse anti-mouse αIIb monoclonal antibody (0.05 μg/g) [22]. Platelet counts on the indicated days were taken as described above. In some cases, Kupffer cell depletion was performed 2 days prior to NEU injection by intravenous injection of 0.01 ml/g of body weight of clodronate liposomes or control liposomes (Liposoma).

ICG labeling of platelets

ICG (3 μg/ml; Sigma-Aldrich) was incubated with 10^8 platelets in PBS-EDTA on a rotator for 1 h to label platelets. Labeled platelets were washed 2× (800g, 8 min, no brake) and resuspended in buffer B. Successful labeling of platelets was checked on a Molecular Devices M5e multimode plate reader by measuring absorbance at wavelengths spanning 600 to 900 nm. Data were analyzed with SpectraMax software.
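The ICG labeling check described above reduces to a simple computation: scan absorbance across 600 to 900 nm and verify that the maximum falls in the expected 800- to 810-nm ICG window. A minimal sketch, with hypothetical plate-reader values rather than real measurements:

import numpy as np

wavelengths = np.arange(600, 901, 10)  # nm, 10-nm steps as in the plate-reader scan
# Synthetic absorbance curve peaking near 805 nm; stand-in for instrument output.
absorbance = 0.05 + 0.9 * np.exp(-((wavelengths - 805) ** 2) / (2 * 30 ** 2))

peak_nm = wavelengths[np.argmax(absorbance)]
print(f"Peak absorbance at {peak_nm} nm")
assert 800 <= peak_nm <= 810, "labeling check failed: peak outside expected ICG window"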
Multispectral optoacoustic imaging

Agar phantoms with ICG-labeled platelets were scanned as previously described [96]. Prior to imaging, the torso regions of mice were gently shaved with a hair trimmer (ChroMini, WAHL, IL) under general isoflurane anesthesia. Hair removal cream (Nair, Church & Dwight, ON, Canada) was applied for 30 s to remove remaining hair, and a thin layer of clear ultrasound gel (Aquasonic Clear, Parker Laboratories, NJ), warmed using a gel warmer (Thermasonic Gel Warmer, Parker Laboratories), was applied. Depilated mice under general anesthesia were placed in the MSOT machine (MSOT 128, iThera Medical GmbH, Munich, Germany) with a tail-vein catheter, to allow for baseline scans prior to platelet transfusion without disturbing body placement. Washed dPLTs labeled with ICG (3 μg/ml), or control ICG-labeled platelets, were transfused after acquiring background scans. Scans were acquired at wavelengths between 680 and 900 nm every 5 min at multiple slices for 1 h following platelet transfusion. Data were collected and analyzed using the native software (ViewMSOT, iThera Medical GmbH) in conjunction with iThera support.

Immunohistochemistry

Washed dPLTs or control-labeled platelets were injected intravenously. Mice were bled and sacrificed at 2 h after transfusion. The liver was perfused with liver perfusion medium (Thermo Fisher Scientific) through the hepatic portal vein, and the lungs were perfused with optimal cutting temperature compound (Thermo Fisher Scientific) through the trachea before harvest and snap-freezing in liquid nitrogen. Frozen tissue sections (5 μm) on slides were fixed with 4% paraformaldehyde (PFA) for 15 min. Sections were blocked with PBS-Tween 20 + 2% BSA and 5% goat serum for 20 min at room temperature, before staining overnight at 4 °C with anti-F4/80 (5 μg/ml; clone BM8) and anti-αIIb (in-house-generated mouse anti-mouse, clone 5C4). Secondary staining was performed with anti-mouse 488 and anti-rat Cy3 at a 1:500 dilution for 2 h at room temperature, followed by 4′,6-diamidino-2-phenylindole (DAPI) staining at a 1:12,000 dilution for 5 min. Slides were mounted with DAKO mounting medium overnight, imaged on a Zeiss Axioscan Z1, and analyzed with HALO (Indica Labs) software.
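The "% area positive" readout produced by HALO in the analyses above can be approximated generically as thresholding the platelet channel and counting the fraction of positive pixels. A minimal sketch using a synthetic image and an arbitrary, uncalibrated intensity cutoff (not HALO's actual algorithm):

import numpy as np

rng = np.random.default_rng(1)
channel = rng.random((512, 512))   # stand-in for the CellTracker fluorescence channel
channel[100:140, 200:260] += 2.0   # a synthetic "platelet-positive" patch

threshold = 1.0                    # hypothetical intensity cutoff
positive = channel > threshold
percent_area = 100.0 * positive.mean()
print(f"CellTracker-positive area: {percent_area:.2f}% of tissue section")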
In vitro phagocytosis assay

Spleens and livers from anesthetized mice were harvested following liver perfusion with liver perfusion medium through cannulation of the portal vein and mashed through a 70-μm cell strainer to generate single-cell suspensions. Red blood cell lysis was performed with ACK buffer. Liver parenchymal cells were precleared by centrifugation at 50g for 3 min. The remaining cells were gently layered on 15 ml of Lymphoprep (STEMCELL Technologies) and subjected to density centrifugation at 800g for 25 min. The lymphocyte interphase was washed 3× before counting. Liver cells and splenocytes were plated at 10^6 cells per well in 12-well plates. THP-1 cells were plated at 10^6 cells per well in 12-well plates and differentiated overnight with phorbol 12-myristate 13-acetate (50 ng/ml). A total of 5 × 10^5 RAW264.7 cells were plated in 12-well plates on coverslips in RPMI 1640 with 10% fetal bovine serum and 1% penicillin/streptomycin for 48 h in a humidified 37 °C, 5% CO2 chamber. Desialylated labeled platelets, control-labeled platelets, or labeled platelets incubated with antiplatelet antibody 9D2 or 5C4 (2 μg/10^8 platelets) were added to the wells and incubated for 2 h. Wells were washed 3× with Pipes buffer and incubated with a 1:1,000 dilution of the fixable viability dye Zombie Yellow. Following this, cells were gently scraped and fixed with 4% PFA at room temperature for 15 min; fixed cells were washed in flow staining buffer (PBS, 2 mM EDTA, 2% BSA, and 0.1% sodium azide), blocked with FcBlock (10 μg/ml; BD), stained with anti-F4/80 (BM8) (0.5 μg/ml; eBioscience) for 45 min at room temperature, and analyzed by flow cytometry. In other wells, 4% PFA was added after anti-F4/80 staining, and coverslips were removed from the wells and mounted on glass slides in Prolong Diamond Antifade Mountant (Thermo Fisher Scientific) before visualization on a Zeiss LSM700 confocal microscope.

In vitro cytokine production

Single-cell liver suspensions and parenchymal preclearance were prepared as described above. Kupffer cells were isolated at the interphase of a 25%/50% Percoll gradient following density centrifugation at 1,350g for 30 min. Cells were washed and plated at a density of 10^6 viable cells per well in 24-well plates. Plates were washed 4 h after plating to remove nonadherent cells. A total of 5 × 10^6 WT platelets or dPLTs, and LPS O111:B4 (1 μg/ml; Sigma-Aldrich), were added to the wells and incubated for the indicated time points. GolgiStop (at 1,000× dilution) was added 6 h before harvest. Cells were harvested by gentle scraping and stained with the fixable viability stain Zombie Aqua (BioLegend), followed by fixation with 4% PFA and blocking with FcBlock (10 μg/ml; BD) prior to staining for flow cytometry. Cells were permeabilized for intracellular staining with 1× Perm/Wash buffer (BioLegend) according to the manufacturer's recommendations.
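The phagocytosis readout in the assay above boils down to sequential gating of flow cytometry events: exclude dead cells, gate F4/80+ macrophages, then report the fraction carrying platelet dye. A minimal sketch; the field names, gate cutoffs and synthetic event values are hypothetical stand-ins for real compensated data:

import numpy as np

rng = np.random.default_rng(2)
n = 10_000
events = {
    "viability": rng.random(n),     # Zombie dye signal; low = live
    "f480": rng.random(n),          # macrophage marker intensity
    "platelet_dye": rng.random(n),  # CellTracker/CMFDA signal from engulfed platelets
}

live = events["viability"] < 0.8               # hypothetical live-cell gate
macrophage = live & (events["f480"] > 0.6)     # F4/80+ gate within live cells
dye_pos = macrophage & (events["platelet_dye"] > 0.9)

pct_phagocytic = 100.0 * dye_pos.sum() / max(macrophage.sum(), 1)
print(f"{pct_phagocytic:.1f}% of live F4/80+ cells are platelet-dye positive")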
T reg suppression assay

Spleens were harvested from GPIbα−/− mice transfused 2× with either 10^8 WT or desialylated BALB/c platelets. Single-cell suspensions were prepared as described above, and CD4+CD25+ T regs and CD4+CD25− conventional T cells were cell sorted to >95% purity on a FACSAria III. Conventional T cells were stained with 5 μM carboxyfluorescein diacetate succinimidyl ester and resuspended in complete RPMI 1640. A total of 2.5 × 10^5 cells per well were plated in 96-well plates. T regs were added in 2× serial dilutions starting at 1.25 × 10^5 cells. Dynabeads (Thermo Fisher Scientific) were also added at a 1:1 ratio to stimulate T cell proliferation. After 5 days, cells were harvested and prepared for flow cytometry as described above.

Hemophilia A mouse model

FVIII-deficient (FVIII null) mice were generated via targeted disruption of exon 17 of the F8 gene, with undetectable FVIII in the plasma [65], and the colony was maintained in our facility. 2bF8 mice (2bF8 Tg) are transgenic for platelet-specific expression of human B-domain-deleted FVIII, generated by lentiviral transgenesis [60,95]. Both FVIII null and 2bF8 Tg mice were on a mixed C57BL/6 and 129S genetic background. All mice were maintained in pathogen-free microisolator cages at the animal facilities operated by the Medical College of Wisconsin. Animal studies were performed according to protocols approved by the Institutional Animal Care and Use Committee of the Medical College of Wisconsin. Isoflurane or ketamine was used for anesthesia.

Platelets were isolated from 2bF8 Tg mice as previously described [60,61] and treated with α2-3,6,8,9 NEU (10 mU/ml) in modified Tyrode buffer for 5 to 6 h. Platelets were then washed and resuspended in Tyrode buffer at a concentration of (1.5 to 3.0) × 10^9 platelets/ml. To determine the percentage of platelets that were desialylated, 1 × 10^6 NEU-treated platelets were stained with fluorescein Ricinus Communis Agglutinin I (Vector Laboratories Inc., Burlingame, CA) and analyzed by flow cytometry. dPLTs in Tyrode buffer (10 μl/g of body weight) were infused into FVIII null mice via retro-orbital venous administration weekly for 4 weeks. One week after the last dPLT infusion, blood samples were collected, and plasma was isolated for the Bethesda assay, as we previously reported [62], to determine anti-FVIII inhibitory antibody titers (referred to as inhibitors). Animals were further immunized with recombinant human B-domain-deleted FVIII (rhFVIII, Xyntha; Pfizer Inc., New York, NY) at a dose of 50 U/kg per week for 4 weeks via retro-orbital venous administration. One week after the last rhFVIII immunization, blood samples were collected, and inhibitor titers were determined.

Statistical analysis

Unless noted otherwise, a 2-tailed, unpaired t test was used to assess statistical significance. Statistical calculations were performed in GraphPad Prism 7. The number of replicates and a description of the statistical method are provided in the corresponding figure legends. Differences with P values of less than 0.05 were considered statistically significant. *P < 0.05, **P < 0.01, and ***P < 0.001; ns indicates not significant.

Author contributions

D.K. performed some of the experiments and edited the manuscript. G.Z. provided the mouse anti-αIIb monoclonal antibody. Y.H.Y., S.A.M., H.Z., J.W.S., and J.F. contributed to study design, data interpretation, and manuscript preparation. As principal investigators, H.N. and Q.S.
oversaw study design, data interpretation, manuscript preparation, and funding support. Competing interests: The authors declare that they have no competing interests, but a patent application is in preparation.

Fig. 2. dPLTs target the gut vasculature at early time points and are exclusively cleared in the liver. (A and B) CellTracker-labeled WT platelets or dPLTs were intravenously transfused into (A) WT or (B) splenectomized mice. At 15 min, 30 min, 2 h, and 16 h after injection, the remaining labeled platelets in circulation were assessed by flow cytometry and calculated as a percentage of baseline (percent in circulation at 1 min after injection). n = 10 per group. (C) Representative immunofluorescence images of ICG-labeled platelets. Arrows point to labeled platelets. (D) Representative images of MSOT scans of ICG-labeled platelets and unlabeled platelets in agar phantoms. Tracings represent mean optoacoustic intensities across the length of the agar phantom at the nonspecific 900-nm and ICG-specific 810-nm excitation wavelengths. (E) Representative 3D reconstructed images of MSOT scans over 40 min of mice transfused with ICG-labeled dPLTs and platelets. Tracings represent mean optoacoustic intensities across 60 min for 2 regions in mice. Blue line, the liver and gut region; red line, spleen. (F) Representative dot plots and quantification of CellTracker+ CD41+ platelets in the indicated organs at early (<2 h) and late (>12 h) time points. Data represented as means ± SD. *P < 0.05, **P < 0.01, and ***P < 0.001. (G) Representative immunofluorescence of CellTracker+ platelets in various organs at 2 h after intravenous transfusion. The bar graph shows quantification of the % area of the whole tissue section positive for CellTracker+ platelets, as assessed by HALO software. White, CellTracker+ platelets; green, F4/80+ macrophages; blue, DAPI. Arrows point to localized CellTracker+ platelets.

Fig. 6. dPLT increases circulating and splenic functional CD4+ T regs. (A) Representative dot plot gated on CD4+ cells showing increased CD25+FOXP3+ T regs in circulation on day 3 following intravenous transfusion with dPLTs. (A and B) Measurement of PBMC CD4+ T regs on the indicated days following (A) intravenous transfusion of 2 × 10^8 dPLTs or platelets or (B) injection of 50 mU of NEU or anti-αIIb antibody (0.02 μg/g). n = 6 per group. Statistical analysis was done with one-way ANOVA with Tukey post hoc test. (C) Graph and representative histogram of decreased CD4+ proliferation in vitro in the presence of increasing dilutions of splenic CD4+ T regs isolated from desialylated or WT platelet-transfused GPIbα−/− mice. (D) Anti-GPIbα titers from 3× WT-immunized GPIbα−/− mice transfused with CD4+CD25+ splenic T regs sorted from either dPLT- or WT platelet-treated mice. (E) Measurement of PBMC CD4+ T regs on the indicated days following macrophage depletion by clodronate liposomes. Data represented as percentage change from day 0 (immediately prior to clodronate injection). n = 4. *P < 0.05 and ***P < 0.001.
Fig. 7. dPLT infusion attenuates antibody generation in clinically relevant transfusion models. (A) Anti-sRBC antibody titers in WT BALB/c mice preinjected 3 days prior with 10^8 WT human dPLTs. Bar graphs show antibody titers quantified as AUC. n = 2 human donors and 3 mice per group. (B) Schematic diagram of the timeline of dPLT transfusion and FVIII immunization in naïve FVIII null mice. (C) Flow cytometry analysis of dPLTs. Platelets without NEU treatment were used as a negative control. (D) Inhibitor titers of FVIII null mice infused with desialylated 2bF8 Tg platelets followed by rhFVIII immunization (50 U/kg per week, ×4). FVIII null mice without 2bF8 Tg dPLT infusion were used as a control in parallel. Statistically significant differences between group means were determined by one-way ANOVA followed by Tukey's multiple comparisons test. (E) Comparison of the number of animals that developed high inhibitory titers (>100 BU/ml) following rhFVIII immunization. Statistical significance was determined by Pearson's test. **P < 0.01 and ****P < 0.0001. "n.s." indicates no statistically significant difference between the 2 groups.
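For reference, the percentage-of-baseline normalization used for the circulation kinetics in Fig. 2 (A and B) is a one-line calculation per time point; the sketch below uses invented event counts purely for illustration:

timepoints_min = [1, 15, 30, 120, 960]       # 1 min ... 16 h bleed schedule
dye_pos_counts = [5000, 1500, 800, 300, 50]  # hypothetical dye+ platelet events

baseline = dye_pos_counts[0]                 # 1-min post-injection value
percent_in_circulation = [100.0 * c / baseline for c in dye_pos_counts]
for t, pct in zip(timepoints_min, percent_in_circulation):
    print(f"t = {t:>4} min: {pct:5.1f}% of baseline still circulating")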
Unravelling the New Plebiscitary Democracy: Towards a Research Agenda

Abstract

Pushed by technological, cultural and related political drivers, a 'new plebiscitary democracy' is emerging which challenges established electoral democracy as well as variants of deliberative democracy. The new plebiscitary democracy reinvents and radicalizes longer-existing methods (initiative, referendum, recall, primary, petition, poll) with new tools and applications (mostly digital). It comes with a comparatively thin conceptualization of democracy, invoking the bare notion of a demos whose aggregated will is to steer actors and issues in public governance in a straight majoritarian way. In addition to unravelling the reinvented logic of plebiscitary democracy in conceptual terms, this article fleshes out an empirically informed matrix of emerging formats, distinguishing between votations that are 'political-leader' and 'public-issue' oriented on the one hand, and 'inside-out' and 'outside-in' initiated on the other hand. Relatedly, it proposes an agenda for systematic research into the various guises, drivers and implications of the new plebiscitary democracy. Finally, it reflects on possible objections to the argumentation.

Vox populi redux

In May 2018, the Spanish left-wing political party Podemos organized a digital party referendum, as they called it, on its leadership. What had happened? Pablo Iglesias, the party's outspoken leader, and his life partner Irene Montero, the party's parliamentary spokeswoman, had purchased a relatively luxurious €600,000 home with a swimming pool and a guest house. According to many within and outside the party this was a hypocritical act, running counter to earlier public statements about perverse mechanisms in the housing market. To re-establish its credibility, the leadership supported an unplanned vote of confidence, organized via the party's website, saying, 'if they say we have to resign, then we will resign' (Marcos 2018). Although the words 'party referendum' were being used, the procedure could just as well be likened to a (party) recall: a voting procedure to accept or decline a leader already in the saddle. The couple ultimately survived the vote, on 28 May 2018, after winning 68.4% of nearly 190,000 votes cast.

In March 2016, the Natural Environment Research Council (NERC) in the UK organized an online poll, as they called it, to involve the wider public in the naming of a new research vessel: publicly funded, then why not publicly named, the reasoning went. The NERC suggested some names (Endeavour, Falcon, Henry Worsley, Sir David Attenborough) on which people could cast an online vote. The public was asked to suggest additional names, which could then also compete for support. The hashtag on social media became #NameOurShip. More than 3,000 additional names were suggested. Former radio presenter James Hand jokingly suggested 'Boaty McBoatface', which became an instant hit. This name ultimately won the online vote, with more than 124,000 declarations of support (four times more than the second-placed 'Poppy-Mai'). The public's favourite, however, did not become the name of the ship, but only of one of the submersibles aboard. Jo Johnson, then minister for universities and science, decided to go along with the more traditional name RSS Sir David Attenborough (BBC News 2016). The formal line was that the online poll, although open to public input, was never meant to be a binding referendum.
These are just two illustrations (common practices rather than best practices) that take public voting on political actors and public governance issues well beyond the realm of traditional voting for politicians and their programmes. This is emblematic of the new 21st-century plebiscitary democracy that is reinventing long-existing methods (initiative, referendum, recall, primary, petition, poll) with new tools and applications (mostly and prominently online, occasionally also offline). The new plebiscitary democracy is a sprawling phenomenon that needs more encompassing scholarly research, as it comes in a great range of guises, with various possibilities and problems, and many questions still to answer. Hence, this article sets out to formulate a research agenda (necessarily open ended) based on an explorative review of new plebiscitary formats, developing on a substratum of older plebiscitary formats, which have in common:

• a focus on the swift aggregation of individually expressed choices (including electronic clicks, checks, likes and other signs of support) into a collective signal believed to be the voice of the demos or the vox populi, which tends to be revered ('vox populi, vox dei');

• a concentration of such citizen-inputted, aggregative processes on political actors and issues in public governance, tending to result in binary public verdicts ('yes/no', 'for/against');

• a belief in direct voting of a highly competitive and majoritarian sort ('you vote, you decide'), centralizing mass and quantity, a bigger-the-better logic, a 'democracy of numbers' (cf. Lepore 2018).

The new plebiscitary democracy comes with a comparatively thin conceptualization of 'democracy', invoking the bare notion of a demos whose aggregated, amassed will is to steer actors and issues in public governance in a straight majoritarian way (cf. Della Porta 2013; Hendriks 2010; Powell 2000; Lijphart 1999). 1 Unlike deliberative democracy (and more like, for instance, 'stealth democracy'; Hibbing and Theiss-Morse 2002), there is no sophisticated normative political theory in place from which plebiscitary practices are deduced and legitimated. Democratic claims develop in and around new plebiscitary practice. Such claims cannot be taken for granted, but neither can they be dismissed a priori. How and to what extent democratic claims are actually realized is to be determined by the type of research outlined in this article.

The adjective used in 'plebiscitary' democracy (the new incarnation as well as the older one) refers to the more or less democratic use of plebiscites or 'votations'. The latter is an umbrella term for various ways of taking votes beyond merely the ballot box of general elections. 2 Votations or plebiscites can be either bottom-up or top-down, issue-oriented or elite-oriented (more on this in the next section). The leader-dominated variant is one of the possibilities of plebiscitary democracy (cf. Green 2010: 5; Qvortrup et al. 2018), not its one and only option. In present-day, 21st-century democracy, the request 'May we have your votes now?' entails more than it used to. Not only has the 'we' taking and aggregating votes been enlarged, but so have the 'votes' that are being taken and aggregated. Citizens may still cast their votes in ballot boxes on election day, as citizens have done for decades. But nowadays they can also see them aggregated as digital signatures, checks, likes and various sorts of electronic declarations of support in the periods between election days.
The actors initiating such votations can be other citizens, non-political and non-governmental actors, but they can also be political or governmental actors with an institutionalized stake in the political system à la David Easton (1965). The votations may be directed at political leaders and authorities that operate within the political system or at issues or topics in the public domain. They are not confined to formal democratic decision-making. While conceptual stretching of the concept of voting is not uncommon ('voting with your feet', popularizing some areas more than others; 'voting with your purse', supporting some brands more than others), the exploration here is primarily focused on practices that can be viewed as variants of 'voting with your hands' on public and political issues. This means that the focus is on contemporary, often device-clicking (Halupka 2014; Hill 2013; Jeffares 2014), extensions of the longer-existing hand-raising, box-ticking and button-pressing activities of individuals that amount to a collective signal with regard to political leaders or issues of public governance. (Hence, I consider the naming of a publicly funded 'flagship' to fall within the boundaries of the exploration and, for example, the digital vote for 'best book of the year' not.) Such a constrained stretch of the public-vote concept is both justifiable and urgent. New voting formats are spreading, changing democratic discourse and relations in ways not yet well understood. We should differentiate the new plebiscitary democracy, which assumes human agency, including democratic action and discourse, from the strictly 'instrumentarian' surveillance systems that Shoshana Zuboff (2019: 20) describes as deeply 'antidemocratic', working towards a data-driven, behaviourist society model in which 'the algorithms know best' and in which political action is to be avoided (Zuboff 2019: 433). Yuval Noah Harari (2017: 428-462) uses the term 'dataism' to denote the belief that refined algorithms can render democratic action and discourse obsolete in the not too distant future. The rival idea, 'techno-activism', assumes that technology extends human agency and collective action. 3 The formats of the new plebiscitary democracy that are explored here largely follow a techno-activist approach to democracy, albeit with a particular, plebiscitary leaning. In exploring these formats, I do not negate the scope for technical applications with nondemocratic and apolitical implications, in dire need of investigation too; the examination of these, however, falls outside the scope of this article.

New plebiscitary, deliberative and established electoral democracy

As this article focuses on the sprawling phenomenon of 21st-century plebiscitary democracy, it will not delve deeply into the peculiarities of established electoral democracy (the rectangular box in Figure 1). The emerging 21st-century plebiscitary democracy (the circle on the left) is approached here as a set of additions to established electoral democracy, just like the deliberative turn at the end of the 20th century produced a set of additions (the circle on the right; cf. Warren and Lang 2012). Deliberative-democracy formats include random mini-publics, juries, citizens' assemblies, consensus conferences, planning cells and the like, and are geared at thoughtful, reflective and transformative processes of public opinion formation (cf. Bächtiger et al. 2010; Dryzek 2000; Gastil and Levine 2005).
Such formats have thus far received more encompassing attention in the democratic-innovations literature than the sprawling and nascent formats of new plebiscitary democracy. The latter are comparatively under-conceptualized, notwithstanding the existence of important alerts to related developments (cf. Cain et al. 2003; Green 2010; Keane 2009; Rosanvallon 2008; Rowe and Frewer 2005). The subfield of new 'digital-age' democracy is covered by many studies, but these are to a large extent focused on versions with a deliberative or collaborative setup: digital town meetings, online discussion forums, Wiki-style law-making, hackathons, collaborative coding and similar formats for interactive co-creation (Mulgan 2018; Noveck 2009, 2015). The more voting-oriented, plebiscitary versions have attracted some attention (e.g. Susskind 2018: 239-243), 4 but in terms of systematic theorizing and comparative analysis much ground is still not covered. Therefore, the central objective of this article is to take a next step in exploring and mapping the diversity of 21st-century democracy, and to formulate a tentative research agenda with regard to it.

The world of general elections for parties and candidates is extensively documented (e.g. Diamond and Plattner 2006; Lijphart 1999; Sartori 2016 [1976]). For our present purposes, the conceptual distinction between plurality/majority systems and systems of proportional representation is most useful and relevant. Although winner-takes-all systems of plurality/majority voting at face value seem fertile breeding grounds for new plebiscitary practices, we do not yet know whether the emerging formats of plebiscitary democracy are taking root any less in electoral systems of proportional representation. This is actually one of the questions that requires more systematic research, the basic lines of which will be drawn in the concluding section. There the relationship between deliberative and plebiscitary additions to representative democracy will also be interrogated.

The new plebiscitary democracy: emerging formats

In this section, I develop a tentative typology of the new plebiscitary democracy, distinguishing between four types of emerging plebiscitary formats. The new formats regenerate longer-existing methods (initiative, referendum, recall, primary, petition, poll) with new tools and applications. The Podemos and NERC votations mentioned as opening examples illustrate, in a very specific way, two of these general types: Type-I votations that work 'inside-out' (pushed by parties or institutions that make up the political system) and are 'leader-focused' (the Podemos example); and Type-II votations that also work 'inside-out' but are 'issue-focused' (the NERC example). 5 There are also new votations emerging that work from the outside in, pushed by actors or groups beyond the set of parties and institutions that are commonly understood as the political system. Ideal-typically, they may focus their vote-collecting activities on political elites and leaders (Type-III votations) or on particular public issues (Type-IV votations). The resulting matrix of ideal-typical options is depicted in Table 1. In democratic practice, we may see combinations or clusters of such ideal types developing, but to understand these properly we first need to see the underlying mechanisms and diversity of formats. This is what the next two subsections will focus on.
Table 1. The matrix of ideal-typical votations (recoverable fragment, outside-in types):
Type III (outside-in, elite-focused). Emerging formats: elite-monitoring social media/data analytics (e.g. watchdog-exposed sentiment ratings of elites). Underlying formats: voter-imposed recalls, elite-focused petitions and polls (pre-internet).
Type IV (outside-in, issue-focused). Emerging formats: divisive content pushing/hot-topic trolling (e.g. foreign actors campaigning for anti-Islam clicks in the US). Underlying formats: voter-imposed initiatives, bottom-up referendums, issue-oriented petitions and polls (pre-internet).

Emerging formats: inside-out

Type-I and Type-II votations share a top-down or, more precisely, an inside-out logic of mobilizing choice signals and interpreting them as an aggregated public choice. They reinvent, with new formats, longer-existing mechanisms like party primaries, party recalls and top-down (i.e. government-initiated) referendums and (pre-internet) opinion polling steered by political actors and public authorities (cf. Altman 2017; Cain et al. 2003; Hollander 2019). When these initiatives involve the aggregation of electronic clicks, the term 'clicksultation' is used as a contraction of 'clicks' and 'consultation'. 6 Clicksultation operates top-down, or more precisely inside-out, and should be distinguished from 'clicktivism' (Halupka 2014; Lindgren 2015), which combines 'clicks' with 'activism', operating bottom-up or rather outside-in. 7 At this point we should recall that we are concentrating here on formats that numerically aggregate individual signals into a collective signal or vox populi, not just any form of online engagement. Two people sharing a political post may be an expression of online engagement but not so much an expression of 21st-century plebiscitary democracy as, say, 2 million liking such a post (cf. Jeffares 2014).

Type I: inside-out and elite-focused

Besides Podemos, various other political parties have also taken new steps into the realm of Type-I votations. The European Green Party, for instance, organized an 'open online primary', as they called it, to select top candidates (Spitzenkandidaten) for the European parliamentary elections of 2014. Such a digital primary has an elite-forging logic to it, but leadership-challenging digital consultations can also be organized quite easily, as the earlier Podemos example testifies. As we saw, this worked practically as a party-initiated recall, organized through the Podemos website, even though it was called a 'party referendum' in line with more publicly resonant language. Party- and government-initiated polls to legitimate and serve political and executive leadership have become easier and less expensive to organize on a frequent basis with present-day technology. Under pre-internet circumstances, specialized organizations were often hired to design and conduct large-scale public polls, while nowadays virtually all of this can be done in-house. That the quality of public polling often falls back to straw-polling practices, straying from the scientific approach to representative sampling and proper authentication, is often taken for granted in these practices (Bishop 2005). I include these practices here to the extent that their results are expressed as representative claims in democratic discourse (cf. Saward 2010), which in this subcategory concerns the selection or deselection of political leadership. Party- or government-initiated popularity polls that are used only internally to monitor the approval rates of politicians are excluded from this overview of the new plebiscitary democracy.
A next step on this avenue (inside-out, elite-focused) is the deployment of social media and big data analytics to reveal which politician, party or authority is developing positive or negative sentiment among the public. The promise of data analytics is that vital information, also when it comes to political preferences, can be distilled from social media choices already collected in various places. Only when the aggregated choices are publicly revealed and made part of democratic discourse, which is not very often thus far, do the underlying practices fit the previous definition of the new plebiscitary democracy. 8 Until now, the results of social media and big data analytics in the political realm have tended to stay within campaign teams, which use the information for covert political micro-targeting: knowing whom to focus on with variable political messages from a political party or candidate in order to get better results on election day (Zuiderveen Borgesius et al. 2018).

Type II: inside-out and issue-focused

The Type-II illustration that we started with was the NERC-initiated digital consultation of the general public that resulted in 'Boaty McBoatface' being pushed forward as the name for the publicly financed flagship in question. Here, the aggregated voice of the people, backed by 124,000 declarations of support, was ultimately not followed by the public authorities that had sought it. The new plebiscitary democracy is no different from older and other expressions of democracy, in which the public voice is also not always or automatically followed. Yet there are various instances where clicksultation did lead to government action. For instance, like so many other cities, Rotterdam experimented with a so-called design competition for city-enhancing ideas. Social entrepreneurs could propose ideas, the general public could express their support digitally, and the winning idea would be implemented. 9 Practices like these mobilize support digitally, through clicks of various sorts, and are often set up in a competitive fashion: ideas competing with each other for support in a win-or-lose format. In popular television language of the day: 'you vote, you decide'. 10 If two options are specifically compared (yes or no to an idea or proposal, to a plan A or a plan B), the language of referendums is never far away, even when a referendum is formally speaking not on the roll. When Australia, between 12 September and 7 November 2017, organized a non-formal citizen survey on same-sex marriage, the Economist (2017: 49) described it plainly as 'a plebiscite by another name'. 11 The Australian coalition government of the day had pledged to allow a private member's bill and a conscience vote in parliament on same-sex marriage if the informal plebiscite returned a majority 'yes', which it did (with 61.6%). This opened the way for parliamentary debate and ultimately an approved Marriage Amendment. The Australian informal plebiscite was a special exhibit of present-day plebiscitary democracy using non-digital infrastructure: technically, it was a non-formal postal survey. 12 When in 2018 the European Commission organized a citizen survey on the issue of daylight saving, however, it complied again with the default of the new plebiscitary democracy and organized it as an online survey.
Digital consultations, such as the recurring internet votes on specific issues triggered by the populist Five Star Movement in Italy, are supposed to establish a direct connection between politicians and voters on an issue-by-issue basis. 13 Here, plebiscitary votations are closely associated with a populist vision of direct democracy, in opposition to the established elites and institutions of representative democracy (Franzosi et al. 2015). Looking into the Five Star Movement as well as Podemos, Paolo Gerbaudo (2019: 2) detects a dominant top-down and quantitative 'plebiscitarian' logic in their digital voting practices, overshadowing the bottom-up, qualitative, more or less deliberative digital innovations that have also been attempted. A next step in this category is the strategic mobilization of online and social media 'rallies' on hot topics, organized by political actors interested in showing mass traction on such topics. An illustration is the framing of the 'migrant caravan' by the Trump presidency in 2018 (Ahmed et al. 2018; Dreyfuss 2018). Social media traction was used as vindication of presidential policy: your worries steer my policy on this issue. Consent for (re)using such digital 'votes' is simply taken for granted, and proper authentication (is this really 'one person, one vote'?) is not guaranteed.

The issue of voting inflation

The previous illustration prompts an issue that affects all four categories of votations. The idea of democracy assumes a demos consisting of free (non-coerced) and equal citizens ('one person, one vote'). Theoretically, it cannot consist of (ro)bots steered to push numbers of electronic votes (likes, retweets, and so on) or fake accounts suggesting individual citizens. The problem, however, is that it is not always clear when this is happening, which may result in artificially inflated claims dressed up as the public voice (cf. Tanasoca 2019). Proper mechanisms for authentication are needed but not always present. Experiment first and improve later is quite typical of how the new plebiscitary democracy is being designed. Another issue, also related to voting inflation, is the mobilization of clickbait to give traction to 'leading' politicians and 'trending' topics in pumped-up numbers. New tech is clearly interfering here, although voting cascades and crazes are not new to democratic life. 14

Emerging formats: outside-in

Type-III and Type-IV votations share a bottom-up or, more precisely, an outside-in logic of collecting and aggregating choice signals via regenerated plebiscitary formats. This means that the initiative lies predominantly with private and societal actors that approach the political system and its dealings from an external vantage point, attempting to force their messages into the political system, and onto it (as opposed to being consulted by system actors, which is the realm of Type-I and Type-II). The organizers of Type-III and Type-IV votations emulate, in new ways, existing formats like the voter-imposed recall, the voter-imposed initiative, the bottom-up referendum, the signature-based petition, and, again, the opinion poll (here the bottom-up version of it, commissioned by actors external to the political system).

Type III: outside-in and elite-focused

In 1824, the Harrisburg Pennsylvanian organized one of the first political polls, asking a convenience sample about their preferred candidate in the Jackson-Adams presidential race.
It was a typical straw poll based on a non-random sample, which in new guises can be found as 'vox polls' on the websites of numerous media and other public organizations nowadays (Bishop 2005; Holtz-Bacha and Strömbäck 2012). Such instances of digital polling, using electronic convenience samples to gauge the vox populi quickly, have become virtually countless since the massive uptake of broadband internet in the early 2000s. If digital readers and website visitors are asked to rate political leaders, parties or authorities, we have an instance of a Type-III vox poll. (If they are asked to say 'yes' or 'no' to particular issues, we get the attributes of the Type-IV version that will be discussed later.) A case of outside-in clicktivism (bottom-up activism using digital clicks) to challenge political leadership was played out in the Dutch city of Zutphen. In 2015, the politically selected candidate for the office of mayor (ad interim) in this town was attacked by an internet poll organized by a regional newspaper and by an e-petition organized by a worried citizen. Both were highly negative about the candidate: 95% of the participants in the internet poll agreed with the statement that this candidate 'should stay away'; the e-petition against this candidate immediately received 2,326 signatures. Even though neither had any formal status within the nomination procedure, they effectively forced the withdrawal of the candidate, who before these bouts of clicktivism had been very close to nomination. 15 An example of political leader-supporting clicktivism pushed by non-system actors was the hashtag action #ImWithHer, a digital campaign that was meant to show massive support for Hillary Clinton as candidate for the US presidential elections of 2016. The hashtag was actively pushed by celebrities such as Jennifer Lopez, Alicia Keys, Rihanna and others (Leow 2016). #ImWithHer was not invented or hosted by the relatively centralized official Clinton campaign, which nevertheless jumped on the bandwagon quite happily, albeit not with the desired result. The 2016 Trump campaign organization showed a different, more decentralized, way of combining its own activities with external clicktivism. The combined effect in terms of aggregated supportive social media traction was significantly larger for Trump as a candidate and ultimately president-elect (van Loon 2016; Pettigrew 2016). Distilling from social media choices how people react to political leaders (who's trending, who's not?) is an important playing field for (new) media and (big) data and knowledge centres. When such elite-monitoring analyses are pushed outside-in by knowledge centres or media on their own initiative, or commissioned by civil society organizations, they can be viewed as expressions of Type-III formats. The precondition for acknowledging these as formats of new plebiscitary democracy is, again, that the aggregated public voice must be publicly revealed and made part of democratic discourse. The difference with the elite-rating internet polls described previously is that in such polls people are explicitly asked to evaluate politicians, whereas in social media analytics evaluative questions are asked after data collection, which means that consent to use clicks for evaluative purposes is assumed rather than explicitly given (Craglia and Shanley 2015). The common denominator here is the aggregative construction of a public verdict based on individually expressed evaluations, combined with and driven by an interest in mass and quantity.
The more positive digital traffic there is, the more support a politician is supposed to have.

Type IV: outside-in and issue-focused

Type-IV votations share this interest in mass and quantity, working from the outside in, but are primarily focused on support for public issues. On top of the longer-existing offline version of the petition (basically an aggregated declaration of support), the phenomenon of the e-petition has spread widely. Some portals for e-petitions are privately hosted, some are publicly hosted, but as a rule e-petitions are an outside-in phenomenon. For instance, the UK government may host www.petition.parliament.uk but it does not initiate the e-petitions that appear on this site, nor does it canvass support for them. When an e-petition receives more than 10,000 signatures the UK government promises to respond to the public request voiced in it, and above 100,000 signatures a debate in parliament is considered. 16 Shortly after the Brexit referendum, more than 4.15 million people supported an e-petition posted on this website calling for a second EU referendum. The government rejected the 'representative claim' of the initiators, arguing that the original referendum had produced a clear and legitimate majority, which did not silence the popular call for a second referendum. 17 Beyond purpose-built e-petition websites, various other electronic platforms serve to collect and amass electronic signs of public (dis)approval. First, these may be websites built for other purposes besides signature collection that, however, also facilitate the count of likes, checks, thumbs-up or equivalent signs of support. Illustrations include the websites Decide-Madrid and Frankfurt-gestalten, which among other things track the support for different urban initiatives in quantitative terms. 18 Second, these may be social media platforms such as Facebook, WhatsApp, Twitter and Instagram, which, in addition to many other things, facilitate the bottom-up aggregation of support for issues. This works to a large extent numerically and competitively (Nagle 2017; Sunstein 2017). How many Facebook likes did some claim by an ideational group get? How many (re)tweets were voiced and counted on Twitter in support of some political message? How many digital photos were shared under a particular hashtag? Famous hashtag actions for an issue are #JeSuisCharlie and #Blacklivesmatter. 19 As usual in the new plebiscitary democracy, 'size matters': the more declarations of public support aggregated, the stronger the initiators' political claim (in this category, regarding an issue of public concern) is assumed to be. 20 In 2010, the automobile club of the Netherlands, the ANWB, asked their numerous members to respond to a poll on their website related to government plans to introduce a version of road pricing. It worked as an unofficial bottom-up referendum, also because it was presented as such by the car-friendly national newspaper De Telegraaf. For several days in a row it ran headlines like 'For or against?', 'Numbers go through the roof', 'Crushing no, more than 89% against road pricing'. The government withdrew its plans, with reference to the 'apparent' opposition in society.
A next step in this category (onto a slippery slope, according to many, but it cannot be left out of a candid group portrait of new plebiscitary practices) is the mobilization of clicks on hot topics by political outsiders (sometimes foreign) with the intention of aggregating and amplifying political opinions that are favourable, materially or immaterially, to these actors. An example is a group working from former Yugoslavia, trying to get as many lucrative clicks as possible from American Trump supporters, feeding them with anti-Islam content such as: 'MOB of angry muslims ravage through US neighborhood threatening to rape women'. 21 While getting their clicks, and perhaps kicks, such disrupters create public sentiments, revealed in numbers, around public issues. Again: not an entirely new challenge to democracy, but technically facilitated in ways hitherto unseen.

Central points and caveats

The claim here, it needs to be emphasized, is not that new plebiscitary formats are successful all round. The point is that new formats of plebiscitary democracy are widely emerging and, as an interrelated complex of practices, changing democratic discourse and relations in many significant ways that are as yet under-researched and under-conceptualized. Hence, the call to develop a systematic research agenda and the attempt to understand emerging formats as interrelated empirical phenomena. The four types previously outlined can help to map the variety of forms as well as the evolving hybrids involving present-day plebiscitary democracy (see Table 1). Table 1 maps new territory in four general directions, sketching the currently most relevant variety without pretending to be complete or exhaustive; any pretence of completeness would indicate a grave misunderstanding of the situation. Plebiscitary democracy is a sprawling phenomenon that is still very much in development. The contemporary formats mentioned are in different stages of institutionalization. The e-petition and the internet poll, for instance, are further institutionalized than the digital primary and the electronic design contest. Some formats, like the Rotterdam City Initiative contest, were discontinued after a few years of practice and exchanged for other experiments. Compared to the new plebiscitary formats, the older underlying formats (initiative, referendum, recall, primary, petition, pre-internet public poll) are clearly further hardened and codified in 'textbook varieties' (Altman 2011, 2017; Cronin 1989). The developing formats of the new plebiscitary democracy are not yet in that stage of institutionalization and codification. The emergent, varied and sprawling nature of the phenomenon will complicate, but should not stop, its exploration and documentation. It is clear that the developing plebiscitary democracy comes in many shapes and forms. Yet, under the many expressions common traits can be detected, most prominently the centralization of individual choice signals, which in one way or another are aggregated into a collective vote or public voice. The related message, always implicit, sometimes explicit, is that everyone can rate things and people in the public realm (Keen 2007). New plebiscitary mechanisms are often electronically enhanced, which makes this a prominent instrumental feature. 22 More central to its character, however, is the fact that the new votations tend to result in binary public verdicts (for/against, yes/no) wherein a bigger-the-better logic prevails.
Claims with many clicks, likes and checks behind them are assumed to be the more legitimate claims, able also to compete with the representative claims of political parties and ideational groups. Numbers of followers make the difference in a democratic ethos that is fiercely majoritarian and competitive. Jill Lepore (2018) argues that a 'democracy of numbers', as she calls it, is deeply American, but as we have seen a new democracy of numbers is coming to the fore in other places as well. 23 New plebiscitary practices tend to expand or radicalize longer-existing formats with new means. The centuries-old petition, for instance, is being echoed and blown up in numerous present-day expressions of clicktivism. In addition, new ways of voting are often likened to or presented as a 'referendum', even when technically speaking a party recall (see the Podemos example) or an internet survey (see the ANWB example) would be a more accurate frame of reference. The use of referendum language far beyond its formal niche is a remarkable by-product of the new plebiscitary democracy. When the US Congressional elections of 2018 are dubbed a 'referendum' on the Trump presidency, or when the European Parliament elections of 2019 are framed as a 'referendum' on the borderless Europe of the elites versus the Europe of the people, we see plebiscitary discourse hooking onto the realm of electoral democracy. As Figure 1 suggests, the new plebiscitary democracy develops to some extent connected with, and to some extent detached from, established electoral democracy. Clicktivism of the #JeSuisCharlie type, for instance, has a clear political message and meaning but does not primarily appeal to representative politics. To a considerable extent, however, plebiscitary democracy is also entangled with the realm of established electoral democracy. 24 Take the digital votations that are used to (de)select political leaders, or to select a winning 'city initiative' to be funded by a municipality. Or look at some of the other earlier exhibits: the Australian postal survey that opened the door to a private member's bill on same-sex marriage, and the informal 'referendum' organized by the Dutch automobile club that directly affected governmental decision-making. What we have here is more refined than a simple zero-sum game: what plebiscitary democracy wins is not necessarily lost by electoral democracy, or vice versa. To better understand the patterns of interaction, we need to delve deeper into them. In the next section, I will demarcate the interactions to investigate more deeply. There, I also discuss the relationship between present-day plebiscitary and deliberative democracy: in essence, competing views on how democracy should be extended. The aggregative, majoritarian and competitive spirit that inspires plebiscitary democracy runs counter in many ways to the integrative, consensual and transformative spirit that infuses deliberative democracy (Gerbaudo 2019: 3; Hendriks 2019: 453).

Towards a research agenda

The argument advanced in this article is that we need to understand the new plebiscitary democracy better than we presently do: not only its inherent dynamics but also its relation to the established systems of electoral democracy and the prominent alternative of deliberative democracy. For this purpose, Figure 1 is transformed into an analytical scheme, with elements and relationships to be prioritized in research, marked A, B and C in Figure 2.
I readily admit that concentrating on empirical issues related to the new plebiscitary democracy as a political phenomenon is a choice. I do not deny that there are wider-ranging normative and societal questions to ask, which deserve separate treatment. 25 The analytical triangle of Figure 2 has three corners pointing to, first, an established system of electoral democracy; second, a comparatively advanced set of deliberative additions that have been promoted widely since roughly the 1990s; and third, a comparatively new set of plebiscitary additions that have accelerated strongly in the 2010s. This last, the bottom-left corner, is thus far the least documented by empirical research. This may be understandable from a historical perspective, but in view of 21st-century developments this needs changing urgently. The exploration of plebiscitary democracy developments in previous sections prompts a number of follow-up questions, of which the following take precedence.

A1: What are the enduring expressions of 21st-century plebiscitary democracy?
A2: What are the main drivers behind these expressions?
A3: What are the implications for citizen participation and civic culture?

Expressions. First, we need to track and trace which of the developing formats of 21st-century plebiscitary democracy, summarized in Table 1, develop into more or less durable expressions. Of the many new formats that are tried and tested at some point in time, a smaller set of formats is expected to become institutionalized and passed on. Some of these formats may develop within the confines of the four ideal-typical categories distinguished above; a strong candidate is, for instance, the issue-supporting e-petition, mobilizing outside-in digital support for particular causes. Some other formats may cross boundaries: we could think of the new-style political party website that is used for leadership votations as well as issue-related vox polls. When the deliberative turn was proclaimed in the 1990s, it took years of extensive research to reach a significant level of consensus on the main empirical formats of deliberative democracy (Bächtiger et al. 2010; Gastil and Levine 2005). A similar trajectory could be expected for the new plebiscitary democracy. To provoke future research, it is postulated that inside-out formats developed by governments and political actors will increasingly be designed to capture or placate outside-in pressures for votations, which will in turn trigger new and other outside-in formats. Additionally, it is postulated that plebiscitary formats with staying power will be leader-focused and issue-focused, as both reflect more general tendencies of information-age societies to rate people as well as things (cf. Hill 2013; Keen 2007; Nagle 2017; Susskind 2018). Drivers. Second, we should understand the driving (f)actors behind the new formats of voting better than we presently do. In addition to technological drivers, there are cultural and related political drivers to consider. The turn to deliberative democracy in the late 20th century was seen to be related to the coming of age of a new social and political culture, which had pushed values of active participation, open communication and self-expression since the late 1960s (cf. Dryzek 2000; Inglehart 1990).
Likewise, it seems that the new plebiscitary democracy is pushed by the more recent rise of populism, favouring more 'hardball', aggregative and majoritarian practices. If populism is about who should govern (Norris and Inglehart 2019: 248), then the new plebiscitary democracy seems to fill in how this can be done: with renewed and radicalized variants of plebiscites. Relatedly, it seems quite plausible that new plebiscitary instruments are turned and appealed to as a response to a real or perceived crisis of established parties and electoral politics (cf. Bardi et al. 2014; van Biezen et al. 2012). The technological push behind the new plebiscitary democracy is evident, but at the same time insufficiently understood. New digital and social media applications, connecting user-friendly smart devices to broadband internet, seem to push competitive, vote-counting practices (Halupka 2014; Harari 2017: 394, 435 ff.; Hill 2013). But what are the underlying mechanisms and connections, and who are the actors and organizations that actually forge the technological push? Implications. Focusing on the empirical-political consequences of the new plebiscitary democracy, as we do here, the consequences for civic culture and democratic citizenship are highly urgent. 26 One of the obvious questions here is how and to what extent new plebiscitary practices help or hinder different types and groups of citizens in a political sense. Studies of political clicktivism suggest that its participants display a rather different profile than participants in deliberative-democracy practices: on average less highly educated, less interested in detailed policy-oriented meetings and more interested in quick messaging via mass media (Halupka 2014; Nagle 2017; Sunstein 2017). While this may be true for particular expressions, there is reason to believe that this does not work exactly the same for all expressions of plebiscitary democracy. For instance, the initiators of and the participants in the e-petition demanding a second referendum on Brexit, another previous example, displayed a rather different profile from the ones behind the #LockHerUp Twitter rally. 27 A more refined picture of what plebiscitary democracy in its various guises does with citizens and participation is thus needed. Another obvious question here is how and to what extent new plebiscitary practices push a shift from pluralism to populism (the reverse of what is asked under question A2).

Plebiscitary additions and established electoral democracy

While the interplay between deliberative democracy and established electoral democracy (the continuous line in Figure 2) has been problematized and investigated for many years (e.g. Dryzek 2000; Gastil and Levine 2005; Setälä 2017), a lot of catching up needs to be done for the connection between the new plebiscitary democracy and the established system of electoral democracy. As plebiscitary practices are basically more majoritarian in their setup, it would be pertinent to compare their uptake in majoritarian (winner-take-all) versus proportionally representative (PR) electoral systems, besides looking at how they impact on electoral democracy's central institutions and political culture:

B1: To what extent and in which way does the uptake of 21st-century plebiscitary formats differ in majoritarian versus PR electoral systems?
B2: What are its implications for electoral democracy's central institutions?
B3: What are its implications for political discourse and governing style?

Uptake.
It could be argued that winner-take-all (district-based majority or plurality) systems present a more fertile breeding ground and a more conducive political opportunity structure for 21st-century plebiscitary formats than PR electoral systems. 'Majoritarian institutions breed plebiscitary votations' seems plausible at face value, but this needs to be reviewed more closely. A rival hypothesis would be that PR electoral systems, because of their diluted compromises between multiparty elites, trigger plebiscitary reactions that follow extra-institutional pathways (cf. Caramani and Mény 2005). If we look at the underlying, longer-existing formats of plebiscitary democracy, we do not see one clear pattern for all formats. Primaries and recalls have spread most prominently in the US two-party system. Public polling (of both political personae and issues) also developed earlier and more strongly in this context, but not uniquely there. Referendum practices have developed strongly under PR circumstances in Switzerland, Italy and more recently subnational Germany, while the majoritarian electoral systems of France and the UK have also seen referendums, albeit of different kinds (Altman 2011; Qvortrup 2018). By the same token, we should expect the emerging formats of the new plebiscitary democracy to follow not one but various institutional pathways. More specifically, we should expect majoritarian and PR systems to both trigger elite-focused and issue-focused votations, outside-in as well as inside-out, following different paths of action and reaction yet to be understood. Impact on central institutions. New political tools and applications potentially impact the positions and resources of central institutions in electoral democracy, such as executive offices, representative bodies and political parties. Institutional and network analyses should reveal whether and how this is the case, first and foremost for the countervailing powers of executive and representative institutions. Both sides can organize inside-out votations, and both can be the target of outside-in votations, but differences are likely to exist. The expectation is that representative bodies (parliaments, regional and local councils) have more leeway to direct or redirect the vox populi towards the executive (governing boards and office-holders) than the other way around. The executive branch seems more challenged, potentially disrupted, by votations targeting specific issues and politicians, further reducing governing discretion. Particularly interesting to trace is how new populist politicians position themselves in the matrix of plebiscitary pressures and possibilities (see Table 1) when taking up executive responsibilities, and how this differs from more traditional governing elites. Not only in executive office, but in the political realm in general, the strategic positioning of populist parties is worth following. It could be argued that the new plebiscitary democracy gives them a strategic advantage vis-à-vis establishment parties, as this 'medium' seems closest to their 'message' (cf. Mudde 2004; Müller 2016). History, however, teaches us not to rule out surprises. Catholic conservatives in 19th-century Switzerland, for instance, were originally far removed from the referendum instrument, but nevertheless discovered and captured its strategic use (Kriesi and Trechsel 2008). Political discourse and style.
An important question is how political debate (including claim-making and rhetoric) and governing style (including manner of communication and interaction) change in connection with new plebiscitary practices. Do we see the discourse surrounding plebiscitary votations, with its strong focus on mass, traction and numbers, echoed or reframed in the political (speech) acts that emanate from the benches in parliaments and local and regional councils? Do we see governments develop new ways of dealing with the general public, proactively or reactively tackling the public voice that is constructed around issues or people? Cultural and more specific discourse analyses should reveal whether and how this is the case. We already asked if and how plebiscitary practices spread differently in majoritarian (winner-take-all) versus non-majoritarian (consensual) democracies (question B1). Here, we can add: Do they touch these systems differently on a cultural level? The proposition, sharp on purpose, is that cultural disturbances following the emergence of new plebiscitary practices are more intense in consensus democracies than in majoritarian systems. The clash with consensus democracy's focus on integrative elite deliberation and its fear of mass politics by the numbers is comparatively more intense, and could be expected to inspire repulsive discourse and aversive action sooner and more strongly (Hendriks 2009). 28

Deliberative and plebiscitary additions

The various formats of deliberative democracy (from mini-publics to consensus conferences and everything in between) have been extensively described; this pertains also to how deliberation may clash with, and how it can contribute to, established electoral democracy (cf. Beauvais and Warren 2019; Dryzek 2000; Hendriks and Kay 2019; Setälä 2017). The relationship between deliberative democracy and 21st-century plebiscitary democracy, however, still needs to be defined properly. The two can be viewed as rival democratic innovations, but also as formats that to some extent may be combined to contribute to the democratic process. This prompts two types of questions:

C1: What are the comparative merits (advantages and disadvantages) of new plebiscitary versus deliberative formats?
C2: What is the feasible space for combinations, for connecting new plebiscitary and deliberative formats?

Comparative merits. Comparative (dis)advantages need to be analysed, first of all, at the level of internal qualities. What is it that new plebiscitary formats, because of their design characteristics, do better or worse than deliberative formats? As they involve different technologies and organizational models, they should be expected to have different sorts of leverage for different purposes. Frank Hendriks (2019) has compared different forms of mobilized and randomized deliberation with the digital quasi-referendum on four quality criteria: equality, participation, deliberation and concretization. By way of its design, the quasi-referendum can reach more participants, but in terms of equal opportunities for representation, randomized deliberation generally has better credentials. Such a comparative analysis should be broadened to include other versions of 21st-century plebiscitary democracy (the e-petition, the digital primary, etc., outlined in Figure 2, and to be detailed from question A1).
The analysis should be open to possible additional qualities, such as channelling collective self-expression (designed into e-petitions, hashtag clicktivism and the like), or empathy for the other side of an argument (not habitually designed into these formats). In addition, we should compare the external effects of different formats in relation to established electoral democracy. Plebiscitary formats have a specific way of working with or against representative politics. Like deliberative-democracy formats, plebiscitary versions start with criticism of 'thin' electoral democracy, which would be inferior to what alternative methods can offer. While deliberative formats can claim deeper and richer collective reflection (Bächtiger et al. 2010), plebiscitary formats may claim popular support in larger numbers. In a comparative analysis of merits, the question should not only be 'Is it true?' (do they deliver the quality and support levels that they claim?), but also 'Does it matter?' (to what extent and how are they able to change courses of action, politics and policies in the real world?). Space for combinations. New plebiscitary and deliberative formats display different democratic logics, which are in many ways at odds with each other. Does that mean the twain shall never meet? Not necessarily, as some empirical instances of deliberative-plebiscitary mixing show. The Irish mini-public on abortion was mainly a deliberative and integrative affair, but also included moments of aggregation and counting, most prominently in the final referendum that confirmed the advice that the mini-public had produced (Farrell et al. 2019). The 'citizen initiative review' is another hybrid, in which a deliberative mini-public is asked to look into and advise on the options put forward by a citizen initiative, prior to massive, dichotomous voting (Gastil et al. 2017). In theory, various new combinations of digital voting and electronic deliberation could be envisioned (Susskind 2018: 212-213). For design thinking in the realm of democratic innovation this is promising land to explore. For the empirical research agenda advocated here, the relevant questions are where, when and how such new combinations appear. What appears to be the feasible space for such combinations? Does the type or scale of public governance constrain the appearance of mixed models? In general, it may be expected that keeping deliberative and plebiscitary formats apart is the default position and that mixing them requires special circumstances. To put it differently, 'mixophobia' (fear of pollution) is the primary pattern to be expected in the relation between new plebiscitary and deliberative practices, and 'heterophilia' (love for the different) is the exception requiring special triggers, which need to be pinpointed. As an alternative proposition, it is suggested that democratic innovators, notwithstanding possible inhibitions, are ultimately forced to respond to the heterogeneous needs of users and fields of application.

Concluding remarks: taking the new plebiscitary democracy on

The central conclusion of this article is quite simply that a new plebiscitary democracy is developing, with various new formats building and varying on longer-existing formats, and that this presents an urgent development that warrants more systematic research than is presently available.
The article develops a matrix of central expressions of the new plebiscitary democracy (summarized in Table 1) and priority areas for research into the phenomenon itself and its relationship with established electoral democracy, and with deliberative democracy as an alternative source of democratic transformation (Figure 2). Plebiscitary transformations partly overlap with deliberative ones and with established electoral democracy. The Venn diagram with three overlapping spheres (Figure 1) could be compared to the one used by Russell Dalton, Bruce Cain and Susan Scarrow (2003: 252-256) to summarize their seminal research on democratic transformations in 18 OECD countries between 1960 and 2000. While the sphere of established electoral democracy remained about as important, the spheres of 'direct democracy' and 'advocacy democracy' (as Dalton et al. framed the two main alternatives to established electoral democracy) grew significantly in the last four decades of the 20th century. Considering 21st-century developments in democratic practice, the two main alternatives to electoral democracy are reframed here as plebiscitary democracy and deliberative democracy. As the 'turn' to the latter has been documented extensively (cf. Bächtiger et al. 2010; Dryzek 2000), the objective here was to unravel developments in the sphere of plebiscitary democracy, building on the formal expressions (referendum, initiative, recall and so on) that Dalton et al. classify as direct democracy. We saw a multitude of new 21st-century plebiscitary practices emerge on a substratum of older plebiscitary formats. Echoing the words of Dalton et al. (2003: 255), it is possible to sketch only in 'imprecise terms' the growing significance of the circle of plebiscitary additions. Admittedly, the evidence presented in previous sections is mainly qualitative. But as quantitative indicators for the prevalence of formal referendums have been developed (cf. Altman 2011; Qvortrup 2018), such indicators can also be developed for the (often digital) quasi-referendums of the new plebiscitary democracy, although this will need time. 29 Various objections to this account can be envisioned. The first and potentially most damaging objection would be that there is no such thing as a new plebiscitary democracy, or at least nothing genuinely new about it. It is a deep truth that in the world of democracy almost nothing is unrelated to something old. As we have seen, new plebiscitary formats reinvent and radicalize longer-existing formats. They do so in a period of revolutionary technological change (a massive uptake of broadband internet, an explosion of smart devices and interactive social media) which takes plebiscitary formats to a next stage and level. While technological innovations make new ways of direct voting increasingly possible, related shifts in popular culture make them increasingly popular. Since the turn of the century, interactive television with popular televoting formats has strongly converged with the internet and social networking (Bignell 2012: 283-293). Countless new and old media have followed suit, which has contributed to the uptake and popularity of practices such as the ones described here (Ross 2008).
At this point, a small thought experiment is suggested: take the matrix of new plebiscitary options in mind (Table 1), follow the political news for a month or so, and then ask yourself whether the new plebiscitary democracy is any less real in an empirical sense than the turn to deliberative democracy which was proclaimed earlier. A second objection is to say that new plebiscitary practices may be coming to the fore, but should not be taken seriously, accredited with academic research, or compared with consciously designed democratic innovations backed up by refined political theory like deliberative-democracy theory. Are new practices of voting not often ill-designed, flimsy, quick-and-dirty, and potentially dangerous? New plebiscitary votations may be popular, but doesn't this make them vulnerable to populism, to democratic illiberalism? Even if we assumed that the previous questions could already be answered with an unequivocal yes for all new plebiscitary practices, the case for doing more research into the phenomenon would be fortified, not weakened. Even though there are dubious practices that need to be exposed, not all the new plebiscitary voting practices can be dismissed so easily. Guilt by association is not fair grounds for sentencing, and could be challenged with reason. If the European Green Party organizes an 'Open Online Primary' it is not automatically on the same page as the Italian Five Star Movement when it organizes some e-referendum. And if such parties or other organizations are experimenting with digital plebiscites, then the relation with democratic values needs to be investigated properly, not a priori assumed to be negative. 30 In general, the democratic claim associated with new plebiscitary practices must not be taken for granted, but neither can it be dismissed from the outset. A third objection would be to argue that the new plebiscitary democracy is indeed real and to be investigated seriously, but not described with enough detail in this article. Surely, specific exhibits of the new plebiscitary democracy (the Podemos digital referendum, for instance, or the EU online survey) can be developed into detailed individual case studies. But this was neither the focus nor the objective here. Depth and singularity have been deliberately sacrificed here in favour of revealing the empirical breadth and interconnectedness, as well as the typological variety, of the phenomenon. From an explorative perspective, a wide-angle group portrait was deliberately chosen over close-up individual portraits. More fundamental than the details of individual cases, it was argued, are the general types of developing formats emerging on a substratum of longer-existing methods. The Podemos digital referendum, for instance, is put in a wider perspective here, exhibiting how older plebiscitary expressions are being reinvented with 21st-century tools and terms. Undoubtedly, more can be said about such a case when developed from a more hermeneutical perspective. In the research agenda that was set out, such research is actively promoted. The proposed research agenda is open-ended. It defines priority areas for research, and formulates urgent research questions that can be elaborated on, as will be required for an emergent and dynamic phenomenon such as the new plebiscitary democracy.
3 The term 'techno-activism' is preferred to Harari's (2017: 409-427) term 'techno-humanism', which may prompt much wider meanings of 'humanistic' (people's better qualities) that are not automatically included in techno-activism.
4 Susskind (2018) distinguishes five roads that future digital democracy can take: the one that he calls 'direct democracy' is akin to the plebiscitary additions to electoral democracy that we discuss in the next section; a particular part of what we discuss there also comes close to what Susskind calls 'data democracy'. His other three roads lead to 'AI democracy' (using artificial intelligence systems to perform specific tasks), 'wiki democracy' (digital co-production in Wikipedia style) and 'deliberative democracy' (reflective discussion by digital means), and are not plebiscitary by design. In terms of Susskind (2018: 224-225), the latter reflect the 'talkers' and not the 'counters' in the democratic innovation debate. A lot of energy, according to Susskind (2018: 219-221), has been invested in 'new ways of doing old things', electronically enhanced but not radically new: working together on projects, organizing campaigns, action and protest.
5 The 'party referendum' organized by Podemos was targeted at its political leadership. The online poll/design contest organized by the NERC was focused on the proper naming of a publicly funded research vessel: an issue of public governance.
6 Conventionally a plebiscite is called 'consultative' when political actors heed the voice of the people in a top-down fashion without formally binding consequences. This was the case here (although the Podemos vote was taken seriously, the compliance by the leadership was voluntary) and is usually also the case in similar forms of digital voting.
7 Although 'top-down' versus 'bottom-up' are often used as (slightly imprecise) shorthand for similar patterns, we take 'inside-out' versus 'outside-in' as the preferred analytical distinction, as it highlights the difference between votations that are initiated from positions within the political system à la Easton versus votations that are initiated from positions external or peripheral to the political system.
8 In these cases plebiscitary democracy overlaps with what Susskind (2018: 246-250) describes as data democracy. More on the instrumental version of data democracy in Dunleavy et al. (2005) and Giest (2017).
9 In subsequent years the winning ideas were a pedestrian air-passage, a skating rink and an urban surf arena.
10 The slogan made popular by the Idols song contest, which travelled from the UK to the US, and then a great many other countries (Ross 2008). American Idol introduced text-message voting in 2003 and online voting in 2011.
11 The qualification 'plebiscite by another name' is consonant with e.g. Altman's (2017) classification, which distinguishes top-down, government-initiated, facultative plebiscites (the Australian example checks all these boxes) from bottom-up signature-triggered referendums and initiatives.
12 Another rare example would be a majoritarian 'hat-on hat-off' voting procedure, referring to popular TV formats ('you vote, you decide!'), in public meetings that previously averted such votations. Non-digital, low-tech, but significant in cultural terms.
13 Other political parties experimenting with digital voter feedback include the German Pirates, Podemos in Spain, and Jeremy Corbyn's Labour Party in the UK, which organized internet polls on issues such as military action against IS.
In the US, Capitol Bells is a voter app that allows constituents to informally vote on bills in the US House of Representatives.
14 Some would say that creating a 'buzz' via old-school, podium-to-podium and door-to-door political canvassing was in essence a similar, though offline, process.
15 The candidate mayor (ad interim) was the liberal-conservative politician Loek Hermans of the People's Party for Freedom and Democracy (VVD), who was nominated for this office by Clemens Cornielje, the Queen's Commissioner of the same political party responsible for pre-selection.
16 Many countries have similar e-petition websites and procedures. In the US it is aptly named 'We the People' (see Noveck 2015 for a critical analysis).
17 A more successful example from the UK is the website 38Degrees, which hosted an e-petition to help stop England's publicly owned forests and woodland from being privatized. In 2011, half a million people put their name to its petition, which forced the environment secretary to reverse her policy (Howard 2014).
18 See https://decide.madrid.es/condiciones-de-uso and https://www.frankfurt-gestalten.de/initiativen. Such websites are places where people can start an urban initiative. In addition they do what electronic formats do well: quantifying numbers of comments and declarations of support.
19 A more ironic, but no less iconic, example is the hashtag action/petition #JusticeForHarambe, commemorating the shot Cincinnati Zoo gorilla called Harambe and 'demanding' that the authorities hold the child's parents responsible (Nagle 2017).
20 Although uncontested definitions of the political versus the apolitical are hard to find, it is widely accepted that #Blacklivesmatter is deeply political, focused on an issue of public concern, unlike, for instance, a hashtag action in support of some sports team, which is clearly not the focus of the exploration here.
21 Or: 'Muslim figure: "We must have pork-free menus or we will leave US" What's your response?' https://hoax-alert.leadstories.com/3469931-old-network-of-anti-islam-fake-news-websites-turns-to-twitter-trolling.html.
22 Yet we must resist the temptation to equate 'plebiscitary democracy' with 'digital democracy', even when the double meaning of digital (electronic and dichotomous) nicely captures a large part of the new plebiscitary democracy. There are, however, also non-electronic expressions of plebiscitary democracy to consider, as well as non-binary votations. Moreover, digital democracy also comprises formats (for instance, platforms used for networked deliberation) that are not plebiscitary in the sense of the argument advanced here.
23 Cf. Susskind (2018: Ch. 3) when discussing the 'increasingly quantified society', and Davies (2018) when pondering 'the new era of crowds'.
24 We focus here on new plebiscitary practices in established democracies, but we should note that hybrid regimes and even authoritarian ones are not excluded from some of the formats described. See e.g. #WhiteWednesdays, used in Iran to protest against the compulsory hijab. Thanks to Ammar Maleki for pointing this out.
25 Interesting questions beyond the scope of this article include: What does the new plebiscitary democracy mean for people's work/life balance? For codes of good governance? For normative frameworks of democratic innovation?
26 Implications for institutions of the established electoral system are dealt with under cluster B, and implications in terms of democratic merits under cluster C.
27 Or compare the participants in other earlier examples: the digital rally for 'Boaty McBoatface' vs the EU online survey on daylight saving (different publics, different dynamics).
28 Hendriks (2009) uses Mary Douglas's classic formulation (1966), 'dirt is matter out of place', to illustrate processes of 'pollution reduction' in democratic discourse and practice.
29 See www.c2d.ch and https://www.direct-democracy-navigator.org/ for alternative ways of mapping the territory of formal direct democracy, particularly referendums and initiatives.
30 Gerbaudo (2019) gives a good example of such an investigation, although he does not focus on democratic values specifically.
2020-03-26T10:41:56.590Z
2020-03-20T00:00:00.000
{ "year": 2020, "sha1": "f1a5269b731774f071c3afcb9e8b14e85c90a156", "oa_license": "CCBY", "oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/9A6059564F4A67E85896A3A1B43D877D/S0017257X20000044a.pdf/div-class-title-unravelling-the-new-plebiscitary-democracy-towards-a-research-agenda-div.pdf", "oa_status": "HYBRID", "pdf_src": "Cambridge", "pdf_hash": "542a86044205f1eb391636de438bcfba0d17414d", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
225863281
pes2o/s2orc
v3-fos-license
Foreign Direct Investment in India: Emerging Trends and Patterns

This paper examines the trends in foreign direct investment (FDI) in India during 1980-2018 and the emerging patterns of the FDI inflows in the country during the last two decades (i.e., since 2000).

Introduction

Foreign direct investment (FDI) is the most important vehicle through which the activities of multinational enterprises (MNEs) are spread across the world. Many countries across the world compete to attract FDI by liberalizing their foreign investment policies. India, after following four decades of restrictive policies towards FDI, started liberalizing its policies towards FDI in 1991. This was aimed at attracting more FDI into the country not only for supplementing the domestic capital formation efforts, but also to bring advanced technology, managerial, and marketing skills as a package. The ongoing foreign investment liberalization commenced in India in 1991, as part of its drastic structural reform package, has been reflected in drastic changes in the foreign investment inflows into the country. There are studies in the literature on various aspects of FDI in India, such as its trends, patterns and issues [Kidron (1965), Ganesh (1997), Kumar (1998, 2005), and Nagaraj (2003)], its determinants [Kaur & Sharma (2013) and Archana et al. (2007)], and its relation with economic growth in the country [Chakraborty & Basu (2002)]. Since the trends and patterns of FDI are a dynamic phenomenon in an economy, a rigorous analysis of FDI with a long time series data set incorporating its recent developments will shed more light on its evolving picture, and this paper aims to do this for India. This paper attempts to portray the trends in foreign direct investment in India from 1980 to 2018; it also tries to expose the emerging patterns of the FDI inflows in the country during the last two decades. The remainder of the paper is organized in four sections: the second section describes the data sources and methods used for the analysis of the data, the third section illustrates the trends in the inward FDI flows as well as the FDI stock in India, the fourth section shows the emerging patterns of FDI inflows into the country, and the last section presents the concluding remarks.

Data and Methods

The analysis in this paper is based on secondary data obtained from two important sources, viz., (i) the United Nations Conference on Trade and Development (UNCTAD) and (ii) the Department for Promotion of Industry and Internal Trade, Ministry of Commerce and Industry, Government of India. The analysis of the data has been done using tables, line graphs, bar charts, simple growth rates, and compound growth rates. Compound annual growth rates for (m+1) sub-periods have been estimated by using the following model:

ln Y_t = α + β₀t + β₁(D₁t) + β₂(D₂t) + … + βₘ(Dₘt) + u_t

where ln Y_t is the natural logarithm of the variable of our interest (say, FDI in our case), D_i is a dummy variable which takes the value '1' for the observations belonging to the sub-period of our interest and '0' otherwise, 't' represents time, which takes values 1, 2, 3, etc., and u_t is the usual stochastic error term. The compound annual growth rate (r) of Y in percentage terms can be easily obtained from the estimated β coefficients through the standard conversion for log-linear models, r = (e^β − 1) × 100, applied to the relevant sub-period slope.
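By way of illustration, a minimal sketch of this estimation in Python might look as follows; the FDI series and coefficient values below are synthetic placeholders, not the study's UNCTAD data, and the single break year 1992 is chosen only for demonstration:

```python
import numpy as np
import statsmodels.api as sm

years = np.arange(1980, 2019)          # 1980-2018, as in the paper
t = np.arange(1, len(years) + 1)       # time index 1, 2, 3, ...
D = (years >= 1992).astype(float)      # dummy: post-liberalization sub-period

# Synthetic log-FDI series with a steeper post-1992 slope plus noise.
rng = np.random.default_rng(0)
log_fdi = 0.5 + 0.13 * t + 0.12 * D * t + rng.normal(0, 0.1, len(t))

# Regress ln(FDI) on a constant, t and the interaction D*t.
X = sm.add_constant(np.column_stack([t, D * t]))
fit = sm.OLS(log_fdi, X).fit()

to_cagr = lambda b: (np.exp(b) - 1) * 100   # r = (e^beta - 1) * 100
print(f"Pre-liberalization CAGR:  {to_cagr(fit.params[1]):.1f}%")
print(f"Post-liberalization CAGR: {to_cagr(fit.params[1] + fit.params[2]):.1f}%")
```

Sub-period growth rates of the kind reported in the next section follow from fits of this general form, with one dummy interaction per additional sub-period.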
The study primarily addresses two research questions: (i) whether the liberalization of the FDI policy regime in India has resulted in accelerating the FDI inflows into the country, and (ii) whether the pattern of FDI distribution has undergone any drastic changes in the second phase of liberalization. The study posits two associated hypotheses: (a) liberalization of FDI policies in India has accelerated the FDI inflows into the country; and (b) the pattern of FDI distribution in India has undergone drastic changes in the second phase of liberalization.

Trends in the Foreign Direct Investment Inflows into India (1980-2018)

The trends in FDI inflows into India during the last four decades can be visualized from Figure 1. The FDI inflows into the country were negligible and almost stagnant until the beginning of the 1990s. The restrictive inward-looking policies followed till the early nineties made India one of the less attractive FDI destinations in the world, which in turn resulted in such a sluggish trend in the FDI inflows to the country during this period. However, the massive liberalization policies commenced in the early 1990s have resulted in a dramatic upsurge in the FDI inflows into the country. Figure 2 shows that the compound annual growth rate of FDI inflows into India jumped from 14 percent during the pre-liberalization period (1980-1991) to 26 percent during the first decade of the post-liberalization phase (1992-2003), a drastic increase of 12 percentage points. There was a further spurt in the FDI inflows into India during the next few years (2004-2008), and the compound annual growth rate in the FDI inflows stood at its peak during this period, crossing 50 percent. However, the growth rate in the FDI inflows drastically decelerated to around 6 percent during the next decade (2009-2018). FDI inflows into India as a proportion of the world's FDI inflows, and also as a proportion of the gross domestic product of the country, have been computed, and the trends have been plotted in Figure 3. The figure clearly illustrates that the trends in FDI inflows into India as a proportion of the world FDI inflows and as a proportion of the GDP of the country were somewhat similar to the trends we observed in Figure 1. To be more specific, the share of India in the world FDI inflows, albeit negligible, increased during the post-liberalization period, even though it has declined and become stagnant during the last few years. An almost similar trend could be observed in the case of FDI inflows into India as a proportion of the GDP of the country. It is striking to note that India, a country sharing 2.4 percent of the total world area and 17.7 percent of the world population, had been receiving less than one percent of the world FDI inflows till 2005. The average share of India in the world FDI inflows was around two percent in the subsequent 13 years. The significant increase in the FDI inflows into India in the post-liberalization period, particularly the increase in the country's proportion of the world FDI inflows, clearly indicates that the liberalization measures accelerated FDI inflows into India. The FDI stock in a country indicates the availability of foreign capital at a particular point in time to supplement the domestic capital formation of the country. Figure 4 reveals that the stock of FDI in India was negligible in the pre-liberalization period.
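The two ratios plotted in Figure 3 are simple arithmetic; the sketch below shows the computation with round, made-up placeholder values (in US$ billions) rather than the UNCTAD and GDP series actually used in the paper:

```python
# Illustrative placeholder values only, not the study's data (US$ billions).
india_fdi = {1990: 0.2, 2005: 7.6, 2018: 42.0}
world_fdi = {1990: 205.0, 2005: 950.0, 2018: 1300.0}
india_gdp = {1990: 320.0, 2005: 820.0, 2018: 2700.0}

for year in india_fdi:
    share_world = 100 * india_fdi[year] / world_fdi[year]   # share of world inflows
    share_gdp = 100 * india_fdi[year] / india_gdp[year]     # share of own GDP
    print(f"{year}: {share_world:.2f}% of world FDI inflows, {share_gdp:.2f}% of GDP")
```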
Still, it started picking up in the initial phase of the post-liberalization period (1992-2005), and it has increased steadily since 2006. The FDI stock in India grew, on average, by 13 percent per annum in the pre-liberalization period. This accelerated to around 26 percent in the first phase of the post-liberalization period (1992-2005), but the compound annual growth rate in the FDI stock fell back to 13 percent during the second phase of the post-liberalization period (2006-2018). Thus, it is evident that liberalization has resulted in enhancing the stock of FDI in India to supplement its domestic capital formation and technology efforts. India's share in the global FDI stock has been negligible in both the pre- and post-liberalization periods. However, the share of India in the world FDI stock has gradually increased in the post-liberalization period. Still, India's share was less than one percent of the global FDI stock till 2009, and it has stood, on average, at around one percent since 2010. FDI stock as a proportion of the GDP of India was less than one percent in the pre-liberalization period, but it steadily increased to around 10 percent by 2008 and further to 14 percent by 2018.

The Emerging Patterns of Foreign Direct Investment Inflows in India

This section discusses the emerging patterns of FDI in India in terms of (i) components of FDI inflows, (ii) source-wise distribution of FDI in India, and (iii) sector-wise distribution of FDI in India during the last two decades. The proportions of the three important components of foreign direct investment in India, viz., (i) equity capital, (ii) reinvested earnings, and (iii) other capital, during the last 19 years are shown in Table 2 and Figure 7. It is perceptible that, as usual, equity capital has been the largest component of FDI inflows into India; moreover, its share in the total FDI has increased from 60 percent in 2000-01 to 73 percent in 2018-19. The share of reinvested earnings in the FDI declined from 34 percent to 22 percent during the same period. Source-wise distribution of FDI can often indicate the nature and quality of FDI inflows into a country. Table 3 presents the share of the top 10 source countries in the FDI inflows to India; it can be observed from the table that Mauritius and Singapore were the two important sources of FDI inflows into India during the last two decades. Mauritius has contributed more than 31 percent of the total FDI inflows to India during the last two decades, followed by Singapore (21%). They were distantly followed by the other countries listed in the table. It is striking to note that Singapore became India's largest source of FDI in 2018-19. These two countries have become the top sources because they are considered tax havens, and FDI from other countries like the USA and the UK was routed through them to India to enjoy tax benefits. Sector-wise distribution of the FDI inflows into India during the last two decades reveals that the service sector has attracted the largest share of FDI in India, distantly followed by the computer software & hardware and telecommunications sectors (see Table 4). These three sectors have attracted more than 35 percent of the total FDI inflows into India during the last two decades; the ten important sectors listed in the table have attracted more than 66 percent of the total FDI inflows in India during the last two decades.
The sectoral distribution of FDI in India in the last two decades of post-liberalization is drastically different from that in the pre-liberalization period. Earlier, the manufacturing sector was the largest recipient of FDI in India, but it has now been replaced by the service sector.

Concluding Remarks

India, after following four decades of restrictive policies towards foreign direct investment, started liberalizing its foreign investment policies in 1991 to attract more FDI into the country. The liberalization measures have resulted in a dramatic upsurge in the FDI inflows into the country, compared to the sluggish trend in the FDI inflows in the pre-liberalization period. Not only the amount of FDI inflows but also the share of India in the global FDI inflows has increased in the post-liberalization period, clearly reflecting the role of liberalization in attracting FDI inflows into India. Liberalization has not only increased the FDI inflows; it has also increased the stock of FDI in India, augmenting the domestic capital formation and technology efforts in the country. Even though India's share in the global FDI stock has not increased significantly, FDI stock as a proportion of the GDP of India has shown a sharp increase during the post-liberalization period. We have examined the patterns of FDI inflows in India in the last two decades to get a better understanding of the present status of FDI in the country. Equity capital has been the largest component of the FDI in India, and its share in the total FDI inflows has been increasing during the last two decades. Examination of the source countries contributing FDI inflows into India revealed that Mauritius and Singapore have been the two important sources of FDI inflows into India during the last two decades. More than half of the FDI flows into the country during the last two decades came from these two countries. These two countries have become the largest sources of FDI inflows into the country because of India's tax treaties with them, and FDI from other countries like the USA and the UK has been routed through them to India. The sector-wise distribution of FDI in India has changed drastically during the post-liberalization period. Traditionally, manufacturing had been the largest recipient of FDI in India. But this has changed considerably in the later phase of the post-liberalization period: the service sector has emerged as the largest recipient of FDI during the last two decades. Our analysis has given clear empirical support to our first hypothesis that FDI liberalization has had a significant impact on enhancing the FDI flows as well as the FDI stock in India. Our analysis has also furnished empirical support to our second hypothesis by revealing that the liberalization measures have resulted in changing the pattern of FDI inflows and its distribution in the country. In this context, we think that the government should continue formulating appropriate policy changes to attract more FDI into technology-intensive, employment-generating, and export-oriented sectors in India.
Can polygenic risk scores contribute to cost-effective cancer screening? A systematic review

Purpose: Polygenic risk influences susceptibility to cancer. We assessed whether polygenic risk scores could be used in conjunction with other predictors of future disease status in cost-effective risk-stratified screening for cancer.

Methods: We undertook a systematic review of papers that evaluated the cost-effectiveness of screening interventions informed by polygenic risk scores compared with more conventional screening modalities. We included papers reporting cost-effectiveness outcomes, with no restriction on type of cancer or form of polygenic risk modeled. We evaluated studies using the Quality of Health Economic Studies checklist.

Results: A total of 10 studies were included in the review, which investigated 3 cancers: prostate (n = 5), colorectal (n = 3), and breast (n = 2). Of the 10 papers, 9 scored highly (score >75 on a 0-100 scale) when assessed using the quality checklist. Of the 10 studies, 8 concluded that polygenic risk-informed cancer screening was likely to be more cost-effective than the alternatives.

Conclusion: Despite the positive conclusions of the included studies, it is unclear whether polygenic risk stratification will contribute to cost-effective cancer screening, given the absence of robust evidence on the costs of polygenic risk stratification, the effects of differential ancestry, potential downstream economic sequelae, and how large volumes of polygenic risk data would be collected and used.

Reporting checklist fragment (columns: section and topic; item #; checklist item; location where item is reported):

Synthesis methods
- 13a: Describe the processes used to decide which studies were eligible for each synthesis (e.g., tabulating the study intervention characteristics and comparing against the planned groups for each synthesis (item #5)). (Page 10)
- 13b: Describe any methods required to prepare the data for presentation or synthesis, such as handling of missing summary statistics, or data conversions. (Page 10)
- 13c: Describe any methods used to tabulate or visually display results of individual studies and syntheses. (Page 10)
- 13d: Describe any methods used to synthesize results and provide a rationale for the choice(s). If meta-analysis was performed, describe the model(s), method(s) to identify the presence and extent of statistical heterogeneity, and software package(s) used. (Page 10)
- 13e: Describe any methods used to explore possible causes of heterogeneity among study results (e.g., subgroup analysis, meta-regression). (Page 10)
- 13f: Describe any sensitivity analyses conducted to assess robustness of the synthesized results. (Page 10)

Reporting bias assessment
- 14: Describe any methods used to assess risk of bias due to missing results in a synthesis (arising from reporting biases). (N/A)

Certainty assessment
- 15: Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome. (Page 10)

RESULTS
Study selection
- 16a: Describe the results of the search and selection process, from the number of records identified in the search to the number of studies included in the review, ideally using a flow diagram. (Pages 10-12)
- 16b: Cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded. (N/A)

Study characteristics
- 17: Cite each included study and present its characteristics. (Pages 12-13)

Risk of bias in studies
- 18: Present assessments of risk of bias for each included study. (Pages 18-19 and Appendix 4)

Results of individual studies
- 19: For all outcomes, present, for each study: (a) summary statistics for each group (where appropriate) and (b) an effect estimate and its precision (e.g., confidence/credible interval), ideally using structured tables or plots. (Appendix 3)

Results of syntheses
- 20a: For each synthesis, briefly summarise the characteristics and risk of bias among contributing studies. (Pages 17-18)
- 20b: Present results of all statistical syntheses conducted. If meta-analysis was done, present for each the summary estimate and its precision (e.g., confidence/credible interval) and measures of statistical heterogeneity. If comparing groups, describe the direction of the effect.
- 24a: Provide registration information for the review, including register name and registration number, or state that the review was not registered. (Page 2)
- 24b: Indicate where the review protocol can be accessed, or state that a protocol was not prepared. (Page 2)
- 24c: Describe and explain any amendments to information provided at registration or in the protocol. (N/A)

Support
- 25: Describe sources of financial or non-financial support for the review, and the role of the funders or sponsors in the review. (Page 2)

Competing interests
- 26: Declare any competing interests of review authors. (Page 24)

Availability of data, code and other materials
- 27: Report which of the following are publicly available and where they can be found: template data collection forms; data extracted from included studies; data used for all analyses; analytic code; any other materials used in the review. (Page 24)

Appendix 2 - Inclusion and exclusion criteria
Kinesthetic Imagination in Architecture: Design and Representation of Space

Histories of architecture have long recognized the vital role of concepts, strategies and principles exchanged between architecture and film, which reconfigured their systems of knowledge and made this relationship rich. Nonetheless, film has been used mainly as an instrument of narration and representation in architecture, and only rarely engaged in questioning how it affects the way we understand, think and design space. Some of the most recent architectural design practices have recognized that film, using its specific screen environment, can provide a source of new architectural imagination while contextualizing our kinesthetic experience of space. In this article, I will examine how kinesthetic imagination has informed architectural practice in relation to the established practices of architectural representation.

INTRODUCTION

Early film projects, like Man with a Movie Camera by Dziga Vertov (1929), recognized an analogy between film and the cinematic eye, and thus the possibility of extending perception from the paradigmatic change in viewing conditions to the construction of reality by kinematic means. The origin of such a transfer, from a positively defined real space to space mediated by media, was identified at the time of the emergence of the modernist space-time paradigm. The analysis of the first modernist architectural experiments, from Sant'Elia to Le Corbusier, revealed a common tendency to reproduce movement; nevertheless, they undoubtedly testify to the neglect of certain ways of imagining space. On the other side, by emphasizing the problem of visual representation, Giedion draws attention to the multi-perspectival character (movement) and hence to the kinematic element that is embodied in the design (space) of certain examples of modern architecture.1 Taking this tendency to engage film as a tool to express kinesthetic experience in architecture, we can offer another interpretation of these attempts. Considering how the theorization of post-WWII cinema fundamentally influenced the spectator's perception of time and space, conditions were created for connecting film with a specific spatial organization.

BEYOND THE BOUNDARIES OF SELF-REFERENTIALITY

The discussion's concern to explain the dominant modes of spatial representation has significantly obscured the role of kinesthetic experience as a way of imagining space. Even so, some late 20th-century architectural design experiments have set the practice beyond the boundaries of self-referentiality, in which architecture is considered an indicator of pulse and liveliness, facing the senses. Accordingly, contemporary experiments tend to generate conceptions which classify architecture not only in the context of purely technological, geometrical and typological appearances, but also as a perceptual category. Today it is quite clear that the architecture of the media age has finally set the observer in the central position of the analysis.2 In consequence, the recognized categories of knowledge have been replaced by experience as a privileged category in architecture. In other words, material entities, such as an object or a building, associated with the technologies of motion and media, are visually conditioned rather than materially assessed.
That is why searching for appropriate terms to address this issue pushes the limits of the discussion to levitate epistemologically between the conceptual and the experiential. This tendency is characteristic of the work of the German art historians of the Einfühlung.3 Through the most important principles established in their work, the art historians of the Einfühlung enhanced critical dialogues to meet the conditions of viewing space from the standpoint of the body in movement. Talking about a choice among "interpretations" of movement would be misleading, for what one is choosing here is also the experience which embodies interpretation, and therefore features our real kinesthetic experience of space. Apart from identifying mutual interactions between the space and the body, this method also involves the risk of a direct juxtaposition between the form of space and the movement of the body, as anticipated. Being the mirror for the basic principles of modern architecture, it reflects unity and coherence, discontinuity and fragmentation, as pure compositional issues of shaping spaces. As such, they owe the definition of the object that recognizes the world as a quantifiable phenomenon, with optical and geometrical proof of the accurate description of the world.4 This is in contrast to Merleau-Ponty's remarks about the role of body movement, which highlight the idea "that object characteristics remain constant denies our ability to mentally change the identity of an object by displacing it." 5 With this contrast, my claim is that such negotiations between the design and representation of space are intent on discussing neither the form of space, nor the object itself, nor the observer. Today we can offer another interpretation of these attempts. Instead of studying abstract elements or the object itself, we tend to engage in questioning the relationships between objects. This is in line with some of the most recent architectural practices that have recognized the inventive power of film to provide a new source of architectural imagination while contextualizing our kinesthetic relationship with space. In particular, the analysis of several recent collage and film montage experiments has revealed ways of departing from the conceptualization of architectural elements towards the conceptualization of the relationships between these elements in the immaterial environment. This invitation to bring together elements which have not been treated in connection with each other triggers a true cross-aesthetic review that goes beyond the established classification of knowledge. As a consequence, we may achieve results that imply a different spatial order, determined by the very unpredictability and uncertainty of their connections.

ADDRESSING THE TERM KINESTHETIC IN ARCHITECTURE

Although the definition of the term kinesthetic is still inconsistent and therefore varies in the literature, we may point to the notion of kinesthetic learning as learning by carrying out physical activity rather than listening to a lecture or watching a demonstration.6 Addressing "experience by doing" in this research, the notion kinesthetic denotes the real-life experience of walking, perceiving and constructing movement, inscribing a spatio-temporal continuity onto urban space. Burdened with the connotations of a strict classification of knowledge, contemporary architectural debates typically avoid addressing the kinesthetic experience of space, and the question of how to translate that experience into architecture.
Nonetheless, steps are being taken to address the term kinesthetic (as featuring motion, direction, position, rhythm, and speed, found in seeing, among others) in order to deal with an architecture of variability, ephemerality and transience. It is thus no coincidence that the praxis of architectural representation is committed to capturing and managing movements and change over time, decisively addressing the role of direct participation in space. This visionary idea was most thoroughly treated by thinkers such as Giedion and de Certeau and, in recent years, by Iain Borden. Borden's "body-centric space production" addresses this dialectic by involving the issues of time, touch, sound, muscle, movement, balance, rhythm, and counter-rhythm as a set of complex spatial actions.7 By analyzing the action performed between the body, skateboard and terrain in his Skateboarding, Space, and the City, Borden no longer recognizes one fixed external reality: he believes that a perceived reality is inseparable from the actions performed in space.8 To avoid any direct reference to the shape, geometry and topology of space, a different reading is suggested here: hiding behind this problematic is the inseparability of the elements that provoke relationships in both space and body with their own actions, which are visually conditioned. Considering that the traditional tools of imagining space have no capacity to convey these relations in the organization and representation of space,9 we are faced with the difficulty of translating them into architecture. Namely, our action is suspended between the insufficiently defined relationships of the materiality of real space and the immateriality of movement and change over time. In consequence, what we recognize during the process of notation is that the skateboarder, while "producing" the architecture, simultaneously defies and embodies its representation. By challenging possible ways of making space visible, the crisis is reflected in the recognition that our methods and conventions of design are not timeless. Quite to the contrary, these conventions are subject to change, as were our negotiations between the arts, moving images and architecture throughout the 20th century. Observed through the lens of fluidity provided by the moving-image debates after the 1960s,10 these disciplinary negotiations redefined what can be considered art. Moreover, as anticipated by Bergson, who claims that cinema has become a model for human perception, the category of experience has given way to a new interpretation of the dynamics of modernity: "cinema became art by modulating the viewers' embodied sense of space." 11 In consequence, constructing the viewer's perspective between real space and space mediated by media was aimed at allowing the transformation of objects and spaces by unconscious optics, illusions, fiction, and optical modes in their time-based regimes. The implications in architecture can be recognized in what Penz & Lu call "the possibility of challenging the traditional spatial organization through the ability of the moving image to reveal new spatial and narrative structures." 12 From the theoretical perspective, this cross-twentieth-century exchange to finally "set" reality by kinematic means appears to be a hugely liberating process developed on the screen. More importantly, it seems to express the new freedom for architects afforded by the immaterial world of the moving image.
DETECTING RELATIONS BETWEEN REAL AND CINEMATIC SPACE

The tools that we have used to design space while interrogating visual representation, such as collage, systems of notation, and montage, are recognized today as part of architectural design methodology. In this methodology, 'movement' functions as an interface between subject, space and views. Despite the enormous potential that this shift has provided, the problem of translating 'movement' into architecture is still under consideration. This is due in large part to the fact that, at least in visual terms, 'movement' was usually resolved in the expression of dynamics that served to stimulate the human senses and produce perceptual experience. Namely, from Marinetti's experiments in poetry to Duchamp's paintings, and from the architecture of Sant'Elia to Le Corbusier, we notice the common tendency to stop time in space and to reproduce movement so that it becomes visible (Le Corbusier). This process was usually the result of creating movement subordinate to the forms used in a particular spatial system, i.e. one geometrically defined by urban planning. Nonetheless, in the transition from the 'creation of movement through space' to the 'creation of space through movement,' it seems that the decisive role in creating the concept of space was played by the participating subject. Yet, modern attempts at breaking visual conventions are related to discovering and interpreting positions and distances, showing motion, and changing the orders and times of our spatial experience as participating subjects. Thus, conditions are created for separating form from the appearance of an object but, with the structuralist shift, movement has taken on the role of organising space. As de Certeau encounters the transition to the 'creation of space through movement' in reverse, he claims that "lived space is a place of tactile apprehension and kinesthetic appropriation: territory in which seemingly unremarkable pedestrian movement begins to actively shape spaces in the city." 13 However, according to Vidler, architectural practice has been constantly suspicious of reified analogies, finding in poststructuralism a mode of setting architecture in motion.14 Eventually, Gandelsonas comments on his architectural practice "as an area of production where the subject works in a transgressive way with the notion of rules as a limit." 15 Thus, architecture has begun to be seen not as a form of language per se, but instead as a form of writing,16 thereby expanding the cultural system, of which architecture and urban spaces are elements, to incorporate movement. Just as form separates itself from appearance in this context, it now looks for support in other manifestations of the visible. In the environment of a completely new spatial system, defined by depth of field, zoom and frame, we can raise a question: how do body movement and its interaction with space become recordable, representable, and reproducible? Throughout the 20th century, there has been an ongoing struggle to overcome the intense dialectic that developed between these systems. To be specific, despite the negation of the potential dialectic between 'the creation of space through movement' (as immaterial practice) and 'its material expression in architectural representation,' which is articulated by Iain Borden and practically applied in the montage technique to record consecutive sequences, the destabilization process of architectural representation had yet to begin.
This was achieved by using montage to directly address the viewer, in order to reveal the filmmaker's intention to express the kinematic experience of space. It is particularly evident in the project "Instant City" (1968, fig. 1) by the Archigram group. By emulating the choreography of city movements, from moving objects (airships, tents, caps) to technology (cranes, refineries, robots), the filmmaker transformed the city into an audiovisual event. In this way, the framed space disappears in favor of a "moving city" in time and space. It demonstrates an impossible representation, that of a city in permanent transformation, which is only an incident in time and space. Accordingly, re-interpreted as a tool with which to detect clues in kinesthetic segments, the collage technique applied to "Instant City" ends up revising relations in order to establish an innovative re-evaluation of (archi)tectonics. In other words, architecture is replicated through the experience of its presence. In this way, the Archigram group revolutionized architectural representation and, by acknowledging the observer, challenged the status of the physical elements belonging to real space.

FROM THE REAL TO THE SCREEN ENVIRONMENT

In reversing our traditional modus operandi, strongly supported by screen immateriality, it is critical to regard the physical act of moving through space as a way of materializing relations from the real to the screen environment. On the other hand, operating ontologically from within both cinema and architecture as they develop their own practices, we can say that recorded movement becomes the "condition" of a screen, and our experience of viewing now effects a change between sight and body movement. Welcoming this ultimately relational concern, which allows the monitoring of its relationship with avant-garde film, film of the montage era, and the most recent practices, I attempted to analyze the different effects of viewing conditions on the screen. I acted from the belief that their visual languages of communicating movement against fixed space on the screen can encrypt our contemporary visual experience. Examples range from ways of transforming relations between sight and movement on a screen to reflections on glass façades. One case would be the projection of movement on an elevator's exterior glass surface (fig. 2), which appears in the form of relations between images. Although these practices are based on different conceptual roots, we can learn about the unfolding of the spatial flow by measuring the temporal progression of the images flowing sequentially through a series of successive frames on the elevator's glass surface.17 In this case, observers are faced with juxtaposed movements: the first movement direction emerges from the elevator's function of providing vertical transportation, and the second acts as a multi-screen projection.18 The effectiveness of reading the image of objects reflected from the panorama onto the glass elevator surface is based on an illusion that allows movement to be rendered in new ways against fixed space. By using the effects of reflected movement, it retains its criticality in the observer, who can experience that the displayed object no longer appears as an object, but as a structure based on light, sound and movement.
Just as the incorporation of movement runs the game of separating form from the sign of an object, its projection on the glass surface ends up transforming an object into a manifestation of the visible: the traces of movements. More precisely, reconstructing the spatial scene on the glass surface is engaged with managing an immaterial illusion to embrace a new vision of architecture. Given that movement is no longer subordinate to a certain spatial system of the urban space, but to the depth of field and frame of the two-dimensional environment, what we see on the elevator surface is not only the presentation of a new way of seeing space; rather, it gives form to a new mode of perception. In a similar fashion, searching for new ways to present the image of the city in Man with a Movie Camera (Vertov, 1929, fig. 3), the camera's own movement is augmented and multiplied as it is coupled with the city's vehicles of transport.19 Attempting to convey the idea of re-imagining cities through these film encounters, through illusory movement, the director Vertov suddenly stops the film's flow and holds a frame, displacing his viewers into a state of tranquility as a powerful epistemic break, and transporting them back again (into a reanimated state of the city) by way of montage. It is a mechanism that moves at a specific speed and rhythm, and shows traffic flows, blurred or slowed down; as a visual sign for speed, the scene refers to our internal reflection of movement. Therefore, by recognizing the kinesthetic features of motion, direction, position and rhythm found in seeing, we can identify Vertov's method of filmmaking as a direct focus on the viewer and his experience of watching a film. Respectively, by encoding our experience as an extended cinematic eye, Vertov represents our reaction, our inner reflection of the motion, a time-lapse that we produce by stepping back from this motion. As a result, it is not the city which is frozen; it is us, the viewers, who are petrified by becoming aware of this omnipresent speed of cities, surprised to recognize our individual perceptions in relation to it by blurring, at times, our personal viewing positions.

IN CONCLUSION

We have the power to choose how to perceive movement, thereby legitimizing multiple perspectives of observation and identifying the consequences that cinema has had on our perception of time, space and movement. Nevertheless, from the depiction of foreign and domestic views in early panorama films to the simulation of travelling through space, it seems to me that the very technique of filmic representation was challenged to aspire to motion. Not only do the subjects of urban views move, but their body movement is transferred from real space to the space of the screen to become part of filmic representation. As these accounts of the moving image have demonstrated, the displayed modes of representation, aimed at embodying subjective spatial and temporal mobility, were only reinforced by the affinity of film towards architecture and art throughout the twentieth century.
Intervention in aphasias with the use of augmentative and/or alternative communication

This study aimed to describe the use of Augmentative and Alternative Communication (AAC) in two cases of aphasia after stroke. The speech therapy was divided into four stages, ranging from the presentation of forms of communication for the album to the effective use of this resource. In all stages, the pictographic system Picture Communication Symbols was used, because it is a system with greater translucent iconicity. Reapplication of the tests showed improvement in oral ability, writing, reading and naming in the two participants. This study allows the conclusion that speech therapy using AAC in two cases of aphasia after stroke brought benefits to the participants' functional communication; these resources had an augmentative role, making communication more efficient, brought benefits to the rehabilitation process, and promoted the development of reading and naming skills.

Considering technological advances in healthcare and the increased possibility of survival of patients with neurological disorders and alterations, the demand for treatment of this group has been increasing. Among the consequences these patients may present are the aphasias, in which, among other things, patients may require resources that facilitate or replace communication 1. Augmentative and Alternative Communication (AAC) resources involve the use of non-verbal modes of communication to supplement or replace spoken language 2. Several types of material resources can be used for AAC. Among the studies reported in the literature, there has been a predominance of research involving children, mainly aged 3-10 years 10, which motivated the present study, whose aim is to describe the use of AAC in two aphasic patients after stroke.

CASE REPORT

This work was approved by the Research Ethics Committee in Human Beings of the School of Dentistry of Bauru - University of São Paulo, under number 052/2011. Before this study, the two individuals were already in speech therapy twice a week with trainees attending the last year of the Speech Pathology degree; during this research, two sessions per week lasting 30-45 minutes with the researcher were added.

Identification of individual 1
Individual 1 was male, 53 years old and right-handed. He had completed high school and, before the stroke, had worked as a freelance salesman; due to his communicative and motor changes, he no longer holds this profession. He was divorced and has two daughters who do not reside in the same city as him. In early 2010 he suffered an ischemic stroke and remained hospitalized for seven days. After discharge, he was referred to a nursing home, because he had lived alone before the neurological impairment and did not have a good family relationship, except with a younger cousin, who accompanied him to all appointments. According to his cousin, the participant had always had a close relationship with his friends and loved going to bars and barbecues, always showing himself to be cheerful and uninhibited. After the impairment, the patient kept the habit of going to bars and barbecues, in the company of a friend or even by himself. At the time of the research, he almost always showed a cheerful humor, but with many episodes of anxiety and impatience due to the difficulty in expressing himself verbally.

Identification of individual 2
The second individual is male, 77 years old and right-handed. He studied up to the eighth grade and is now retired. In early 2009 he suffered an ischemic stroke and had to stay in hospital for eight days. He lives with his wife and has eight children. Before the neurological impairment he had the habit of going to the family smallholding to take care of the animals, cultivate the garden and gather the family. After the impairment he rarely went out, but his children often gather on weekends at his residence. According to his wife, the participant always had a very good relationship with his children and neighbors. However, after the stroke he distanced himself somewhat from family and friends because of not being understood. He is very emotional and cries easily.

Evaluation of the individuals
To assess comprehension, the Token Test Short Form 11 was used. Both individuals had scores that rated their understanding as having some difficulty: individual 1 showed slight difficulty and individual 2 moderate difficulty. Aiming to investigate linguistic behaviors, the M1-alpha test was applied. In the listening test, the subjects showed better performance in activities involving simple words and phrases; in the task with complex sentences, out of three stimuli, the subjects got two and one right, respectively. The same pattern was otherwise observed, except that individual 2 understood no complex sentence. In the copying and dictation tests, similar performance was observed: both managed to copy the sentence but failed to transcribe the dictated words and phrases. In the last stage of the M1-alpha test, which involves reading aloud, repetition and naming, individual 2 showed better performance than individual 1, getting five stimuli right in reading aloud and repetition, and nine out of 10 in naming, while individual 1 got five, seven and two right, respectively. In relation to individual 2, it was observed that reading was preserved: he likes flipping through newspapers and magazines but, according to his wife, is unable to actually read the news. During the application of the tests and in spontaneous conversation, speech-language features classified as anomia, jargon, neologism, agrammatism and bradylalia were observed.

Speech Language Pathology Intervention
First, a visit was made to the place where each individual lives, and the family/caregiver answered an interview to obtain information about the participant's routine. An intervention plan divided into four stages was developed, as shown in Table 1.

Table 1 - Intervention steps using augmentative and alternative communication
Step 1: Work with chips of different semantic classes (e.g., routine actions, food, clothing, transportation).
Step 2: Work involving syntactic development during the assembly of song lyrics.
Step 3: Making of the communication album, in joint work between researcher and participant.
Step 4: Use of the communication album outside the therapeutic environment during conversational situations.
For this research the Picture Communication Symbols (PCS) were used, because this is a system with high iconicity 12 and it has been one of the most commonly used with aphasic patients 5,13. The figures used throughout the study were made using version 6 of the Boardmaker® software, produced and marketed by Mayer-Johnson®. Seeking greater iconicity, the figure proposed by the software was first presented to the participant and, if he did not recognize it, a figure that would represent the meaning was searched for on a site outside the program. In steps 1 and 2, the chips were fabricated in size 6.5 cm × 6.5 cm, printed in color on bond paper, laminated and used individually. Chips of size 3 cm × 3 cm, printed in color on plain paper, laminated and then bound, were used for the making of the album (step 3). Considering the motor condition of the two participants, the chosen form of indication was direct pointing with the index finger.

Stage 1
Individual 1 showed great interest in the proposed activities, demonstrated by a very active attitude towards the choice of chips, always saying when some content was missing or requesting the exchange of tokens when he thought one did not represent what he wanted to communicate. During the activities, he always tried to name the figures, but most attempts were unsuccessful, and he would then ask the researcher, through gestures, to speak the beginning of the word. In some situations, only the articulation of the first phoneme helped, while in others the production of the whole first syllable was required. In the intervention sessions, individual 1 also used writing to communicate something that he could not say verbally or for which there was no figure; however, this writing was always laborious, since he had to write with his left hand because of right-sided hemiplegia. To illustrate the concomitant use of writing alongside the chips, it is worth reporting the session in which pieces of clothing were worked on, and the researcher proposed the activity of packing a suitcase, simulating travel to locations with extreme temperatures. After selecting the chips with the appropriate clothing for each environment, the subject first tried to tell the researcher something through gestures and the emission of sounds; after a few unsuccessful attempts, he asked for pencil and paper and made the drawing shown in Figure 1. For each drawing, the individual first looked at the researcher, as if asking her to interpret what he was doing, and spoke what was represented. Through the joint drawing and construction of meaning, the researcher understood that "the drawing symbolizes Brazil, where there are very different temperatures, such as in the Northeast and South regions, and that in the state of Pará it rains every day between 3 pm and 5 pm". Individual 2, during the entire period of this research, presented more passive behavior when compared to individual 1. From the survey of his routine and weekly schedule, it was observed that individual 2 had no activity outside his residence other than coming to the therapy sessions; his daily routine was limited to actions such as watching television and listening to music.
During the sessions, though, he was always reading the captions of the chips, especially when he did not recognize them. The reading occurred in a very laborious manner, but effectively, since it assisted the recognition of the meaning. It was observed that, as the therapeutic sessions advanced, the patient showed an increase in reading and became faster at it.

Step 2
This step consisted first of putting together the lyrics of the song "Song of America" by the singer Milton Nascimento with the use of tokens. The song was chosen by the researcher because, according to the individuals' caregivers, both liked this kind of song. The lyrics were reproduced on 42 tokens (Figure 2) and delivered to the participants so that, after listening to passages of the music, they would organize the tokens according to the lyrics. Individual 1 quite enjoyed the activity and could reproduce the song with the use of the tokens. As he already knew the words, the participant sang the song while assembling it and, at the end of the session, could sing it more easily by using the tokens. It is noteworthy that this participant sometimes sang with jargon, neologisms, and phonological and phonetic paraphasias, but more intelligibly than during spontaneous speech. At the beginning of this activity, individual 2 reported that he already knew the song. During the organization of the tokens he knew the lyrics but had difficulty locating the tokens; the researcher therefore separated them into smaller groups to facilitate localization but, even with the decreased number of tokens, the participant needed broad help from the researcher.

Step 3
Step 3 was reserved for the making of the communication album, in joint work between the researcher and the participants. From the tokens worked on in step 1, the participants selected which pictures the album should contain and also checked whether there was a need to add new tokens. After completion, the communication album was printed in color on glossy paper, bound and delivered to the participants, along with the individual tokens that had been delivered earlier. As individual 1 resides in an institution, the researcher oriented not only the cousin, his nearest relative, but also the coordinator and the caregivers of the nursing home on the use of the communication album. During the orientation, a few points already discussed at the beginning of the research were reinforced.

Step 4
Two activities were developed by each subject. In the first, a visit to a cafeteria was carried out and the participant had to ask for a drink and food making use of the AAC previously structured for this purpose in the earlier steps. In the second situation, he had to go to the reception of the Speech-Language Pathology Clinic where this research was developed and inform the front desk of the day and time of his next appointment. Individual 1 completed the first proposed activity successfully, being able to buy the food and the drink and requiring the communication album only for the choice of the drink. For the purchase of food, the participant was able to pronounce the word "come" ("eat"), and when the clerk asked what food he wanted, he pointed to his preference, since the display case with the food was on the counter. When it was time to request the drink, the clerk did not understand his request; this time he used the communication album, which provided the choice of soda, his second choice. In the second proposed activity, the subject needed to make use of the communication album throughout, since he can only pronounce the weekdays during automatic speech. Again, the use of the communication album was efficient and provided better understanding by the secretary as to the day and time at which the participant would return to the clinic. In the first activity performed by individual 2, the lack of a token representing a light soft drink was observed: the participant managed to order food and drink, but the clerk did not understand what he wanted to drink, and the participant did not have this item on his album. After this activity, a token representing this item was added. During Activity 2, individual 2, like individual 1, needed the album to communicate with the secretary. First the subject sought to convey the information through oral communication, but the clerk did not understand what he had said, since his speech came out with neologisms. The participant then used the schedule token of the communication album to indicate the day and time arranged with the researcher for the next appointment.

A questionnaire developed by the researchers to verify the patients' communicative interaction with the family member/caregiver responsible for them was also applied. For individual 1, the questionnaire was applied to the coordinator of the nursing home where he lives; for individual 2, it was applied to his wife. The responses demonstrated that the individuals communicate with the people around them, and that certain everyday situations, such as going to the bathroom or drinking something, do not need to be requested because they are conducted independently. Individual 1 also used gestures and writing to communicate. However, the individuals could not always make themselves understood or convey their messages, in some situations causing failures in communication and interaction with the environment. After the intervention, the questionnaire was reapplied, and it could be seen that the individuals used the communication album when verbal communication was not efficient. The wife of the second individual reported that a marked change was the absence of crying in situations where the speaker did not understand his message, a behavior that used to happen frequently before the intervention.

DISCUSSION
The PCS was selected as the tool for this study because it is a material with high iconicity 12, and a system with this feature can improve fluency in dialogue, since the symbols are familiar, easily recognizable by the patient, and part of his routine 14. The choice of organizing the symbols in a communication album came from the needs of the two individuals of this study, since such a material is low-cost, lightweight and small enough to make it possible for the participants to transport it to the places they needed to attend. In particular, for individual 1, the material also had to fit in a pants pocket, since this patient has hemiplegia on the right side but has autonomy in using public transportation alone, thus needing the left hand always free at these times. The features highlighted for selecting this type of material are also reported by other researchers 4,8,15-17.

The communicative behavior manifested by individual 1, characterized by the combined use of several symbols (graphical or pictorial), was effective during communication. Similar results were reported in other studies 14,17,18, and the use of a varied set of symbols is referred to as active communicative behavior, not an inability to use AAC 17. When working with AAC, we should not insist on developing a system of particular signals, but rather aim to develop an effective global communication 12.

The construction of the communication album should be a joint effort between the therapist, the patient and the family 8,19. In this study, the family had no active role during the speech therapy itself; however, it was from an initial interview with the family/caregivers that the researcher selected the initial vocabulary for each subject. Later in the study, the family had a more active role, even during the making of the communication album in the therapeutic sessions, since it is believed that greater family involvement during this step facilitates the use of the resource in the home environment. In any case, the work of obtaining information about the routine in which the participant is inserted is fundamental and is in accordance with the guidelines of the literature 6, which stresses the importance of knowing the various aspects of the daily routine and the physical and social environment of the participant, since these shape his mobility needs.

As observed in other studies 14,20, symbols extend the possibilities of communication, and the board serves as a support for speech, enlarging and bringing out themes for dialogue. This behavior was observed mainly in one subject, who has more active daily activities and showed greater use during situations of dialogue. For the two participants in this study, the AAC had an augmentative rather than an alternative function. The two subjects could communicate verbally, but not in a fully effective way, causing miscommunication between the subjects and their interlocutors. The use of AAC in aphasic patients is described in other studies as a tool that facilitates communication, assists in the rehabilitation of these individuals, and provides support to speaking and writing skills 5,14.

The descriptive questionnaire of communication sought to establish the communicative behavior of the individuals outside the therapeutic environment. A change in the behavior of the two subjects was observed after the start of the intervention: both started carrying the communication album for use when they were not understood by the interlocutor. According to the report of the coordinator of the institution of subject 1, using the album did not inhibit him from trying to communicate verbally, a behavior that breaks the myth that using a communication album inhibits the use of oral communication and agrees with research showing that the use of AAC does not inhibit the development of such communication 1,14. In the case of individual 2, his wife responded in the descriptive questionnaire of communication that he no longer cried when he was not understood, which may be associated with decreased anxiety and greater efficiency in conveying his message. The use of AAC in aphasia, especially in the first months after onset, can help decrease the anxiety of family and patient when trying to communicate 12.

In a study 9 conducted with three individuals with non-fluent aphasia due to stroke, it was concluded that intervention using AAC brought improvements in the quality and effectiveness of communication of all participants, especially in visual, auditory and symbolic understanding. Improvement in the communicative independence of the subjects was also reported, which made the authors classify AAC as a valuable approach for obtaining functional improvement in the communication of this group of patients.

CONCLUSION
The speech therapy with the use of AAC in the two cases of aphasia after stroke brought benefits to the functional communication of the participants. For both individuals, the AAC approach had a facilitating role, making communication more efficient, bringing benefits to the rehabilitation process and promoting the development of reading skills. Regarding the functional aspects of communication, family members/caregivers reported that, after the intervention, the participants were already using the resources in their family environment, along with other forms of communication. The two participants remain in the rehabilitation process, using and improving the use of AAC. It is suggested that studies in the area be continued and that this methodology be replicated in studies covering a greater sample size and variety of etiologic factors of aphasia, in order to enable confirmation of the data found in this study.

Figure 1 - Drawing made by individual 1.
Vitamin A status in pregnant women in Iran in 2001 and its relationship with province and gestational age

Background: Vitamin A deficiency is considered one of the public health problems among pregnant women worldwide. Population-representative data on vitamin A status in pregnancy have not previously been published from Iran. Objectives: The aim of this study was to publish data on vitamin A status in pregnant women in all the provinces of Iran in 2001, including urban and rural areas, and to describe the association of vitamin A status with maternal age, gestational age, and parity. Design: This descriptive cross-sectional study was conducted on 3,270 healthy pregnant women from the entire country, 2,631 with gestational age ≤36 weeks and 639 with gestational age >36 weeks. Vitamin A status was determined in serum using high-performance liquid chromatography. Results: Retinol levels corresponding to deficiency were detected in 6.6% (<0.36 µmol/L), and 18% had insufficient vitamin A levels (≥0.36–<0.7 µmol/L). A suboptimal level of serum retinol was observed in 55.3% of the pregnant women (0.7–1.4 µmol/L). Only about 20% of the women had optimal values (>1.4 µmol/L). The level of serum retinol was lower in older pregnant women (p=0.008) and at higher gestational age (p=0.009). The highest vitamin A levels were observed in pregnant women in the central areas of Iran and the lowest in those in the southern areas. Conclusions: The vitamin A status was good in 2001 but should be closely monitored in the future. About 25% of pregnant women had a vitamin A status diagnosed as insufficient or deficient (<0.7 µmol/L). The mean serum retinol decreased as gestational age increased. The clinical significance of this finding should be further investigated, followed by a careful risk-group approach to supplementation during pregnancy.

Vitamin A deficiency is considered one of the main public health problems in developing countries, where it is one of the main causes of high morbidity and mortality. Vitamin A is important for several functions, including vision, reproduction, growth, immunity, maintenance of epithelial tissue, and regulation of cell proliferation and differentiation (1,2). In addition, vitamin A effects are critical during periods of rapid cellular growth and differentiation, such as during pregnancy, when it is supplied by the mother to the fetus (1). Vitamin A comes from two sources: animal sources, which provide retinol, and plant sources, which provide beta-carotene; the body converts beta-carotene to vitamin A. Vitamin A can have significant toxicity in overdose, but provitamin A carotenoids do not cause vitamin A toxicity even in large doses, because the body converts only what it needs. Good food sources of vitamin A are liver, eggs, red and orange fruits, red palm oil, and green leafy vegetables (3), while in Iran the main vitamin A intake comes from vegetables (44%), meat and eggs (18%), fruit (9%), dairy products (12%), and fat (3%) (4). The majority of dietary vitamin A is stored in the liver in well-nourished people. Vitamin A restriction therefore does not immediately lead to deficiency symptoms; these emerge slowly, since it takes 1 to 2 years for healthy adults, and less for children, to empty the liver stores (5).
The major cause of vitamin A deficiency is insufficient intake, which is related to sociocultural and socioeconomic factors such as poverty, low access to vitamin A-fortified food (6,7), low education, ethnicity, dietary practices and beliefs, and climate conditions (8). A low vitamin A status in pregnant women can also be attributed to multiple pregnancies and minimal antenatal care (9). In the World Declaration and Plan of Action for Nutrition (10) from 1992, 159 governments pledged to undertake all efforts to eliminate vitamin A deficiency within the decade, a pledge which was not fulfilled. Micronutrient deficiencies are one of the main public health concerns, and vitamin A deficiency is one of the most important nutritional problems because of its health consequences and wide geographic distribution. Globally, about 15% (19 million) of pregnant women have been estimated to be vitamin A deficient (3), and one study suggested that about 2.5–5 million women worldwide probably suffer from night blindness (11). Low vitamin A status during pregnancy can result in a low vitamin A status of the infant at birth and in early life. Vitamin A deficiency is strongly associated with depressed immune function and higher morbidity and mortality due to infectious diseases such as diarrhea, measles, and respiratory infections in children (12). Meanwhile, teratogenic effects of excessive vitamin A intake (in its preformed state, retinol) during pregnancy have been shown: birth defects, such as the retinoic acid syndrome, include those affecting the central nervous system and the cardiovascular system, thymus malformations, and neural tube defects (13,14). The estimated average requirement (EAR) for vitamin A in pregnant women is, according to the Dietary Reference Intakes from 2001, 770 µg retinol activity equivalents (RAE)/day for women aged above 19 years and 750 µg RAE/day for pregnant women aged 14 to 19 years (15). According to the Dietary Reference Intakes, the tolerable upper intake level (UL) for adults is set at 3,000 µg RAE/day of preformed vitamin A (15). Deficient and insufficient serum concentrations of vitamin A are defined as <0.36 and ≥0.36–<0.7 µmol/L, respectively, and suboptimal and optimal concentrations of vitamin A as ≥0.7–<1.4 µmol/L and ≥1.4 µmol/L, respectively (16). According to the World Health Organization (WHO) (10), three categories of national prevalence of vitamin A deficiency (where the blood level is less than 0.7 µmol/L) have been proposed: mild ≥2–10%, moderate 10–20%, and severe ≥20% on a population basis. These categories are suggested to indicate the severity of the problem and thereby to show the need for making vitamin A nutrition a national priority where needed. The prevalence of vitamin A deficiency (less than 0.7 µmol/L) in cord blood in a previous report from Tehran was 21% in a population-representative material (17). From a public health perspective, the global concern to prevent and eliminate vitamin A deficiency is mainly due to its serious consequences and multiple effects on human health, especially the health of infants and women of reproductive age in developing countries (11). The aim of this study was to publish data on vitamin A status in pregnant women in all the provinces of Iran, including urban and rural areas, and to describe the association of vitamin A status with maternal age, gestational age, and parity.
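To make the cut-offs above concrete, the following is a minimal sketch in Python of a classifier for a serum retinol value; the function name and the example value are illustrative, not taken from the study:

```python
def retinol_category(retinol_umol_per_l: float) -> str:
    """Classify serum retinol (µmol/L) using the cut-offs cited above (16)."""
    if retinol_umol_per_l < 0.36:
        return "deficient"      # < 0.36 µmol/L
    elif retinol_umol_per_l < 0.7:
        return "insufficient"   # >= 0.36 and < 0.7 µmol/L
    elif retinol_umol_per_l < 1.4:
        return "suboptimal"     # >= 0.7 and < 1.4 µmol/L
    else:
        return "optimal"        # >= 1.4 µmol/L

# Example: a value near the national mean reported later (~0.99 µmol/L)
# falls in the suboptimal band.
print(retinol_category(0.99))  # -> "suboptimal"
```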
Methods
Iran can be divided into 11 regions with 28 provinces (Supplementary file), based on ethnographic, demographic, epidemiologic, and socioeconomic factors (18). This cross-sectional study was conducted during the year 2001 in Iran, as part of the micronutrient survey that covered the whole country and was conducted by the Ministry of Health (MOH) of Iran. The study population was households, and the sampling method was unequal cluster sampling with unequal household sizes randomly selected throughout the country. The clusters were sampled relative to the size of the urban and rural population of each province in all 11 regions. The center of each cluster was selected by systematic random sampling. There were 880 clusters in all 11 regions, 504 urban and 376 rural. Before the study was conducted, a team of health workers was selected in each health care center in urban and rural areas. Each team was assigned to recruit at least five people for the study from each subregion. Pregnant women were informed about the study procedures; those assigned gave informed consent and accepted to give blood samples and complete questionnaires. These pregnant women were invited in their own provinces, by written invitation, to the nearest health care center belonging to their region. If a woman was not able to go to the health care center by the due date, the survey staff would go to her house and invite her to participate in the study. As an inclusion criterion, it was important that the pregnant women were otherwise healthy and did not have any kind of chronic disease, such as diabetes or heart disease. The blood samples were collected carefully by trained health workers. The total number of pregnant women who participated in the study was 4,368 from all regions of Iran, but data on vitamin A levels were only available for 3,270 women. A questionnaire was designed by the MOH in Iran to collect information on the mothers' general characteristics, health status, supplements if used, and number of abortions, pregnancies, and deliveries. From each participant, a minimum of 6.5 mL of venous blood was collected, kept on dry ice, and sent to an accredited district laboratory for centrifugation. The samples were coded and kept at −20°C until analysis. All samples were transported in a refrigerated vehicle to an accredited laboratory in Tehran. Vitamin A was analyzed as retinol using high-performance liquid chromatography (HPLC) at the WHO collaborating laboratory in Tehran, after the samples had been transported there in a cold-chain vehicle.

Statistical analysis
Data for 3,270 pregnant women were analyzed with SPSS version 17.0 (SPSS Inc., Chicago, IL, USA). A one-way ANOVA was carried out to test for significant differences between the mean vitamin A levels of groups differing in age and in gestational age. Vitamin A was normally distributed in the study population. Post hoc tests were computed to investigate differences between the categories. A one-way ANOVA was also computed to test the effect of parity on vitamin A status. A two-way ANOVA was carried out to test the impact of age and area (rural and urban) on the levels of vitamin A in pregnant women. Mean (standard deviation, SD) is given if not otherwise indicated. The level of significance was set at p ≤ 0.05.

Ethical approval
Approval for this study was obtained from the Ethics Committee of the MOH in Iran. Informed consent was obtained from all women.
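For readers who want to reproduce this kind of analysis outside SPSS, below is a minimal sketch in Python of the one-way ANOVA and Pearson correlation described above; the group means, sample sizes, and age bands are hypothetical stand-ins, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical serum retinol values (µmol/L) for three maternal age bands.
under_23  = rng.normal(1.05, 0.4, 200)
age_23_30 = rng.normal(1.00, 0.4, 200)
over_30   = rng.normal(0.95, 0.4, 200)

# One-way ANOVA: does mean retinol differ across the age groups?
f_stat, p_value = stats.f_oneway(under_23, age_23_30, over_30)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # significant if p <= 0.05

# Pearson correlation between age and retinol (illustrative arrays).
ages = np.concatenate([rng.uniform(14, 23, 200),
                       rng.uniform(23, 30, 200),
                       rng.uniform(30, 47, 200)])
retinol = np.concatenate([under_23, age_23_30, over_30])
r, p = stats.pearsonr(ages, retinol)
print(f"r = {r:.3f}, p = {p:.3f}")
```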
Results The mean age of the participating women was 26 years, ranging from 14 to 47 years. The mean parity was 2, ranging between 1 and 12 deliveries. Gestational age ranged from 22 to 42 weeks at the time of blood sampling. The number of included pregnant women in the 11 regions varied between 229 and 330 (Table 1). Mean (SD) values of serum vitamin A ranged from 0.77 (0.46) to 1.12 (0.43) µmol/L. Table 1 shows the regional as well as national mean (SD) levels of vitamin A and the prevalence rates within the categories deficient, insufficient, suboptimal, and optimal. The combined prevalence rates within the categories deficient and insufficient (<0.7 µmol/L) are also shown. The highest mean plasma retinol level (1.12 [0.43] µmol/L) was found in central Iran (region 6). Only 6.6% of the women showed deficiency (<0.36 µmol/L), while 18.3% had insufficiency (0.36 to <0.7 µmol/L), indicating that the population prevalence below 0.7 µmol/L for the whole nation is well above 20% (24.9%) and should therefore be characterized as severe according to the WHO categories. In some parts of the country (regions 3 and 11) the prevalence below 0.7 µmol/L is over 30%, with one region (region 7) reaching 45.9%. In Fig. 1, the percentage distribution of vitamin A status is shown for the 11 regions. Deficiency was mainly seen in regions 2, 5, 7, 8, and 11. The highest percentages of optimal vitamin A status were seen in regions 2, 4, 6, 9, and 11. More than half of the pregnant women had suboptimal serum vitamin A levels, while vitamin A deficiency was seen in less than 7% of all pregnant women in the study, and insufficiency in 18%. No statistical differences were observed between rural and urban areas. The mean (SD) vitamin A levels in rural and urban areas were 0.98 (0.4) and 1.00 (0.4) µmol/L, respectively. Vitamin A in each age group was slightly lower in rural than in urban areas. A two-way ANOVA showed that the difference in vitamin A between the two areas was not statistically significant. However, there was a statistically significant main effect of age on vitamin A level (p = 0.005), although the effect size was rather small (eta squared = 0.003) (Fig. 2). Vitamin A levels in pregnant women decreased with increasing maternal age (ANOVA, p = 0.005) (Table 2). Women who were less than 23 years old had higher levels of serum retinol than the older ones. There was a weak negative but significant correlation between age and vitamin A status (Pearson correlation, r = −0.047, p = 0.007). Table 3 shows levels of vitamin A in pregnant women at different gestational ages. The mean serum retinol decreased as gestational age increased (p = 0.009, one-way ANOVA). There was no significant relation between parity and vitamin A status (F = 0.79, p = 0.45), as shown in Table 4. Discussion This study showed that pregnant women in 2001 in this nationally representative study from Iran had a low vitamin A status, since 25% of pregnant women had insufficient or deficient vitamin A levels. Using the WHO cut-offs for national prevalence of vitamin A deficiency, Iran should by this standard be considered a country with a severe prevalence of vitamin A insufficiency (<0.7 µmol/L) in 2001.
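The eta-squared effect size quoted above (0.003) is the ratio of the between-group sum of squares to the total sum of squares. A brief sketch of that computation, with hypothetical values, is shown below.

```python
import numpy as np

def eta_squared(groups):
    """Effect size for a one-way ANOVA: SS_between / SS_total.
    `groups` is a list of 1-D arrays, one per group."""
    all_values = np.concatenate(groups)
    grand_mean = all_values.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((all_values - grand_mean) ** 2).sum()
    return ss_between / ss_total

# Hypothetical retinol values (µmol/L) for two areas; a tiny eta squared,
# like the 0.003 reported above, means the factor explains almost none of
# the variance even when the p-value is small in a large sample.
rural = np.array([0.95, 1.00, 0.92, 1.05, 0.98])
urban = np.array([1.02, 0.99, 1.01, 0.97, 1.04])
print(f"eta squared = {eta_squared([rural, urban]):.3f}")
```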
In contrast, reports from the north of Iran, Marand district, in 2002 showed that 2.1% of childbearing women had vitamin A deficiency (<0.7 µmol/L), but 18% had values in the insufficient or suboptimal range (19). The nationwide prevalence of deficient levels (<0.36 µmol/L) was 6.6%, similar to the average global prevalence of maternal vitamin A deficiency in developing countries (7%) (20). The highest prevalence of deficiency to our knowledge was reported from Nepal (31%) (21), about five times higher than in Iran. China had the lowest prevalence of deficiency, only 2% in 2001 (21, 22). In contrast, data from WHO in 2005 showed that on average 11.6% of pregnant women in Europe had vitamin A deficiency, ranging from 2.6 to 20.6% over the past 15 years (10). Maternal vitamin A deficiency has also been correlated with reduced intra-uterine growth rate and birth weight, and higher infant mortality (23–25). The prevalence of vitamin A deficiency we found indicates a slight improvement in vitamin A status, from severe to moderate deficiency, between 1989 and 2001 (10): a general severe deficiency of vitamin A in cord blood in male infants in 1989 had developed into moderate deficiency by 2001 in Iran (17). This only partial improvement occurred despite improved prenatal health care facilities, which have in the meantime been organized in each province, supplying multivitamin (vitamins A and D) supplements free of charge through the Primary Health Care system. Supplementation of vitamin A to infants from 2 weeks of age until 2 years by the MOH in the past decades may be responsible for the previous finding that vitamin A deficiency and insufficiency were rare in infancy (18). The higher prevalence of insufficient vitamin A levels in the south of Iran might be associated with the lower contribution of animal and milk products to the total energy intake, 6 and 1%, respectively, in comparison with central and northern Iran, where animal and milk products contributed about 8 and 4%, respectively, of the total energy intake (26). Low access in the south of Iran to main sources of vitamin A (dairy products, vegetables, and fruits) may be related to the hot and dry climate in this region (27). One study in the northwest of Iran showed that the important sources of vitamin A in child-bearing women were nuts and green leaves, products that were not typical of other populations (19). Vegetables and fruits provide carotenoids equivalent to more than 50% of vitamin A intake (4). The MOH reported that the 'intake to need' ratio of vitamin A shows a scattered provincial distribution, from 60% in Sistan-Balouchestan to 100% in East Azarbayejan (26). This is in accordance with the consumption of the main sources of vitamin A in the south of Iran; that is, Sistan-Balouchestan is low ranking in both urban and rural areas. This might also be related to socioeconomic status, drought, and low production of fruits and vegetables. According to the UN Food and Agriculture Organization (FAO), regarding nutritional intake in Iran (2002), the average daily intake of vitamin A was lower in villages than in towns, but insufficient in both, based on lower intake of animal products, fruits, and vegetables (26). In this study, the level of vitamin A deficiency was almost the same in both areas. The association between vitamin A status and age has been reported earlier (10, 21) and has been related to factors such as dietary patterns and lifestyle that influence nutritional status (28).
The current study revealed a significant correlation between gestational age and serum retinol. Similarly, among Nepalese pregnant women, the prevalence of vitamin A deficiency was higher in late pregnancy, where it led to night blindness (9). This might be explained by an increased transfer of vitamin A to the fetus during late pregnancy. In contrast to our finding of a negative association of vitamin A with age, it has been reported elsewhere that night-blind women were more likely to be teenagers; however, the risk again increased with age among women aged above 30 (9). The difference could be explained by the fact that the Nepalese study reported data based on whether the women had night blindness during pregnancy or not, while our study reported serum retinol levels. In addition, we had no records of socioeconomic status, which could be an influencing factor here, as shown in the previously mentioned study (9). Regarding the relation of serum retinol to parity, one study in Bangladesh similarly showed no correlation between parity and serum retinol (29). A limitation of our study was that investigations at the regional level in the 11 included regions did not include data on sociodemographic differences. Conclusions This study illustrates that the status of vitamin A was good in 75% of pregnant women in Iran. As the WHO goal is to eliminate vitamin A deficiency globally, this study shows that Iran has historically achieved a relatively good result in the drive to eliminate vitamin A deficiency in some provinces, but not at the national level. Therefore, the problem needs to be closely monitored. Since these data do not represent the current status of vitamin A in the country, further information on vitamin A status in pregnant women is required for strict surveillance of this important problem and to ensure that the vitamin A status continues to improve in Iran.
Diagnostic accuracy of DWI in patients with ovarian cancer Abstract Background: Diffusion weighted imaging (DWI) has recently been developed for identifying different malignant tumors. In this article, the diagnostic accuracy of DWI for ovarian cancer was evaluated by synthesis of published data. Methods: A comprehensive literature search was conducted in the PubMed/MEDLINE and Embase databases for studies on the diagnostic performance of DWI for ovarian cancer published in English. Methodological quality was evaluated following the Quality Assessment for Studies of Diagnostic Accuracy 2 (QUADAS 2) tool. We adopted the summary receiver operating characteristic (SROC) curve to assess the accuracy of DWI. Results: Twelve studies including 1142 lesions were analyzed in this meta-analysis to estimate the pooled Sen (sensitivity), Spe (specificity), PLR (positive likelihood ratio), and NLR (negative likelihood ratio), and to construct the SROC curve. The pooled Sen and Spe were 0.86 (95% confidence interval [CI], 0.83–0.89) and 0.81 (95%CI, 0.77–0.84), respectively. The pooled PLR and pooled NLR were 5.07 (95%CI, 3.15–8.16) and 0.17 (95%CI, 0.10–0.30), respectively. The pooled diagnostic odds ratio (DOR) was 35.23 (95%CI, 17.21–72.14). The area under the curve (AUC) was 0.9160. Conclusion: DWI had moderately excellent diagnostic ability for ovarian cancer and promises to be a helpful diagnostic tool for patients with ovarian cancer. Introduction Ovarian cancer is the fifth most common cause of cancer-related death among women in both developing and developed countries, causing approximately 125,000 deaths annually. [1,2] Ovarian cancer occurs most frequently among women in the perimenopausal period, with only a few children and adolescents affected. Since potentially curable ovarian cancers often do not produce any symptoms, [3][4][5] early clinical diagnosis is very difficult, and ovarian cancer patients often present with an advanced stage at initial diagnosis. It is estimated that about 50% to 60% of deaths in ovarian cancer patients are associated with local progression. Up to 10% of ovarian cancer patients suffer from distant metastases, including to the breast, gastrointestinal tract, and reproductive tract. [6] Although aggressive surgery combined with chemotherapy has resulted in prolonged remission for ovarian cancer patients, most women with advanced disease have a poor prognosis. [7] The 5-year survival of early-stage patients with ovarian cancer exceeds 90%, while only 21% of advanced-stage patients survive 5 years after first diagnosis. [8] Thus, new diagnostic techniques are indispensable to detect ovarian cancer early and ultimately to inform treatment decisions aimed at improving the quality of life and survival of ovarian cancer patients. [9,10] A variety of diagnostic methods have been adopted for ovarian cancer. Color Doppler ultrasound and computed tomography (CT) are commonly used imaging techniques for ovarian cancer diagnosis. [11] Cancer antigen 125 (CA125), a serum biomarker of ovarian cancer, has high specificity for early-stage disease (96–100%), but its sensitivity is poor. [12][13][14] Magnetic resonance imaging (MRI) has high resolution for soft tissues and can clearly display anatomic relationships. To date, MRI tends to be an accurate imaging technique for ovarian cancer because it is noninvasive, carries no risk of radiation exposure, and requires no patient preparation. [15] MRI is substantially better than ultrasonography and CT.
[16] DWI is a newly developed magnetic resonance functional imaging technique based on the movement of water molecules rather than structure. [17] Malignant tumors are composed of randomly organized tumor cells, and the free movement of water molecules inside a dense malignant mass is hindered. The inhibited diffusion of water is attributed to hypercellularity; [18,19] thus, DWI can provide unique information on tissue structure through evaluation of tissue cellularity. [20] The apparent diffusion coefficient (ADC) is calculated quantitatively to measure diffusion ability, [21] and in general malignant lesions present lower ADC values compared with benign lesions. DWI has been used for early diagnosis of ischemic cerebral infarction over the past decade, [22,23] but research concerning cancer is now rapidly expanding and a growing amount of data is being published. It has been reported that DWI has desirable diagnostic accuracy for lung cancer, pancreatic cancer, and prostate cancer. [24][25][26] ADC values are employed to differentiate between malignant and benign lesions, and in general the former have significantly lower ADC values. A series of studies have assessed the performance of DWI for the diagnosis of ovarian cancer. However, the diagnostic accuracy of DWI in detecting ovarian cancer has varied because of factors such as field strength, imaging parameters, disease staging, and so on. This study aimed to evaluate the diagnostic performance of DWI in detecting ovarian cancer by synthesis of published data. Search strategy Our meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations. A systematic literature search was conducted independently by 2 investigators in the PubMed/MEDLINE and Embase databases for studies published before January 2016, without other restrictions. We used the following search terms: "ovarian cancer or ovarian tumor or ovarian neoplasm" and "DWI or diffusion weighted Imaging or DW imaging." Manual searches were also performed for additional relevant studies. As this was a meta-analysis, no ethical approval was required. Eligibility criteria and study selection Two investigators, Xia Yuan and Yan Tie, screened all abstracts and checked relevant full texts independently. Studies were enrolled in the meta-analysis if they satisfied the following criteria: the study adopted DWI in patients to determine the benignity or malignancy of ovarian masses; the study used histopathology of biopsy or surgery specimens as the reference standard; and the study provided sufficient data to calculate true-positive (TP), false-positive (FP), false-negative (FN), and true-negative (TN) values. Studies were excluded from the meta-analysis if they met any of the following criteria: the study did not involve ovarian cancer; the study did not provide complete and available data; the study was of another research type, such as a review, letter, meeting abstract, or case report; or the study's sample size was fewer than 10 patients. Data extraction and quality assessment The same 2 investigators who conducted the literature searches extracted the relevant data independently. A third reviewer was responsible for resolving disagreements. To perform the accuracy analyses, the following data items were extracted from each study: the name of the first author, year of publication, country of origin, number and age of subjects, b values, techniques, and MRI field strength. For each study, 2 × 2 contingency tables were obtained with TP, FP, TN, and FN results.
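Under the standard mono-exponential DWI model, S(b) = S0·exp(−b·ADC), the ADC can be estimated from signal intensities measured at two b-values. The sketch below (with hypothetical signal values, not drawn from any included study) illustrates the calculation.

```python
import math

def adc_two_point(s0: float, sb: float, b: float) -> float:
    """Estimate the apparent diffusion coefficient from two DWI signals
    under the mono-exponential model S(b) = S0 * exp(-b * ADC).
    `b` is in s/mm^2; the result is in mm^2/s."""
    return math.log(s0 / sb) / b

# Hypothetical signal intensities at b = 0 and b = 1000 s/mm^2:
adc = adc_two_point(s0=1000.0, sb=350.0, b=1000.0)
print(f"ADC = {adc * 1e3:.2f} x10^-3 mm^2/s")  # ~1.05, within the range reported for lesions
```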
If diagnostic accuracy was assessed by different observers, only 1 contingency table, from the most experienced observer, was extracted or reconstructed. The quality of the relevant studies was examined according to QUADAS-2, which covers 14 items scored "yes" if done, "no" if not done, or "unclear" if uncertain. [27] The quality assessment was performed by Xia Yuan and Yan Tie independently. Statistical analysis With the TP, TN, FP, and FN values from the extracted 2 × 2 contingency tables, we quantified the pooled Sen, Spe, LR, and DOR with 95% confidence intervals (95%CI) to evaluate the diagnostic accuracy of DWI for ovarian cancer. An SROC curve was also obtained to describe the interaction between Sen and Spe, and the area under the curve (AUC) was calculated to assess the diagnostic ability of the test. [28] The heterogeneity between the enrolled articles was estimated statistically using the Q statistic of the Chi-squared test and the inconsistency index (I²), with I² > 50% indicating significant heterogeneity. [29] If so, a random effects model was adopted; [30] otherwise, the pooled analysis was performed using a fixed effects model. [31] Statistical analyses were carried out with Meta-DiSc statistical software version 1.4 (XI Cochrane Colloquium, Barcelona, Spain) and Stata software version 11.1 (STATA Corporation, College Station, TX). Publication bias The Deeks funnel plot asymmetry test was used to assess publication bias in Stata 11.0, with P > .05 indicating the absence of potential publication bias. [32] 3. Results Literature search and selection of studies The initial systematic literature search of the PubMed/MEDLINE and Embase databases yielded 169 relevant studies, of which 12 articles were finally identified. Thirty-nine full-text articles were reviewed, and ultimately 27 studies were excluded. Thus, 12 studies [33][34][35][36][37][38][39][40][41][42][43][44] were included in our final dataset for the meta-analysis. The flowchart of study selection is shown in Fig. 1. Table 1 summarizes the main characteristics of the included studies, and Table 2 summarizes the imaging features of each study. Study characteristics In the 12 studies included in the meta-analysis, a total of 1142 examinations were evaluated by DWI. We used histopathologic findings as the reference standard for the final result of DWI for ovarian cancer in all 12 studies. Of the 12 studies, 5 used a 3T MRI scanner, with the others using a 1.5T MRI scanner. Typical b-values for imaging were 0, 500, 800, and 1000 s/mm². Two methods were adopted to identify malignant lesions: one was to visually identify high signal intensity (HSI) areas, and the other was to quantitatively calculate the ADC value from a region of interest on the images. In 2 of the 12 studies, malignant lesions were identified by both HSI and ADC value, and 2 used the method of HSI alone. The remaining articles identified lesions only by ADC value; one of these calculated the ADC entropy instead of the mean ADC. ADC values of malignant lesions ranged from 0.878 to 2 × 10⁻³ mm²/s, and those of benign lesions ranged from 1.13 to 1.9 × 10⁻³ mm²/s. In general, malignant lesions had a lower ADC value. Assessment of study quality Detailed information about the QUADAS questionnaire for all enrolled studies is shown in Table 3. The overall quality of the studies was favorable, with all articles fulfilling 9 or more of the 14 items. Assessment of publication bias The result of the Deeks funnel plot asymmetry test revealed that no publication bias was observed (P = .6).
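For orientation, the sketch below shows how the pooled indices and the I² heterogeneity statistic derive from per-study 2 × 2 tables. The counts are hypothetical, and the naive count-summing shown here differs from the weighted random-effects pooling Meta-DiSc actually performs; it is only meant to make the formulas concrete.

```python
import numpy as np

# Hypothetical per-study 2x2 counts (TP, FP, FN, TN); the real meta-analysis
# pooled 12 studies with 1142 lesions in total.
studies = np.array([
    [45.0, 6.0, 7.0, 40.0],
    [60.0, 10.0, 9.0, 55.0],
    [30.0, 4.0, 6.0, 35.0],
])

tp, fp, fn, tn = studies.sum(axis=0)  # naive pooling by summing counts
sen = tp / (tp + fn)                  # sensitivity
spe = tn / (tn + fp)                  # specificity
plr = sen / (1 - spe)                 # positive likelihood ratio
nlr = (1 - sen) / spe                 # negative likelihood ratio
dor = plr / nlr                       # diagnostic odds ratio
print(f"Sen={sen:.2f} Spe={spe:.2f} PLR={plr:.2f} NLR={nlr:.2f} DOR={dor:.1f}")

# Heterogeneity on the per-study log DOR: Cochran's Q and the I^2 index.
t, f_, n_, tn_ = studies.T
log_dor = np.log((t * tn_) / (f_ * n_))
var_log_dor = 1 / t + 1 / f_ + 1 / n_ + 1 / tn_   # approximate variance
w = 1 / var_log_dor                               # inverse-variance weights
pooled = (w * log_dor).sum() / w.sum()
q = (w * (log_dor - pooled) ** 2).sum()           # Cochran's Q
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100                 # I^2 > 50% flags heterogeneity
print(f"Q={q:.2f}, I^2={i2:.0f}%")
```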
The slope was not significant (Fig. 5), suggesting the absence of potential publication bias. Discussion Ovarian cancer is one of the most fatal cancer-related diseases among women and is even frequently diagnosed in young women. At initial diagnosis, most women present at an advanced stage. Application of new techniques for differentiating between malignant and benign ovarian lesions therefore has a positive impact on patient management. [17] A recently published study [45] discussed the diagnostic performance of DWI in ovarian cancer, but that study focused on the difference in ADC values between benign and malignant ovarian lesions, without clearly evaluating the diagnostic accuracy of DWI, such as specificity and sensitivity. A similar systematic review published in 2015 [41] on this topic reported the diagnostic accuracy of DWI in ovarian cancer. Since its publication, several new studies have emerged assessing the performance of DWI in detecting malignant ovarian cancer. Our objective was to provide an updated overview on this topic. This article includes 5 of the 10 studies in the previous review as well as 2 studies published in 2015.

Figure 2. Forest plot of sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio of DWI for detection of ovarian cancer. Solid circles represent the study-specific point estimates; horizontal lines indicate the 95% confidence intervals (CI); the diamond represents the pooled estimates and 95% CI. DWI = diffusion weighted imaging, LR = likelihood ratio.

Table 3 footnote: Methodological quality was assessed using the quality assessment of diagnostic accuracy studies criteria, items 1–14, covering, among others, a representative patient spectrum, clearly described selection criteria, an appropriate and independent reference standard applied to the whole sample within a short interval, sufficiently detailed descriptions of the index test and reference standard to permit replication, and interpretation of the index test without knowledge of the reference standard results.

Although a recently published meta-analysis [46] also addressed this topic, some obvious shortcomings should be mentioned. First, the latest studies on DWI for diagnosing ovarian cancer were not included in that meta-analysis. Second, the included studies covered a relatively narrow geographical region: most were conducted in China, and 4 out of 10 were published in Chinese, which might make their results less representative. Third, the general QUADAS scores were not favorable, with most studies scoring less than 9 points, which undermines the credibility of the results. Fourth, the main characteristics of the included studies were not described in detail.
The defects mentioned above have been addressed in our meta-analysis, which covered studies from around the world and scored high for study quality. The results demonstrated that for ovarian cancer detection, DWI had both moderately high sensitivity (86%) and specificity (81%). The high sensitivity and NPV of DWI indicate a higher rate of correct diagnosis for patients in early stages. [47] The AUC calculated from the SROC curve was 0.9160, indicating a promising result. Significant heterogeneity existed between the 12 included studies in our analysis. We found no significant threshold effect in the ROC plane and thus first eliminated the threshold effect as the source of heterogeneity. DWI is a functional measure of the tumor microenvironment, with quantitatively calculated ADC values used to improve diagnostic accuracy. ADC values mainly depend on extracellular/intracellular components and reflect the diffusion characteristics of water in tissues. [33] Small ADC values demonstrate restricted diffusion, which tends to indicate the presence of malignant tissue or hypercellularity. [39] In some studies there was a significant difference in ADC values between benign and malignant masses, with an optimal cut-off value, showing that the ADC value is useful in discriminating ovarian cancer from benign masses. Sensitivity, specificity, PPV, and NPV were reported with a corresponding cut-off in each article, but the cut-off values varied among the studies. [40,48,49] There was overlap in ADC values between malignant and benign lesions. The pathologic structures of benign tumors such as fibromas, Brenner tumors, and cystadenofibromas probably contributed significantly to this apparent discrepancy. Inside the extracellular matrix of benign fibrous tumors, the presence of a dense network of collagen fibers and abundant collagen-producing fibroblastic cells decreases the ADC value. [33] In addition, malignant tissues can exhibit increased ADC values due to the existence of necrosis or cystic areas and fluid collections intervening between papillary components. [40] Meanwhile, we identified 2 articles on DWI for differentiating borderline from malignant ovarian lesions. [50,51] Histologically, borderline ovarian lesions are characterized by both benign and malignant features; thus, they were excluded from this review in order to avoid increasing the uncertainty of the analysis results. Notwithstanding, some limitations of the meta-analysis should also be acknowledged. First, only a small number of studies were included in the final meta-analysis, because many studies were excluded based on the eligibility criteria and may not have been qualified to evaluate diagnostic accuracy. All included studies were published in English, which may have excluded some of the gray literature. Second, the MRI protocols for the diagnosis of ovarian cancer were not standardized: not all studies used similar DWI parameters, such as b-value and magnetic field strength. Studies used 1.5 T or 3 T, and b values varied from 400 to 1500. Standardization of the DWI protocol for ovarian cancer across multicenter studies is recommended. Finally, the considerable overlap of ADC between cancerous and noncancerous tissue made it difficult to determine a cut-off value, which might be a source of statistical heterogeneity. [52] In conclusion, DWI, as an accurate noninvasive imaging method, is a useful tool for the diagnosis of ovarian cancer. Still, further prospective research is required to establish the value of DWI for the diagnosis of ovarian cancer.
Perceptions about massive environmental impacts: a Brazilian study case Abstract The year 2019 brought three environmental impacts of high socio-environmental proportions in Brazil: the dam collapse in Brumadinho, oil spills on the coast, and fires in the Amazon. We investigated the Brazilian population's perceptions of the country's overall environmental situation, the degree to which Brazilians felt affected by these impacts considering personal and social factors, and the entities they held responsible for these disasters. Through Facebook's social media networks, we disseminated structured online surveys to Brazilian citizens above 18 years of age. Educational background explained how much the 775 respondents felt affected by the three evaluated events. Age was an explanatory factor for the degree to which the respondents felt affected by the dam collapse, as was proximity to the disasters, while income level was an explanatory factor for the dam collapse and the fires in the Amazon. The government, criminal activity, and private companies were considered to be the parties mainly responsible for these three impacts. This perception reflects the series of changes in the country's environmental laws and protections that threaten biodiversity and the environment. INTRODUCTION The term "environmental impact" refers to any environmental changes arising from, or aggravated by, anthropogenic activities in the biotic, physical, and socioeconomic environments (Sánchez 2020). In this view, environmental impacts come from actions such as mining (Yang et al. 2020), natural disasters (Amato et al. 2020), accidents (Hou 2012), and crime (Williams & Dupuy 2017). Furthermore, there are contextual differences in the definitions of environmental impact and its causes among experts and laypeople, mainly in factors that affect the economy (Truelove & Gillis 2018). Perception studies evaluate how people organize, identify, and interpret data through their senses and previous experiences (Colley & Craig 2019, Heidbreder et al. 2019, Shackleton et al. 2019). Environmental perception is multifactorial, based on each person's natural experience and beliefs that are derived from values and norms (Bennett et al. 2017), which may motivate pro-environmental attitudes (Cruz & Manata 2020) or, at least, a tendency to respond with some degree of positivity to a situation (Jones & Dunlap 1992). These attitudes and perceptions can be influenced by personal and social factors, like age, gender, socioeconomic status, and basic opinions on economics, politics, and technology (Aslanimehr et al. 2018, Dorsch 2014, Gifford & Nilsson 2014, Kilbourne et al. 2002, Xiao & McCright 2015). Specific events or issues can generate particular understandings and influence people to respond in specific ways (Colley & Craig 2019, Heidbreder et al. 2019, Shackleton et al. 2019). In Brazil, recent years have been marked by several events and changes in environmental laws and policies that threaten the country's natural resources (Abessa et al. 2019). The year 2019 was notable in terms of damaging events of significant socio-environmental proportions (Capelari et al. 2020), especially the Mina Córrego do Feijão dam collapse (hereafter, dam collapse) (Silva Rotta et al. 2020), oil spills on the coast (Soares et al. 2020), and fires in the Amazon (Silveira et al. 2020). The following is a brief description of the three events.
On the 25th of January 2019, the "Córrego do Feijão" tailings dam collapsed in the city of Brumadinho (State of Minas Gerais), spilling about 12 million cubic meters of ore-laden mud (Thompson et al. 2020) over the administrative areas of the Vale S.A. mining company and surrounding communities (Porsani et al. 2019). It was one of the world's largest mining disasters and one of the most significant Brazilian socio-environmental and workplace accidents (Polignano & Lemos 2020), culminating in the deaths of 266 people, while 4 people remain missing (Vale 2022). Water accumulating on the dam's surface since its deactivation (2005), together with seepage, may have caused the dam to rupture (Silva Rotta et al. 2020). The resulting mud spill destroyed 70.65 ha of native Atlantic Forest (Thompson et al. 2020) and, after traveling 10 km, flowed into the Paraopeba River basin, affecting 18 other counties (Silva et al. 2020). The contamination of this river compromised the water supply for the regions dependent on this basin (CPRM 2019) and the surrounding area, impacting biota (Vergilio et al. 2020), flora, and tourism. The local community was also affected by unemployment or inability to work, food unavailability, and declines in mental and physical health (Polignano & Lemos 2020). The second event considered in this study was the vast series of fires that occurred in the Brazilian Amazon from July to December 2019. Brazil's National Institute for Space Research (INPE) recorded 78 570 distinct fires in this ecosystem (INPE 2021): almost a three-fold increase compared to the values observed in the previous year (Barlow et al. 2020). These outbreaks of fires were not significantly influenced by meteorological conditions (Kelley et al. 2021, Silveira et al. 2020). The main causes have been attributed to accumulated deforestation (Barlow et al. 2020, INPE 2021) and the country's political instability (Escobar 2019a, Soares et al. 2020), which stimulated landowners and farmers to set fires to clear land (Silveira et al. 2020). The fires caused large ecosystem damage and released greenhouse gases into the atmosphere (Lovejoy & Nobre 2019), further contributing to climate change. The gas and particulate matter emissions from the fires also affected air quality (Lovejoy & Nobre 2019, Marlier et al. 2020), causing respiratory ailments in human beings (Marlier et al. 2020). Other impacts of fires are disruptions to social processes and functioning, psychosocial consequences, reduced tourism, and loss of the landscape's aesthetic value (Paveglio et al. 2015). Moreover, the 2019 fires increased the instability and vulnerability of local communities of the Amazon, including indigenous and riverside communities (ISA 2020). The third event observed in this study was the crude oil spills first observed on the Brazilian coast (mainly in the Northeast) in August 2019. The incident peaked through December 2019 (Soares et al. 2020), but large slicks were reported in June 2020 and July 2021, during the COVID-19 pandemic, and again in October 2022 (Bahia state) (Sousa 2022). This was considered the worst environmental disaster to have occurred in Brazil and the most extensive in tropical oceans (Soares et al. 2022), as about 5379 tons of oil residue (Oliveira et al. 2022), a toxic and carcinogenic substance (Pena et al.
2020), was found along more than 3000 km of beaches in 11 Brazilian states (IBAMA 2020). The local communities suffered impacts on their health due to direct contact with the oil and, indirectly, from contaminated fish, the inability to fish, or the devaluation of their catch (de Oliveira Estevo et al. 2021). Additionally, the communities were impacted by a reduction in tourism and local economic activity (e.g., food, accommodation, leisure, shops, and general services; Câmara et al. 2021), and by unemployment (Ribeiro et al. 2021). The COVID-19 pandemic aggravated the oil spill's damage, contributing to a synergistic effect on the economy, public health, and ecology (Magalhães et al. 2021). The causes and culprits of the oil spill are still uncertain, but the Federal Police holds a Greek-flagged ship responsible for this disaster (Porto 2021). Despite the environmental and social dimensions of this event, more than three years later information about the origin of the oil is still missing, as are adequate attention to the socio-environmental damage and investment in research and public policies to analyze and mitigate the impacts (Soares et al. 2022). The consequences of these three events of 2019 can take years to reverse. Their medium- and long-term effects are not known, as we consider human perception and awareness to be temporally and spatially dynamic with the environment itself (Mónus 2020, Truelove & Gillis 2018). Here, we investigated human perception of the country's overall environmental situation and the above-described socio-environmental events. We used surveys to evaluate (i) people's perception of the environmental situation in the last five years, (ii) the degree to which they felt affected by the three major disastrous ecological events of 2019, considering socioeconomic characteristics such as age range, gender, income, education level, and proximity to the impacted areas, and (iii) whom they held responsible for these events (Figure 1). Objectives i and ii provide a social diagnosis, as understanding people's perceptions of high-impact ecological events in the same country can yield insights regarding the interaction of society and the environment. This opens up an opportunity to contribute to identifying ways to reduce the future impacts of environmental changes on society. Objective iii relates to the context-dependent social construct of whom society interprets as responsible for threats to ecological integrity. Such understandings are imperative in megadiverse countries that are constantly threatened by human activities (Jones & Dunlap 1992). Data collection We used structured online surveys (Supplementary Text 1) to investigate perceptions concerning these three events and the Brazilian environmental situation in the last five years. We disseminated our surveys using Facebook's social media networks (SMNs), a useful research tool for the Social Sciences (Kosinski et al.
2015) (see Appendix A). We implemented a paid advertising campaign to target Brazilian citizens (from all states) above 18 years of age (the legal age in Brazil) between the 12th of May and the 9th of September of 2020. All advertisements shared on Facebook were also shared on Instagram with the same ad configurations. The online form was left available until we had enough respondents to reach a 95% confidence interval (Taherdoost 2017), corresponding to 938 people, considering a Brazilian population of 209 500 000 (IBGE 2021). This survey followed the standards of the Human Ethics Committee of the Federal University of Goiás and received approval number #3.971.032/2020. Data analysis From the 938 answered forms, we obtained the proximity-to-impact variables by estimating the Euclidean distance from the centroid of each respondent's state of residence to the centroid of the affected state. For the dam collapse, we considered the centroid of the state of Minas Gerais, and for the other two events, we took the shortest distance between the respondent's state of residence and all affected states. We estimated the centroids of the Brazilian states using the Political Boundaries of Brazilian States vector map (INDE 2018), which we converted to the South America Albers Equal Area Conic projection in QGIS (QGIS Development Team 2019) to maintain accurate area measurements. We then computed the centroid distances using the "spDists" function from the "sp" R package (Pebesma & Bivand 2005). Furthermore, for each state, we measured the average number of aspects of respondents' lives that were affected by the three events. We represented this spatial distribution with maps elaborated in QGIS (QGIS Development Team 2019). We performed all statistical analyses with 775 of the 938 answered forms, excluding duplicates, non-binary respondents (only 1.4% of respondents), incomplete forms, and outliers (Supplementary Material - Figure S1). We used a Likert scale to measure the perception of Brazil's environmental situation in the last five years (Table I). We used generalized linear models to investigate how much people felt affected by these impacts considering the characteristics of the respondents. We quantified the number of aspects of respondents' lives that were affected and analyzed them in relation to socioeconomic characteristics such as age range, sex, income, education level, and proximity to the impacted areas (Table I). We ran separate full models for each impact and fitted them with Poisson error distributions. We obtained the Minimal Adequate Models (MAMs) by removing non-significant predictor variables (p > 0.05) from the full models (Crawley 2013). We used the p- and Z-values of the MAMs to make our inferences. We used the "hnp" function from the "hnp" R package (Moral et al. 2017) to verify the models' assumptions (e.g., homogeneity and normality; Zuur et al.
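The modelling step described above was done in R; as a rough, language-agnostic illustration, the Python sketch below fits an analogous Poisson GLM with statsmodels. The data frame and column names are hypothetical, not the study's actual variables.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical respondent-level data (column names are illustrative only).
df = pd.DataFrame({
    "n_affected_aspects": [0, 2, 1, 3, 0, 1, 2, 4],   # count response
    "age": [22, 35, 41, 28, 55, 33, 47, 26],
    "income": [1, 3, 2, 4, 2, 1, 5, 3],               # income bracket
    "education": [2, 4, 3, 5, 2, 3, 4, 5],            # education level
    "proximity_km": [120, 800, 430, 60, 1500, 950, 300, 75],
})

# Full Poisson GLM: number of affected life aspects vs. respondent traits.
model = smf.glm(
    "n_affected_aspects ~ age + income + education + proximity_km",
    data=df,
    family=sm.families.Poisson(),
).fit()
print(model.summary())  # non-significant terms would be dropped to reach the MAM
```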
2010). To do so, we plotted the residuals versus the fitted values, performing a diagnostic analysis based on half-normal plots with a simulated envelope (Figure S2). To assess the independent contribution and relative importance of each predictor variable in our full models, we performed a hierarchical partitioning analysis (Murray & Conner 2009). We used R² goodness of fit as the evaluation parameter, which allowed us to interpret the independent effects of each predictor as the proportion of the variance explained. For the hierarchical partitioning analysis, we also used Z-values, with values > 2 indicating predictor importance, based on a randomization test with 100 iterations (Mac Nally 2002). We investigated the respondents' perceptions of those responsible for the assessed impacts by exploring the predetermined choices in the survey (Table I). We analyzed the answers with the "wordcloud" function from the "wordcloud" R package (Fellows 2018), which aggregates similar alternatives and represents them graphically according to their frequency. All statistical procedures were performed with the software R v.4.0.1 (R Core Team 2020). RESULTS The 116 sampling days resulted in 775 analyzed forms, 497 of which were answered by women (64.04%) and 279 by men (35.95%). Brazilians from almost all states participated in the survey, with the exception of the state of Roraima. The state with the greatest participation was São Paulo. The respondents' age ranges varied from 18-22 to 81-90, with 31-40 being the most representative (n=213; 27.44%) and 81-90 the least (n=4; 0.51%). A total of 220 (28.35%) respondents had incomes between one and three times the national minimum wage, and 18 respondents (2.31%) were in the highest sampled income bracket. Finally, the predominant education level was university education (Table SI). Most of the respondents (65%) reported that the environmental situation in the last five years had worsened considerably, while 26.2% perceived that it had worsened, 5.2% reported that it had remained stable, 2.6% that it had improved, and none answered that the situation had improved considerably. Thus, in total, 91% of respondents perceived a worsening of the environmental status in Brazil (Table SI). The analyzed environmental impacts affected at least one aspect of life for 391 (50.45%) of the respondents in the case of the dam collapse, 461 (59.48%) for the oil spills on the coast, and 528 (68.12%) for the fires in the Amazon. The average number of affected aspects of respondents' lives was 1.19 for the dam collapse, 1.43 for the oil spills on the coast, and 1.60 for the fires in the Amazon. The states where, on average, the respondents felt more affected varied for each impact (Figure 2).
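Hierarchical partitioning averages, for each predictor, its goodness-of-fit improvement over every model hierarchy in which it can appear. A compact Python sketch of the R²-based version (following the classical scheme; the data and array shapes are hypothetical) is given below.

```python
from itertools import combinations
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y, cols):
    """R-squared of an OLS fit using only the predictor columns in `cols`."""
    if not cols:
        return 0.0  # the null model explains nothing
    model = LinearRegression().fit(X[:, cols], y)
    return model.score(X[:, cols], y)

def independent_effects(X, y):
    """Hierarchical partitioning: for each predictor j, average the R² gain
    from adding j, first within each subset size, then across sizes."""
    p = X.shape[1]
    effects = np.zeros(p)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        level_means = []
        for size in range(p):  # subset sizes 0 .. p-1
            gains = [
                r2(X, y, list(s) + [j]) - r2(X, y, list(s))
                for s in combinations(others, size)
            ]
            level_means.append(np.mean(gains))
        effects[j] = np.mean(level_means)
    return effects

# Hypothetical data: 40 observations, 3 predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=40)
print(independent_effects(X, y))  # predictor 0 should dominate
```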
The explanatory variables that composed the MAMs varied for each impact (Table II). Education level was the only explanatory variable common to all three impact models (Table II, Figure 3). Gender was the only variable that did not explain any variation for any of the events. On average, among our respondents, people with higher levels of education were affected in more areas of life than those with lower levels of education (Figure 3a-c). Income explained how much the respondents felt impacted by the dam collapse (Figure 3a) and the Amazon fires (Figure 3c), but not by the oil spills (Figure 3b). For the first two events, our data show that people with higher incomes were more affected by the impacts; for the oil spill on the coast, however, income had little influence on how affected the respondents felt. Age range was related to the respondents' perception only for the dam collapse, for which younger people were the most affected (Figure 3a). Proximity to the impact was relevant for the dam collapse (Figure 3a) and the Amazon fires (Figure 3c), with respondents living near the regions where these events occurred feeling more affected than people living farther away. Overall, level of education was the most important descriptor for the oil spills on the coast and the fires in the Amazon, accounting for about 50% of the explanation of the full model. The dam collapse was better predicted by proximity to the disaster, followed by level of education, age, and income (Figure 3a). Most respondents reported that private companies were the main actors responsible for the dam collapse and the oil spills on the coast (Figure 4). For the fires in the Amazon, the top-ranked culprit was criminal activity (Figure 4c). The government and criminal activity were both placed in the top three positions of actors responsible for each of the three impacts (Figure 4a-c). DISCUSSION Our sampled population, limited to people who can read, use a computer, have internet access, and use social media platforms, mostly perceived a worsened environmental status, which is congruent with the Brazilian political crisis (Escobar 2019b, Wade 2016) and a series of changes and events related to the environment, such as actions and changes in environmental laws that endanger biodiversity protection (Abessa et al. 2019, Barbosa et al. 2021). The last decade's reduction of investment in national environmental protection has also been remarkable (Barbosa et al. 2021). Similarly, federal environmental agencies have been weakened by the replacement of specialists with military officials or by the appointment of officials without training in environmental protection (Vale et al. 2021). Moreover, the occurrence of, and knowledge about, these three disastrous ecological events (Barbosa et al. 2021) within the last five years would contribute to a more negative perception of the environmental situation in Brazil. We revealed how the three disastrous environmental events of 2019 affected the livelihoods of our surveyed respondents. However, it is important to highlight that the concept of environmental perception depends on history, culture, and many individual characteristics (Dietz et al. 1998, Jones & Dunlap 1992). These internal conditions, plus social and psychological aspects, can engender personal environmental concerns and pro-environmental behavior and attitudes (Bennett et al.
2017, Colley & Craig 2019, Corraliza & Berenguer 2000, Cruz & Manata 2020, Jones & Dunlap 1992). Our results show that the way each respondent felt affected by the environmental impacts on their lives varies among the three evaluated events in relation to their personal and social characteristics. This is not surprising, since the perception of environmental damage is a context-dependent social construction (Bennett et al. 2017, Brody et al. 2004). As we expected, the level of education explained the number of affected aspects of life for all impacts. Knowledge and level of education have both been considered predictors of environmental concern (Gifford & Nilsson 2014, Jones & Dunlap 1992), and educated people tend to feel more strongly affected by environmental problems (Gifford & Nilsson 2014). Pro-environmental behaviors and attitudes depend directly on having adequate knowledge about environmental issues (Robelia & Murphy 2012). However, we understand that it is complex and controversial to determine a cause-and-effect relationship between education and the population's ability to perceive a greater number of risks. Socioeconomic characteristics alone cannot predict environmental perception, since other factors such as history and culture aid in its determination (Bennett et al. 2017), but it is known that they can act as modifiers or amplifiers (Wachinger et al. 2013). People living in proximity to the dam collapse and living in the states directly impacted by the fires in the Amazon felt more affected than people who lived farther away. Previous studies have also found that proximity to impacted areas contributed to how affected people felt (Brody et al. 2004, Gifford & Nilsson 2014). However, proximity would not be a determinant factor without a personal experience of damage, as observed in people's perception of climate change and its possible consequences (Lujala et al. 2015). The age factor was relevant only for the dam collapse. Younger people, aged 18 and younger, tend to feel more damaged psychologically and emotionally than older people, despite the fact that the direct consequences of the disasters in their lives, such as material losses, were equal for both (Gifford & Nilsson 2014, Ngo 2001). Similarly, people with higher incomes felt more affected for all events, but this factor had low importance for the oil spill impact. Franzen and Meyer (2010) observed a positive correlation between environmental concern and gross domestic product (GDP) per capita, converging with the profile of most upper-middle-class environmentalists. A potential explanation for this is an inversion of people's values with increasing income, from materialist to post-materialist, focused on self-development and well-being (Gökşen et al. 2002). When income increases to such a point, basic material needs do not require great time and effort, and other aspects, such as education and environmental concern, are prioritized (Gökşen et al. 2002). This can also be inferred at higher social scales, where rich people from developing countries tend to be more environmentally concerned than those from poorer ones (Fairbrother 2013).
As these studied environmental impacts were huge, their repercussions reached a broad, worldwide audience (The Washington Post 2019). However, some particularities of geographical scale should be highlighted to provide insights into Brazilian perceptions of each of the three disastrous events. The dam collapse happened in a limited area, having an immense effect on local people's lives and the regional ecosystem (Polignano & Lemos 2020, Thompson et al. 2020). The oil spill, on the other hand, occurred at a broader scale, affecting those who live in coastal areas as well as Brazilians who come from non-coastal areas for recreation and tourism (Soares et al. 2020). In contrast, although the fires in the Amazon occurred at the biome scale, they affected not only local people but also populations from other regions, as the smoke extended to southeastern states (Lovejoy & Nobre 2019). This impact additionally raised international concerns (The BBC News 2020, The Guardian 2020). The three events occurred in the same country; therefore, we could expect a national identity to shape a common pattern in how people felt affected. However, as Brazil is a country of continental dimensions, this scale-dependent context only emerged at the regional scale (dam collapse). Of the set of culprits presented as potentially responsible for the three environmental impacts, most of the respondents blamed private companies, the government, and criminal activity, although these groups ranked in different positions for each impact. Private companies ranked first for the dam collapse and the oil spill events, which would be expected considering their scope, impacts, and market control. Ten companies in the world hold more than 50% of the global production of nickel, iron, and copper, and the same number of companies hold 72% of the world's oil reserves (Folke et al. 2019). These industries damage the environment with habitat destruction, air and land contamination, loss of biodiversity, and more (Folke et al. 2019). In the case of the dam collapse, the mining waste that destroyed the Córrego do Feijão district and damaged a long stretch of the Paraopeba River came from a private company, Vale S.A. For the Amazon fires, criminal activities ranked first as the possible culprit, followed by the government. This is not surprising considering events like "Fire Day", on which farmers coordinated to set fires in agricultural and deforested areas (Silveira et al. 2020). As a consequence, 20% of the fire occurrences during 2019 happened in the two weeks that followed Fire Day (Silveira et al. 2020). Nevertheless, citizens will also question the government's responsibility for environmental impacts, since it is one of the government's duties to protect the country's biodiversity and natural resources (Brazil Law No. 6938/1981). Despite the potential and reach of SMNs, the voices of some groups can be omitted, and values such as loyalty, authority, and social bonds can be maximized in this environment, representing sampling biases on these platforms (Hargittai 2020). However, these biases do not invalidate Facebook as a research tool for demographic and psychometric aspects (Kalimeri et al.
2020). Another limitation for sampling is that only 21.7% of Brazilians have access to the Internet and social media networks (IBGE 2021). Since it is not possible to eliminate this type of bias, we limited the advertising campaign to a random sample of the legal-aged population (age 18 and over in Brazil). Although the biases are reduced, they are still present. There is also a "distance bias" associated with the determination of a central point to assess, in the results, the effects of the distance between the respondent and the events. Some of these events, such as the oil spill on the Brazilian coast and the Amazon fires, happened over widespread areas, and Brazil is a continental-sized country with geographically large states. To minimize this bias, we calculated centroids based on the respondent's home state and the location of the environmental impact; these procedures are detailed in the Methods section. Despite addressing this bias, our results show a spatial distribution of perception consistent with the view that respondents living closer to an impacted area are possibly more concerned with its environmental quality (Brody et al. 2004). In regard to the entities that respondents held responsible for these disastrous events, we provided predetermined choices on the online form (Supporting Text S1) as well as a blank space for other possibilities. However, few respondents used this opportunity, and the low number of responses did not allow for further analysis. People's perceptions of a country's environmental situation are linked to the history of actions and positions that the country took when facing events that changed ecosystems and impacted biodiversity (Cionek et al. 2019, Colley & Craig 2019). The determination of environmental liability is tied to a lengthy judicial process. When the process determines culprits, the fines imposed do not recover the damage caused and do not reflect the real cost of lost biodiversity (Garcia et al. 2017, Ziliotto 2020). Moreover, those responsible for the impacts try to judicially exempt themselves from socio-environmental responsibility (Barbosa et al. 2021) or neglect to pay the imposed fines, as in the case of the dam collapse (Cionek et al. 2019, Garcia et al. 2017). Additionally, the fines cannot compensate for the huge environmental damage (Ziliotto 2020). This legal instability related to environmental issues is likely to foster the population's mistrust in the government's duty of biodiversity protection. Finally, considering the weakening of Brazilian environmental protection and the currently poor environmental governance, new disasters are likely to happen (Barbosa et al. 2021, Cionek et al. 2019, Ferrante & Fearnside 2019, Garcia et al. 2017). Indeed, it did not take long: oil slicks were again found on the Brazilian coast in 2022 (Sousa 2022), the Amazon caught fire again in 2020 (NASA Earth Observatory 2020), and the Pantanal biome suffered one of its greatest fire episodes (Garcia et al. 2021). Although these disasters have consequences for the rest of the world, the biodiverse environment of Brazil and its people are the entities most affected by these disastrous events. As part of this nation's history, these events are shaping social perceptions and the decision-making processes in relation to the environment.
Figure 1.Conceptual framework for the surveys and interviews assessing perceptions held by Brazilians of their country's overall environmental situation and the three massive environmental impacts in 2019 (the dam collapse, oil spills on the coast and fires in the Amazon). FLÁVIA DE F. MACHADO et al.PERCEPTIONS ABOUT MASSIVE ENVIRONMENTAL IMPACTS An Acad Bras Cienc Figure 2 . Figure 2. The average perception of negative impacts on aspects of life, by Brazilian states, caused by the dam collapse (a), oil spills on the coast (b), and fires in the Amazon (c). Figure 3 . Figure 3. Perception of how much the respondents felt affected by the dam collapse (a), oil spills on the coast (b), and fires in the Amazon (c), concerning age range, income, education level, and proximity to the impacted areas.Dark gray bars represent significant effects (Z > 2) of the independent contribution of each explanatory variable (relative importance) on the perception of each disaster. Furthermore, Brazil had already experienced other huge dam failures in the recent past, such as the Fundão mine located in the municipality of Mariana, which was owned by the Samarco Company, controlled by Vale S.A. (Cionek et al. 2019, Garcia et al. 2017).The private monopoly of the world's oil reserves seems to be related to private companies ranking as the main culprit for the oil spill on the Brazilian coast (Folke et al. 2019).However, at the moment of this writing, the culprits that are legally responsible for this environmental disaster have not yet been identified (Barbosa et al. 2021). Figure 4 . Figure 4. Word clouds representing the frequency of respondents' choices about who they hold responsible for the dam collapse (a), oil spills on the coast (b), and fires in the Amazon (c) -Environmental impacts of 2019. FLÁVIA DE F. MACHADO et al.PERCEPTIONS ABOUT MASSIVE ENVIRONMENTAL IMPACTS An Acad Bras Cienc (2023) 95(2) e20220335 12 | 16 the decision-making processes in relation to the environment. Table I . Variables used in the analyses to measure the Brazilian people's perception of the country's environmental situation in the last five years considering the dam collapse, oil spills on the coast, and the fires in the Amazon -Environmental impacts of 2019. Table II . Deviance table of the Minimum Adequate Models for how much the respondents felt affected by the three major environmental impacts of 2019 in relation to their age range, income, education level, and proximity to the impacted areas.
Mutational Analysis of Driver and Non-driver Mutations of Philadelphia Chromosome-negative Myeloproliferative Neoplasms; Diagnosis and Recent Advances in Treatment

Myeloproliferative neoplasms (MPNs) are hematological disorders affecting myeloid stem cells. They are classified as Philadelphia (Ph) chromosome-positive chronic myeloid leukemia, and the Ph-negative entities polycythemia vera, essential thrombocythemia, primary myelofibrosis, chronic neutrophilic leukemia, chronic eosinophilic leukemia, juvenile myelomonocytic leukemia, and MPN unclassifiable. This review is mainly focused on the Ph-negative MPNs, namely PV, ET, and PMF. These affect both males and females with a slight male predominance, with patients mainly presenting in the seventh decade. Patients often present with thrombotic events, resulting in complications that lower survival rates. The major driver mutations that have been identified in MPNs are JAK2 Exon 14, JAK2 Exon 12, MPL Exon 10, and CALR Exon 9. The importance of these driver mutations is recognized by their inclusion in the diagnostic criteria of the 2022 WHO classification of MPNs. However, other non-driver mutations have also been reported, especially in triple-negative cases. These mutations lead to downstream constitutive activation of the JAK/STAT signaling pathway, as well as the MAPK and PI3K/Akt pathways. Insights into the molecular pathogenesis of MPN and its association with JAK2, CALR, and MPL mutations have identified JAK2 as a rational therapeutic target. Thus, as an approach to MPN therapy, JAK2 inhibitors, such as ruxolitinib, have been shown to effectively inhibit JAK2, and are currently in clinical trials in combination with other drug classes. This review comprehensively examines the molecular markers of the main Ph-negative MPNs, as well as diagnosis and treatment options.

Introduction
MPNs arise from the clonal expansion of bone marrow stem cells, resulting in unregulated proliferation of the myeloid cells: major overproduction of erythroid cells in PV; of platelets in ET; and, in PMF, of megakaryocytes secreting platelet-derived growth factor (PDGF) and monocytes secreting fibroblast-activating factors, leading to fibroblast proliferation and bone marrow fibrosis [3]. This can evolve into excess cells in the peripheral blood leading to thrombosis, overt myelofibrosis, or accumulation of mutations leading to acute leukemia, making diagnosis, risk assessment, and therapeutic approaches difficult [4].

The breakthrough discovery of the JAK2 V617F mutation, its role in diagnosis of the MPN subtype, and the development of novel molecular diagnostic technologies permitted the identification of molecular markers. This made molecular testing the cornerstone of the investigation of MPN in a routine diagnostic setting [5]. The detection of mutations in JAK2 Exon 14, JAK2 Exon 12, MPL Exon 10, and CALR Exon 9 is now a major diagnostic criterion within the 2022 WHO classification of myeloid neoplasms. Mutations in these genes are associated with constitutive activation of the JAK/STAT signaling pathway involved in sustained cell proliferation and survival of hematopoietic cells, and thus are linked to the various diagnosed MPN phenotypes. Other clonal markers include non-driver mutations in genes affecting epigenetic control (TET2, ASXL1, and DNMT3A), splicing regulators (SRSF2, SF3B1, U2AF1), and regulators of chromatin structure and cellular signaling (EZH2, IDH1, IDH2, TP53) [6]. Here, we comprehensively review the molecular markers of the main Ph-negative MPNs, as well as the diagnosis and treatment options.
Epidemiology
MPNs affect both males and females with a slight male predominance, especially affecting those >60 years of age [4]. Common risk factors predisposing to MPNs include smoking, family history, obesity, occupation, and ionizing radiation [3]. Based on a meta-analysis of studies, the estimated annual incidence of MPN was reported as 2.17 cases per 100,000 of the population, with pooled annual incidences of 0.84 per 100,000 for PV, 1.03 per 100,000 for ET, and 0.47 per 100,000 for PMF across Europe, North America, Australia, and Asia [3,7]. A study done in 2020 by Yassin et al. [8] identified that the prevalence and incidence of MPNs were estimated to be 57-81 and 12-15, respectively, per 100,000 hospitalized patients per year over the last 4 years in Asia, including the Middle East, Turkey, and Algeria. Recent literature estimated the annual global incidence of MPN in pediatric patients at 0.82 per 100,000 patients and found children, adolescents, and young adults (40 years or below) to represent a small, infrequent portion of MPN cases [9,10]. The National Cancer Registry, Sri Lanka (2020) [11], reported an age-specific incidence rate for MPN in males ranging from 2.6 to 6.1 cases per 100,000 individuals, while in females it ranged from 1.3 to 2.4 cases per 100,000 individuals. These rates were observed among individuals aged between 60 and 75 years.

Diagnostic approach
Accurate diagnosis of the MPN subtypes is important for patient prognosis and therapeutic recommendations [12]. The revised World Health Organization (WHO) diagnostic criteria for MPN, 2022 (Tables 1-3), highlight the importance of clinical evaluations in diagnosing PV, post-PV MF, ET, post-ET MF, pre-fibrotic/early PMF, and overt PMF. These include initial evaluation of palpable splenomegaly, thrombosis and hemorrhage, cardiovascular risks, and medical history [13]. Further investigations include hematological (i.e. complete blood count, peripheral blood smear), biochemical (i.e. serum lactate dehydrogenase, uric acid, vitamin B12, ferritin, C-reactive protein), and histopathological (i.e. bone marrow morphology) investigations and cytogenetic studies [12,14].

Molecular markers for MPN
The majority of Ph-negative MPN cases are due to a point mutation in JAK2 Exon 14 (V617F), which results in constitutive activation of the JAK/STAT signaling pathway. Various studies have identified mutations in other genes, including JAK2 Exon 12, CALR, and MPL, in patients with V617F-negative MPN. Hence, the WHO has identified testing for mutations in these genes as part of the diagnostic procedures for patients with MPN [6]. Different clinical phenotypes characterize the MPN subtypes, thereby making the accurate molecular diagnosis of the specific gene mutation critical to promote treatment and management outcomes.

JAK2 Exon 14 and Exon 12
The somatic gain-of-function point mutation in Exon 14, codon 617, of the erythropoietin receptor-associated tyrosine kinase Janus kinase 2 (JAK2 V617F) was discovered in 2005. It was found to be predominantly associated with Ph-negative MPNs and contributed to the understanding of the pathophysiology, pathogenesis, and underlying molecular drivers of MPN [4].
JAK2 has seven JAK homology domains, JH1-JH7; JAK2 kinase activity is regulated by the pseudokinase domain present in JH2, coded by Exon 14 of the JAK2 gene (Figure 1). A point mutation at V617F in Exon 14 causes constitutive activity of the JAK2 kinase (JH1) due to the removal of the auto-inhibitory role of the pseudokinase domain. This mutation is present in approximately 96% of cases with PV, 55% of cases with ET, and 65% of cases with PMF [14]. The binding of hematopoietic cytokines, such as interleukins, colony-stimulating factors, interferon, erythropoietin, and thrombopoietin, to JAK2 promotes signal transduction of the JAK/STAT signaling pathway. These synergistically generate constitutive activation of the JAK/STAT pathway via the transcription factors STAT3 and STAT5. They activate or repress genes important in hematopoiesis, cell division, cytokine-independent growth, and cell survival, such as c-MYC, CYCLIN D2, ID1, BCL-XL, and MCL-1 [15].

The V617F mutation also constitutively activates the mitogen-activated protein kinase (MAPK) and phosphoinositide 3-kinase/protein kinase B (PI3K/Akt) pathways, resulting in increased expression of mitotic proteins, cell cycle regulatory proteins, and antiapoptotic genes [16]. The MAPK/RAS pathway is involved in signal transduction through small GTPase RAS proteins [17]. RAS proteins are frequently mutated in human cancers, including MPN. Mutant RAS in the active GTP-bound conformation constitutively recruits RAF, a protein serine/threonine kinase, and activates the downstream MAPK signaling cascade to promote cell survival, proliferation, and differentiation [15]. PI3K is a heterodimer consisting of a catalytic and a regulatory subunit which, upon stimulation, phosphorylates and activates Akt, thereby activating the serine/threonine kinase mammalian target of rapamycin (mTOR). This mediates various cellular functions, including angiogenesis, metabolism, cell proliferation, survival, and apoptosis [18]. In MPN, activating mutations in the receptor tyrosine kinases, or loss-of-function mutation of PTEN, which plays a regulatory role in the PI3K pathway, promote the proliferation of hematopoietic stem cells (HSCs) [19]. Furthermore, bone marrow biopsies of V617F-positive patients show increased phosphorylation of Akt, which suggests hyperactivity of the pathway [20]. Dysregulation of the PI3K/Akt pathway is suggested to contribute to resistance to tyrosine kinase inhibitors, hence making management more difficult [21]. However, combined therapy involving inhibitors of the JAK/STAT, PI3K/Akt, and/or MEK pathways is expected to be efficacious [22,23]. Thus, a better understanding of the interconnectedness of these pathways with MPN pathogenesis will help improve treatment and management outcomes.
MPL Exon 10
The MPL gene (Figure 3), located on chromosome 1p34, encodes a thrombopoietin receptor which binds to thrombopoietin, the primary cytokine that regulates megakaryocyte development, platelet production, and hematopoietic stem cell homeostasis. This binding activates JAK2, phosphorylating MPL and initiating a signaling cascade that regulates cell survival, proliferation, and differentiation [14]. A mutation in the MPL gene, such as the W515L guanosine-to-thymidine substitution at nucleotide 1544, results in impaired function of the auto-inhibitory region and ligand-independent thrombopoietin receptor activation. This activates JAK2 tyrosine kinase and the transcription factors STAT3 and STAT5, leading to the transformation of hematopoietic cells into cytokine-independent clones, megakaryocytic hyperplasia, and marrow fibrosis [14]. This can be associated with lower hemoglobin levels at diagnosis and a high risk of transfusion dependence in patients with MF [26].

CALR Exon 9
The wild-type CALR gene (Figure 4), located on chromosome 19p13.2, encodes a Ca2+-binding chaperone protein, calreticulin, primarily localized to the endoplasmic reticulum (ER). The localization and retention of CALR are defined by the N-terminal signal sequence and the C-terminal ER retention sequence KDEL. Calreticulin is involved in Ca2+ homeostasis, disposal of misfolded proteins, phagocytosis, and immune response. It also mediates gene nuclear transport, apoptosis, and integrin-mediated cell adhesion. However, frameshift mutation, due to indels, changes the reading frame and results in the loss of most acidic amino acids, including the KDEL sequence found in the C-domain. This generates a new tail with low calcium-buffering capacity; therefore, the resulting mutant protein is positively charged, as opposed to the wild type, and leaks from the ER [1,30].

Recent studies have identified that the mutated calreticulin protein can induce the JAK/STAT signaling pathway by binding to the MPL receptor in a thrombopoietin-independent manner. Mutant calreticulin proteins are 'rogue chaperones' that acquire unique oncoprotein properties; they promote clonal expansion of HSCs and megakaryocytes via binding and activation of MPL through the N-terminal lectin-binding domain of the CALR mutants and their positively charged tail [1,31]. The MPL-mutant calreticulin complex requires cell-surface localization to induce autonomous activation of HSCs; this depends on full activation of the mitogen-activated protein kinase, JAK2-STAT5/STAT3, and phosphoinositide 3-kinase pathways [31,32]. In vitro studies have also revealed that cells co-expressing MPL and mutant calreticulin respond less to thrombopoietin, suggesting that the binding of mutant calreticulin competitively hinders the binding of the normal cytokine [33]. Other infrequent mutations in Exon 9 account for up to 15% of CALR mutations. The distribution of CALR mutation types differs according to the MPN type. When compared with JAK2-positive patients with ET, those with CALR mutation have higher platelet counts and lower hemoglobin levels and leucocyte counts. Also, CALR-positive patients with PMF have a characteristically lower leukocyte count and elevated platelet count. Furthermore, it has been identified that patients with ET and PMF carrying CALR mutations have a higher survival rate compared to JAK2-positive patients [27,34].
Other non-driver mutations in MPN
Studies have shown that a small group of patients negative for the three main driver mutations (i.e. triple-negative cases) acquire mutations in novel regions of the MPL and JAK genes, as well as in genes involved in regular cellular processes, including epigenetic regulation, mRNA splicing, and cytokine signaling regulation [6]. These genes can acquire mutations either before or after the onset of the driver mutations and are involved in disease evolution and progression to AML. It is considered that individuals, especially men and patients with PMF, having more than one mutation in these genes are at "high molecular risk" (HMR) and have a lower survival rate [35]; a small code sketch of this rule follows below. Detection of these mutations requires Next Generation Sequencing (NGS); however, employing this technique in routine analysis remains a challenge [36]. Critical mutations associated with the pathogenesis of MPN are illustrated in Figure 5.

TET2
The TET2 gene is involved in DNA methylation for epigenetic regulation. Mutations in the TET2 gene, including insertions, deletions, or substitutions, result in a loss of catalytic activity and hence are associated with various myeloid neoplasms. Minimal prognostic impact has been observed from TET2 gene mutations, especially in patients with JAK2 V617F-positive and -negative MPN [37,38]. A study by Lundberg (2014) [39] demonstrated that TET2 gene mutation can occur before or after JAK2 V617F mutation, resulting in clonal dominance. DNA hypermethylation is associated with the loss of function of the TET2 gene due to decreased 5-hydroxymethylcytosine (5hmC) production. This essentially affects HSCs, leading to the release of self-renewal signals, incorporation of new mutations, and progression to myelomonocytic differentiation [40].

DNMT2 and DNMT3
DNA methyltransferase 2, coded by the DNMT2 gene, is an enzyme that transfers a methyl group in epigenetic regulation. Less than 10% of cases of MPN are reported to arise from DNMT2 gene mutations, with exon 23 mutations being more prevalent [41]. Mutations in the DNMT3 gene are present at the early stage of MPN, in about 5-10% of reported cases [42]. Substitution at position 882 (R882H) is the most common mutation identified in myeloid neoplasms [43]. Patients with JAK2 V617F-positive MPN with loss of the DNMT3 gene are suggested to have increased progression to myelofibrosis, reduction of HSCs, and lowered erythropoiesis [44].

IDH1 and IDH2
Mutations in IDH1 and IDH2, which code for isocitrate dehydrogenase 1 and 2 respectively, result in the generation of small molecules that interfere with histone demethylation and consequently block hematopoietic differentiation [45]. Although IDH1/2 mutations have been found at low frequency (0.5-4%) in MPN cases and are more prevalent in PMF than in PV or ET, it has been suggested that mutation in these genes may facilitate progression to acute leukemia [46,47].

EZH2
EZH2 mutations are found in a higher proportion in PMF, in about 3% of ET, and infrequently in patients with PV. These mutations are suggested to result in complications in MPN, including poor survival and progression to myelofibrosis [48]. Studies conducted on transgenic mice have confirmed the critical effect of co-occurrence of JAK2 V617F and EZH2 mutations, showing faster progression to myelofibrosis and shorter life expectancy compared to JAK2 V617F alone [49].
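Before continuing with the remaining non-driver genes, the "high molecular risk" rule described at the start of this section can be expressed as a simple set check. This is a minimal sketch under the assumption that the HMR panel comprises the non-driver genes most often cited in this context (ASXL1, EZH2, SRSF2, IDH1/2) and that carrying more than one such mutation triggers the flag; the gene set, threshold, and function name are illustrative, not a validated clinical classifier.

```python
# Assumed HMR gene panel drawn from the genes discussed in this review.
HMR_GENES = {"ASXL1", "EZH2", "SRSF2", "IDH1", "IDH2"}

def is_high_molecular_risk(mutated_genes: set[str], threshold: int = 2) -> bool:
    """Flag a patient as HMR if >= `threshold` of their mutated genes
    fall in the assumed HMR panel (set intersection)."""
    return len(mutated_genes & HMR_GENES) >= threshold

print(is_high_molecular_risk({"JAK2", "ASXL1", "SRSF2"}))  # True
print(is_high_molecular_risk({"CALR", "TET2"}))            # False
```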
ASXL1
The ASXL1 gene codes for proteins involved in chromatin remodeling. Frameshift and nonsense mutations occur frequently in exon 13 of the gene and have a significant prognostic impact on patients with PMF; however, little evidence of this mutation has been demonstrated in patients with PV and ET [50]. This mutation has also been associated with leukocytosis, increased platelet count, and the presence of ≥1% circulating blasts [37]. About 47% of MPN cases that progressed to the leukemic phase were identified with heterogeneous mutations in the ASXL1 gene [51].

SF3B1, SRSF2 and U2AF1
Mutations in the SF3B1, SRSF2, and U2AF1 genes, which are involved in mRNA processing, have been reported in critical stages of certain rare cases of MPN. In PV and ET, spliceosome mutation is suggested to lead to poor survival outcomes [52]. SF3B1 mutations were first discovered in patients with myelodysplastic syndrome (MDS); however, patients with PMF have been observed to have mutation clusters in exons 12-16, with an increased risk of thrombotic events [53,54]. The SRSF2 and U2AF1 genes are involved in the activation or suppression of mRNA splicing. In about 19% of cases of PMF that progress to acute myeloid leukemia, somatic mutations in SRSF2 are frequently found coexisting with TET2, ASXL1, or RUNX1 mutations, and offer a poor prognosis for cases with MPN. Less than 2% of cases with MPN have an associated U2AF1 gene mutation; however, a strong indication has been reported for advanced-stage PMF and leukemic transformation [51,55,56].

TP53
TP53 is a major tumor suppressor gene involved in apoptosis and cell cycle arrest, and its loss of function is associated with various cancers [57]. TP53 mutation with loss of heterozygosity is associated with leukemic transformation in about 16-17% of cases of MPN [58]. MDM2 and MDM4 are upstream regulators of the TP53 gene and show increased expression and copy number in cases of MPN associated with leukemic transformation [59].

Molecular Diagnosis of MPN
Detection of mutations in JAK2 Exon 14, JAK2 Exon 12, MPL Exon 10, and CALR Exon 9 is important for early diagnosis and timely management of patients with MPN. Various molecular diagnostic methods have been developed to detect these mutations in patients with MPN (Table 4). The most widely used method to identify and quantify the causative gene mutation in MPNs involves allele-specific quantitative polymerase chain reaction (AS-qPCR). Other methods include allele-specific polymerase chain reaction (AS-PCR), high-resolution melt curve analysis, denaturing high-performance liquid chromatography (HPLC) followed by direct sequencing, capillary electrophoresis, pyrosequencing, restriction fragment length polymorphism (RFLP), fluorescent in situ hybridization (FISH), and Sanger sequencing and cloning [5,14]. Although used extensively, most of these methods, such as PCR-Sanger sequencing, have relatively poor sensitivity (detection limits above about 5% mutant allele burden) and limited coverage of the targeted genomic region [14], hence affecting the efficiency of the analysis. Next Generation Sequencing (NGS) has the potential to efficiently detect common, novel, or rare mutations in JAK2 Exon 14, JAK2 Exon 12, CALR Exon 9, and MPL Exon 10, with deep sequencing of the targeted amplicons [5]. It is increasingly being used in clinical laboratories, but there is only limited validation of the use of NGS along with PCR in detecting MPN-associated mutations, especially in the diagnosis of patients with triple-negative MPNs [12,14].
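The mutant allele burden that AS-qPCR and NGS assays report is, at its core, a variant allele frequency: the fraction of sequencing reads carrying the mutation at the interrogated position. The toy calculation below, with invented read counts, illustrates why a method with a ~5% detection limit would miss low-burden clones that deep sequencing can call; the counts and function name are illustrative assumptions.

```python
def variant_allele_frequency(mutant_reads: int, total_reads: int) -> float:
    """Variant allele frequency (VAF) = mutant reads / total reads at a locus."""
    if total_reads == 0:
        raise ValueError("no coverage at this position")
    return mutant_reads / total_reads

# Invented counts at the JAK2 V617F position from a deep-sequencing run.
vaf = variant_allele_frequency(mutant_reads=412, total_reads=5000)
print(f"JAK2 V617F allele burden: {vaf:.1%}")  # 8.2%

# A Sanger-based assay with a ~5% detection limit would call this positive,
# but a 1-2% burden would require a more sensitive quantitative method.
```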
Cytogenetic analysis of MPN
Cytogenetic investigation of MPN involves interphase FISH and karyotype analysis, preferably with a bone marrow aspirate specimen or with a peripheral blood or bone marrow core biopsy specimen [12]. Karyotype analysis distinguishes MPN from other closely similar myeloid malignancies presenting with thrombocytosis, such as MDS with chromosome 5q deletion, thereby aiding prognosis. In triple-negative patients with MPN, karyotype analysis is critical for confirming clonal hematopoiesis. Cytogenetic analysis of PV and ET patients is suggested to be of significance only in cases with triple-negative MPN [60]. In PMF cases, karyotype analysis is crucial and may reveal its association with thrombocytopenia, greater than 1% circulating blasts, leukopenia, and reduced hemoglobin levels. Common findings of cytogenetic studies are gains of chromosomes 8 and 9, del(20q), del(13q), and abnormalities of chromosome 1, including duplication of 1q. Other less frequent lesions include -7/del(7q), del(5q), del(12p), +21, and der(6)t(1;6)(q21;p21.3) [61-63].

Standard Treatment
The increasing insight into the molecular pathogenesis of MPN and the finding that JAK2, CALR, and MPL mutations activate JAK2 signaling have revealed rational therapeutic targets, such as JAK2 inhibitors [64]. The standard treatment, particularly for PV, is therapeutic phlebotomy to achieve a hematocrit <45%, sometimes combined with other therapies. Antiplatelet drugs such as aspirin, and cytoreductive chemotherapy such as hydroxyurea (Hydrea®), α-interferon, or anagrelide (Agrylin®), are used instead of or in combination with phlebotomy for patients with a history of thrombosis, to suppress the excess production of blood cells [65].

Aspirin is indicated in the treatment of patients with moderate- to high-risk PV and ET and has been shown to control microvascular symptoms such as erythromelalgia, transient neurological symptoms, ocular disturbances, migraines, and seizures. Hydroxyurea is often used to treat older patients with high-risk PV and ET (>60 years old), and sometimes patients with low/intermediate-1 risk MF. It limits the bone marrow's ability to make blood cells and thereby reduces the platelet count and the incidence of thrombosis or leukocytosis [66].

Interferon may be used to stimulate the immune system and slow down the production of RBCs in patients not achieving sufficient thrombocyte control with cytoreductive therapy. BESREMi®, a mono-pegylated long-acting interferon, was the first interferon specifically approved (in 2021, by the U.S. Food and Drug Administration, FDA) to treat adults with high-risk PV, regardless of their treatment history. In addition, interferons can be used to treat patients with high-risk ET and low-risk MF, but are not recommended for patients with high-risk MF [67]. Anagrelide is frequently used to lower platelet counts in patients with high-risk ET demonstrating intolerance of, or experiencing complications with, hydroxyurea. There is no single treatment option that can effectively treat all patients with MPN. The choice of treatment, including the use of platelet-lowering agents, is based on risk factors including age, history of bleeding or thrombosis, vascular risk, severity of symptoms, and drug tolerance [65].
Allogeneic Stem Cell Transplantation (ASCT)
Allogeneic stem cell transplantation is curative for all MPN subtypes; however, it is limited to a subset of patients with MF, due to the associated expense and risk of transplant-related morbidity and mortality, such as hepatobiliary toxicities. It involves the transfer of HSCs from donor to patient, essentially to replace the defective stem cells with healthy cells. An overall survival (OS) rate of 41% and a disease-free survival rate of 32% at 10 years have been reported in a cohort of patients with MF who underwent ASCT [68].

A favorable OS and a low non-relapse mortality rate have been reported in patients with CALR and MPL mutation-positive PMF, post-PV MF, and post-ET MF who underwent ASCT [69,70]. However, identification of HMR mutations may aid in the selection of ASCT as a therapeutic option for patients with PMF, as some studies have associated lower survival rates with patients carrying one or more HMR mutations [71,72]. Thus, treatment decisions regarding ASCT should be individualized based on mutational status, MF risk assessment, and transplant eligibility criteria.

JAK inhibitors (JAKi)
The discovery of the JAK2 V617F oncogenic mutation contributed to the exploitation of the JAK2 tyrosine kinase as an important therapeutic target. Small-molecule inhibitors of JAK2 were developed for the treatment of patients intolerant of or refractory to hydroxyurea, particularly patients with PV, and numerous drugs that inhibit JAK2 are currently in clinical trials [65,73].

The JAK1/2 inhibitor ruxolitinib (Jakafi®) was approved in 2012 by the FDA. It represents a current standard of care for the treatment of symptomatic intermediate- to high-risk MF, including PMF, post-PV MF, and post-ET MF. It targets and partially inhibits the activity of JAK2 and the related protein JAK1, with clinical benefits in splenomegaly, mutant allele burden reduction, and symptom control [74,75]. Further, it increases overall survival and reduces the incidence of death in patients with HMR mutations; however, it does not prevent the onset of these HMR mutations in patients initially negative for them [76]. It is suggested that the administration of ruxolitinib be discontinued if there is no reduction in symptoms [13]. Figure 6 shows a schematic representation of the mechanism of action of ruxolitinib.

Fedratinib (Inrebic) was approved by the FDA in 2019 for treating adult patients with intermediate-2 or high-risk primary or secondary myelofibrosis [13]. It is an oral kinase inhibitor with activity against wild-type and mutationally activated JAK2 and FMS-like tyrosine kinase 3 (FLT3) (JAK2/FLT3), with an analogous type 1 mode of JAK2 binding. It inhibits JAK2 more selectively than the other JAK family members JAK1, JAK3, and TYK2 [65]. The drug has also shown activity in reducing splenic size and symptom burden in patients with intermediate/high-risk MF resistant or intolerant to ruxolitinib [77]. Ruxolitinib, fedratinib, or a clinical trial can be considered for patients with high-risk MF having a platelet count >50×10⁹/L who do not meet the criteria for ASCT [78,79]. In addition, exposure to ruxolitinib before ASCT is suggested to be associated with positive outcomes [80].

Pacritinib (Vonjo), also FDA-approved, shows a promising profile for anemic and thrombocytopenic patients with intermediate- or high-risk MF with platelet counts below 50×10⁹/L [13]. Pacritinib is yet another oral kinase inhibitor with activity against wild-type JAK2, mutant JAK2 V617F, and FLT3, which contribute to cytokine- and growth factor-mediated signaling that regulates hematopoiesis and immune function [65].
The disease-modifying potential of JAK2 inhibitors, however, remains limited and is further impeded by the loss of therapeutic responses in a substantial proportion of patients over time. The combination of novel compounds with JAK2 inhibitors is of specific interest to enhance the therapeutic efficacy of molecularly targeted treatment approaches, especially for patients with MF with poor survival rates [65,81]. Clinical studies are currently evaluating approaches of JAK2 inhibition in combination with inhibition of MEK-ERK or PI3K signaling, as well as interference with apoptosis regulation by Bcl-2/Bcl-xL inhibition using navitoclax. In vivo studies in mice showed that combined inhibition of JAK2, PI3K, and mTOR in JAK2 V617F-mutated cells causes reduction of both JAK2- and PI3K-mediated STAT5 phosphorylation, impairment of the clonogenic potential of JAK2 V617F-mutated hematopoietic progenitor cells, reduced splenomegaly, and reduced myeloid cell infiltration [15].

Patients suffering from excessive splenomegaly and ineligible for treatment with JAK2 inhibitors may require splenectomy, splenic irradiation, or partial splenic artery embolization for long-term control of disease symptoms, such as anemia and thrombocytopenia [82]. Other innovative treatment approaches under clinical investigation include telomerase inhibition, interference with telomere function, MDM2 inhibition, and modulation of the TP53 tumor suppressor function. Recent studies with epigenetic drugs, such as the histone deacetylase inhibitors givinostat and panobinostat and the hypomethylating agents azacitidine and decitabine, showed efficacy in treating PV and MPN transformed to AML, but were only minimally effective in treating MF. These efforts will add valid options to the molecularly targeted treatment approaches for patients with MPN [65].

Regulation of Apoptosis
An increased level of tumor necrosis factor-alpha (TNFα) is a characteristic feature of the pathobiology of MPN, especially in patients with MF, promoting the survival of malignant cells over normal cells [81]. Second mitochondria-derived activator of caspases (SMAC) mimetics, or inhibitor of apoptosis (IAP) antagonists, promote cancer cell death by activating caspases and cytokine-mediated apoptosis in tumor models with high TNFα expression. IAP antagonism by SMAC occurs through the binding of the N-terminal tetrapeptide (AVPI) of SMAC to IAP at selected domains. Hence, small-molecule compounds that mimic the AVPI motif of SMAC have been designed to overcome IAP-mediated apoptosis resistance of cancer cells [83].

LCL-161 is a small-molecule SMAC mimetic that inhibits multiple IAP family proteins. This oral agent is under phase II study in patients with primary, post-PV, or post-ET MF (NCT02098161). Preliminary results from 44 patients enrolled in this clinical trial showed an objective response rate (ORR) of approximately 30%, defined as complete remission (CR) + partial remission (PR) + clinical improvement [84,85].

The BCL-xL inhibitor navitoclax is a high-affinity, small, orally available molecule that inhibits the anti-apoptotic activity of BCL-xL. Navitoclax is under phase II clinical evaluation in combination with ruxolitinib for reduction of splenic size in patients with MF (NCT03222609) [86].
Targeting the Hematopoietic Micro-Environment
Heat shock protein-90 (Hsp90) is a chaperone protein found in normal and cancer cells. It stabilizes proteins against heat stress, facilitates protein degradation, and plays a prominent role in cancer. PU-H71, an oral Hsp90 inhibitor, is currently under trial assessing its safety and tolerability in patients with MPN previously treated with ruxolitinib (NCT03935555) [87].

Histone deacetylase inhibitors (HDACi) function as therapeutic agents by downregulating JAK2 through acetylation and interference with the chaperone function of Hsp90. Ongoing trials are evaluating the synergism of ruxolitinib in combination with HDACi such as givinostat and panobinostat [88,89].

TP53 modulation
TP53 is an unstable protein due to continuous MDM2-mediated degradation. It is a principal mediator of growth arrest and apoptosis. Upon oncogene activation, MDM2-mediated degradation is blocked and p53 is stabilized. Idasanutlin, an MDM2 antagonist, is under phase II study as monotherapy in patients with PV who are intolerant to therapy with hydroxyurea (NCT03287245), and in phase I in patients with PV and ET (NCT02407080) [90,91].

Another MDM2 antagonist, KRT-232, is under phase II study in phlebotomy-dependent patients with PV, assessing independence from phlebotomy and reduction in splenic size as compared to ruxolitinib (NCT03669965) [92]. It is also under phase II study in patients with MF unresponsive to treatment with JAKi (NCT03662126) [88,93].

Inhibitors of Fibrosis
A determining feature of PMF is progressive bone marrow fibrosis, which is associated with poor prognosis. Treatment with a fibrocyte inhibitor may interfere with the development of marrow fibrosis. One fibrocyte inhibitor, serum amyloid P (SAP; pentraxin-2), significantly prolongs survival and decelerates the development of marrow fibrosis in mouse models [94]. PRM-151, a recombinant form of SAP, is under phase II trial for bone marrow response rate (reduction in bone marrow fibrosis score by at least one grade according to WHO criteria) in patients with MF (NCT01981850) [95]. It acts at sites of tissue damage, thereby preventing and reversing fibrosis. Preliminary results showed 7 of 13 evaluated patients having a morphologic bone marrow response, with PRM-151 showing durable safety and efficacy at 72 weeks [81].

A selective aurora kinase A inhibitor, alisertib, eradicates atypical megakaryocytes and reduces marrow fibrosis. Alisertib was found to reduce splenomegaly and symptom burden in 29% and 32% of patients with MF, respectively, during a phase II clinical study; it also normalized megakaryocytes, reducing fibrosis in 5 of 7 patients [81,96].

TGF-β inhibitor
Transforming growth factor-beta (TGF-β) is a cytokine involved in the fibrotic phase of MF. Sotatercept is an activin receptor IIA ligand trap. It sequesters TGF-β ligands, such as GDF11, secreted by the bone marrow stromal cells, and improves erythroid differentiation, thereby ameliorating anemia [97]. A phase II study of sotatercept assessing anemia response (increase in hemoglobin and decrease in red blood cell transfusion requirement) in patients with MPN-associated MF and anemia (NCT01712308) showed an anemia response in 36% of the evaluated patients [98].

Furthermore, AVID200, an engineered TGF-β inhibitor found to decrease the fibrogenic stimuli leading to MF, is under phase I/Ib study in patients with intermediate-2 or high-risk MF (NCT03895112) [81,101].
Telomerase Inhibitors
Imetelstat, a competitive and potent telomerase inhibitor, inhibits multiplication and triggers apoptosis of cancer stem cells by binding to the RNA component of telomerase. This therapeutic agent, under phase II clinical study, showed molecular responses in patients with ET unresponsive to previous therapies [102]. Imetelstat is under phase II clinical trial in patients with intermediate- or high-risk MF previously treated with JAKi (NCT02426086), evaluating splenic and total symptom score reduction. Preliminary results showed complete remissions and a median survival of 19.9 months [81,101].

BET Inhibitors
Epigenetic mechanisms enhance the inflammatory milieu in MPN via activation of NF-kB signaling. Hence, bromodomain and extra-terminal (BET) protein inhibition is a potential approach to therapy. The BET inhibitor CPI-0610 is under phase II clinical study as monotherapy or in combination with ruxolitinib, assessing spleen size response and RBC transfusion independence in patients with MF (NCT02158858) [81,88,103].

Table 5 summarizes the therapeutic agents under clinical trials for the treatment of MPN. The majority of patients with MPN, with regular monitoring and treatment, live for many years without developing major symptoms. Prolonged survival can, however, be challenged in a minority of patients by the development of serious thromboembolic disorders, post-PV MF, and cancers, including acute leukemia. Hence, treatment aims to achieve symptom control and prevention of thrombotic and hemorrhagic complications [13].

Future Directions
Correlative studies have revealed associations of specific gene mutations with clinical presentation, progression dynamics, and outcome. More studies are needed to validate and confirm the prognostic and diagnostic significance of non-driver mutations of MPN, as this might help to achieve features reliably distinguishing MPN from other myeloid malignancies and drive the development of precision therapies to increase the survival outcomes of patients. Several pre-clinical combination treatments are currently being explored, and monotherapy with JAK inhibitors is under improvement. There is also an increasing urgency to adopt sensitive and specific NGS-based technologies in routine clinical practice to facilitate accurate diagnosis of patients with MPN, particularly those with triple-negative MPNs.

Figure 2. Structural domains of JAK2 showing Exon 12. The amino acid position indicated in blue represents the region of the most common Exon 12 mutations.
Figure 3. Structural domains of MPL showing Exon 10. EC, TM, and CD represent the extracellular, transmembrane, and cytoplasmic domains. The amino acid positions indicated represent the reported Exon 10 mutation hot spots; W515L or W515K in blue.
Figure 4. Structural domains of CALR showing the Type I (deletion) or Type II (insertion) mutations.
Figure 5. Acquired mutations associated with MPN pathogenesis showing constitutive activation of the JAK/STAT, MAPK, and PI3K/Akt pathways; driver mutations (JAK2, CALR and MPL) in red; other mutations in yellow.
Figure 6. Inhibitory effect of ruxolitinib on MPN progression; blue arrows indicate the normal JAK/STAT pathway and red arrows indicate the overactivation of the JAK/STAT pathway due to driver mutations.
Table 1. Simplified 2022 WHO diagnostic criteria for PV and post-PV MF; adapted from [6, 104].
+ As a refinement to the 2016 WHO diagnostic criteria, increased red cell mass has been removed as a major criterion. * Major criterion 3 may not be required in patients with sustained absolute erythrocytosis (hemoglobin >18.5 g/dL and hematocrit >55.5% in men, or hemoglobin >16.5 g/dL and hematocrit >49.5% in women) if major criterion 2 and the minor criterion are present.
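Reading the footnote literally, the "sustained absolute erythrocytosis" exception can be encoded as a simple threshold check. The sketch below covers the footnote's thresholds only, taking its "and" at face value; it is not the full 2022 WHO PV algorithm, which also requires the remaining major and minor criteria, and the function name is illustrative.

```python
def meets_absolute_erythrocytosis(hb_g_dl: float, hct_pct: float, sex: str) -> bool:
    """Check only the footnote's erythrocytosis thresholds; the full WHO
    diagnosis additionally requires major criterion 2 and the minor criterion."""
    if sex == "M":
        return hb_g_dl > 18.5 and hct_pct > 55.5
    return hb_g_dl > 16.5 and hct_pct > 49.5

print(meets_absolute_erythrocytosis(19.0, 56.2, "M"))  # True
print(meets_absolute_erythrocytosis(16.0, 48.0, "F"))  # False
```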
THE IMPACT OF AUTOMOBILE POLLUTED SOIL ON SEEDLING GROWTH PERFORMANCE IN SOME HIGHER PLANTS

The rapid increase in automobile density and the discharge of different types of pollutants from automobiles are a serious issue for the whole civilized world, including Bhakkar. Vehicle emissions release enormous quantities of toxic pollutants, such as nitrogen dioxide, sulfur dioxide, carbon monoxide, and heavy metals, particularly lead and cadmium, into the environment, producing harmful effects on the germination and growth of plants. This study aimed to investigate the effect of automobile-polluted soil on the growth of some tree species. In the present study, the variation in seedling growth performance of three selected tree species, namely Acacia nilotica L., Albizia lebbeck L. (Benth.), and Eucalyptus globulus Labill., raised in polluted roadside soils of District Bhakkar, Pakistan, was recorded in pots. Results showed that root, shoot, and seedling length, number of leaves, and seedling dry weight of Acacia nilotica grown in the polluted soil of Bhakkar-Khansar road showed a significant (p<0.05) decline. Statistical analysis of the recorded data showed that root growth and leaf area of Albizia lebbeck in the soil of Bhakkar-Khansar road significantly (p<0.05) decreased as compared to the control. Similarly, a significant (p<0.05) reduction in shoot and seedling length, number of leaves, leaf area, and seedling dry weight of E. globulus in the polluted soil of Bhakkar-Notak road was recorded.

Introduction
Vehicles transport passengers and goods from one place to another and have made life easy and convenient. Motor vehicle traffic is a major source of air pollution in urban areas, contributing 57-75% of total emissions [WHO, 2006]. Automobile activities emit various compounds, such as oxides of nitrogen and sulphur, hydrogen fluorides, hydrocarbons, particulate matter, peroxyacetyl nitrates, and heavy metals, into the atmosphere, which have harmful effects on humans, animals, and trees [UABOI-EGBENNI & al. 2009]. The use of diesel and petrol fuel in automobiles contributes various pollutants to the air at different concentrations, depending upon the operating conditions of the automobiles [COLVILE & al. 2000]. In China, vehicles contributed only 7.2% of emissions in 1995, but this share was projected to grow to 11.3% by 2020 [STREETS & al. 2001]. The annual increase in vehicles in Pakistan is estimated at 37% [ILYAS, 2007]. Water and carbon dioxide are produced by the complete combustion of petrol and diesel, but usually incomplete combustion occurs, giving rise to various solid particles, liquids, and gases [ANDA & ILLES, 2012]. Different plant species vary in the extent of their response to vehicle pollutant exposure. Researchers claim that vehicle emissions are responsible for increasing the level of toxic pollutants in the environment, due to the ever-increasing number of automobiles [SULISTIJORINI & al. 2008; KABIR & al. 2012; SHAFIQ, 2002; SHAFIQ & IQBAL, 2012], and are ultimately negatively affecting the germination and growth of plants. ZHAO & al. (2009) reported unfavorable effects of air pollution on the growth of plants, which might be due to poisonous substances released from automobiles. The effect of automobile-polluted soil on early seedling growth performance and biomass production has also been reported for neem (Azadirachta indica A. Juss.) [PARVEEN & al. 2016].

Species description
Acacia nilotica (L.) Willd. ex Delile is a synonym of Vachellia nilotica (L.) P. J. H. Hurter & Mabb.
[WFO, 2022]. It belongs to the family Fabaceae and is used in afforestation in forestry. It commonly grows 3-15 meters high, or sometimes as low as 1.5 meters. The seeds germinate after a period of warm, moist conditions following scarification [PARSONS & CUTHBERTSON, 1992]. Its wood is useful in the fuel wood, charcoal, paper, and medicine industries [KANAK & SAHAI, 1994]. When the plant is young, the bark is whitish, but it changes to dark gray at maturity; the tree has a deep taproot system with branching surface lateral roots [COX, 1997; MACKEY, 1997]. The fruit of Acacia nilotica is a leathery pod, and the color of the pod varies from brown to dark gray, straight to curved, and glabrous or velvety [BROWN & CARTER, 1998]. Growth rates are variable; it may mature in nine months under good environmental conditions or take up to 13 years under harsh conditions [KRITICOS & al. 1999]. Acacia nilotica helps to improve the rural economy by providing fodder, timber, fuel, gum, and medicines. This tree also plays a role in increasing soil fertility under its canopy [PANDEY & al. 2000]. Acacia nilotica is used in bridges, railway sleepers, sports goods, the building of boats, carts, and carriages, and the construction of doors, window frames, decorative cabinets, and carpentry work [KUMAR & KUMUD, 2010]. The distribution of Acacia nilotica includes Africa and the Indian subcontinent; in Pakistan it is also planted along roadsides as a shade tree and along field borders as shelterbelts and windbreaks. It is of great value at both national and international levels for timber, decorative wood, and aromatic oil.

Albizia lebbeck L. (Benth.) is a member of the family Fabaceae and subfamily Mimosaceae. Albizia lebbeck is commonly called the siris tree, shrin, and vaagei. It is a deciduous woody tree cultivated in gardens as an ornamental plant, along roadsides as a shade tree, on irrigated plantations, and in farmlands. This deciduous tree is found all over the world, especially in Pakistan, India, Bangladesh, and tropical and subtropical Africa and Asia [AHMAD & BEG, 2001]. It is a large, multi-stemmed tree with a widespread canopy (30 m). Albizia lebbeck is used as a high-quality fodder crop for animal feed. The tree has a shallow and extensive root system, making it helpful in soil conservation through erosion control [PRINSEN, 1986]. Albizia lebbeck is a valuable timber species also used for furniture, flooring, carving, posts, and various kinds of agricultural implements. The bark contains 15% tannin, used in the tanning and dyeing industries. Due to its high saponin content, it is also used in detergents [VARSHNEY & BADHWAR, 1970]. Its bark produces a reddish-brown gum used as a substitute for gum arabic [FAROOQI & KAPOOR, 1968]. The seed oil is used in the treatment of lesions in leprosy [RAGUPATHY & MAHADEVAN, 1991].

Eucalyptus globulus Labill. is a tall tree and a member of the family Myrtaceae. Most Eucalyptus species are tall trees, with heights of up to 100 meters and girths of 20 meters. Almost all species are evergreen, and very few are deciduous [POHJONEN, 1989]. E. globulus is tolerant of moisture stress and low soil fertility. E. globulus is planted in gardens, along roadsides, and in parks. It is also useful for fuel wood, charcoal, timber, plywood, paper pulp, oil, fiberboard, tannin, shade and shelter, as a source of nectar for honey, and for ornamental purposes [MOGES, 1998]. The ever-increasing vehicle density is producing environmental pollution issues and is affecting the growth of roadside plants.
No scientific study is available on the effect of the automobile-polluted soil of Bhakkar on plant growth. Keeping in view the constant increase in traffic activity, which is polluting the soil of the area, the current research was conducted with the aim of comparing the effects of automobile-polluted soil on three economically important tree species of Pakistan, namely Acacia nilotica L. Willd. ex Delile, Albizia lebbeck L. (Benth.), and Eucalyptus globulus Labill.

Description of experimental site
Bhakkar is the principal city of Bhakkar District, located in Punjab, Pakistan. It lies on the left bank of the Indus river. It stands on the edge of the Thal, or sandy plain, overlooking the low-lying alluvial lands along the river, a channel of which is navigable as far as Bhakkar during the floods. To the west of the town the land is low, well cultivated, and subject to inundation, while to the east the country is high and dry, treeless, and sandy. A rich extent of land irrigated from wells lies below the town, protected by embankments from inundations of the Indus, and produces two or three crops in the year (Figure 1).

Composite soil samples were collected from each site at equal distances. The soil samples were taken to the laboratory in polythene bags and kept at room temperature for drying. All collected soil samples were slightly crushed and passed through a 2 mm sieve to obtain an equal particle-size distribution. The air-dried soil was then shifted into clean polythene bags, labeled, and stored in the laboratory. Weekly climatic data for District Bhakkar during the growth experiments (01-06-2018 to 31-07-2018) were recorded (Table 1).

The experiment on the influence of polluted soil collected from five different roadside sites (namely A = University Sub-Campus road, B = Bhakkar-Darya Khan road, C = Bhakkar-Jhang road, D = Bhakkar-Notak road, and E = Bhakkar-Khansar road) on the seedling growth of Acacia nilotica L. Willd. ex Delile, Albizia lebbeck L. (Benth.), and Eucalyptus globulus Labill. was conducted at the Department of Biological Sciences, University of Sargodha, Sub-Campus Bhakkar (Punjab, Pakistan), under natural environmental conditions in pots.

Vigorous, healthy seeds of uniform size of Acacia nilotica, Albizia lebbeck, and E. globulus were obtained from a local national seed store of Bhakkar. The seeds were surface-sterilized with 0.20% sodium hypochlorite (NaOCl) solution for two minutes to avoid any fungal contamination and washed thoroughly with distilled water. The micropylar end of the seeds of these plant species was slightly cut with sterile scissors to break external seed dormancy. Ten seeds were sown at 1.00 cm depth in earthen pots containing the soil of the different polluted roads (A = University Sub-Campus road as control, B = Bhakkar-Darya Khan road, C = Bhakkar-Jhang road, D = Bhakkar-Notak road, and E = Bhakkar-Khansar road). The earthen pots were watered regularly. Two weeks after seed germination, seedlings of equal size were transplanted into plastic pots 9.8 cm deep and 7.00 cm in diameter. There were three replicates of each plant species' seedlings for each polluted roadside soil. One seedling was transplanted into each plastic pot, and the seedlings were watered regularly. The pots were also reshuffled every week to prevent shading or any other positional environmental effect.
At the completion of the experiment (eight weeks), the seedlings were removed from the plastic pots, their roots were washed with fresh water, and the root, shoot, and seedling lengths and leaf area were measured with the help of an iron scale; the number of leaves was also counted. Seedling fresh weight was determined with the help of an electrical balance. After that, the seedlings were dried in a thermostatic drying oven at 80 °C, and the oven-dried weights of leaves, root, shoot, and whole seedling were also determined using the electrical balance. The root/shoot ratio, leaf weight ratio, leaf area, specific leaf area, and leaf area ratio were also determined by the formulas given by ATIQ-UR-REHMAN & IQBAL (2009); a code sketch of these calculations is given below.

Statistical analysis
Data for the different growth parameters were analyzed statistically by analysis of variance (ANOVA) and Duncan's Multiple Range Test (Duncan, 1955) at the p<0.05 level on a personal computer.

Results and discussion
The transport sector is an important source of environmental pollution. The chaotic and rapid vehicle growth is producing massive environmental pollution issues and is affecting not only the growth of plants but might also be influencing different characteristics of the soil of the area. The influence of polluted soil collected from five different roadside sites (namely A = University Sub-Campus road, B = Bhakkar-Darya Khan road, C = Bhakkar-Jhang road, D = Bhakkar-Notak road, and E = Bhakkar-Khansar road) on the seedling growth and seedling dry weight of Acacia nilotica (L.) Willd. ex Delile, Albizia lebbeck L. (Benth.), and Eucalyptus globulus Labill., with some variation, was recorded (Tables 2-10).

Statistical analysis of the recorded data showed that the root, shoot, and seedling length and the number of leaves of Acacia nilotica were significantly (p<0.05) reduced in the soil of Bhakkar-Khansar road as compared to the other soil treatments (Table 2). The significant reduction in the seedling growth of Acacia nilotica was considered mainly dependent upon the pollutants released from automobiles. Air pollution directly affects plants via the leaves or indirectly via soil acidification [LIU & DING, 2008]. Root length, seedling length, number of leaves, and seedling dry weight of Acacia nilotica grown in Bhakkar-Jhang road soil were recorded as significantly greater compared to University Sub-Campus road, Bhakkar-Darya Khan road, Bhakkar-Notak road, and Bhakkar-Khansar road, showing some degree of tolerance to soil pollution (Table 3). Seedling fresh weight was significantly (p<0.05) higher in plants grown in the soil of Bhakkar-Jhang road (0.32 g) as compared to the control, while the other three polluted roads showed a significant (p<0.05) reduction relative to the control. The maximum seedling dry weight, recorded as 0.23 g in Bhakkar-Jhang road soil, was significantly (p<0.05) greater than that of the control soil (0.20 g). The seedling fresh and dry weights of Acacia nilotica showed significant (p<0.05) variations among the different polluted roadside soils. The reduction in biomass of Acacia nilotica may be due to an imbalance in carbon dioxide exchange, as a result of which photosynthetic activity is reduced [SHAFIQ, 2002]. Only Bhakkar-Darya Khan road soil showed a non-significant result compared with the control. Specific leaf area of Acacia nilotica raised in Bhakkar-Jhang road soil demonstrated a significant (p<0.05) increase over the other polluted roadside soils (Table 4). Better root/shoot ratio, leaf weight ratio, and leaf area ratio of Acacia nilotica were found in Bhakkar-Khansar road soil as compared to the other polluted roadside soils.
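A minimal sketch, with invented measurements, of the derived growth indices and the one-way ANOVA described in the Methods above. The index definitions follow the standard growth-analysis formulas (specific leaf area = leaf area / leaf dry weight; leaf area ratio = leaf area / total plant dry weight); Duncan's Multiple Range Test would follow the ANOVA for pairwise grouping, but it is not implemented in SciPy and is omitted here.

```python
from scipy import stats

def growth_indices(root_dw, shoot_dw, leaf_dw, leaf_area):
    """Standard growth-analysis indices (dry weights in g, leaf area in cm^2)."""
    total_dw = root_dw + shoot_dw + leaf_dw
    return {
        "root_shoot_ratio": root_dw / shoot_dw,
        "leaf_weight_ratio": leaf_dw / total_dw,    # g g^-1
        "specific_leaf_area": leaf_area / leaf_dw,  # cm^2 g^-1
        "leaf_area_ratio": leaf_area / total_dw,    # cm^2 g^-1
    }

print(growth_indices(root_dw=0.24, shoot_dw=0.54, leaf_dw=0.07, leaf_area=8.76))

# One-way ANOVA across soil treatments (three replicates each; invented data).
control = [10.1, 9.8, 10.4]
jhang_road = [12.0, 11.6, 12.3]
khansar_road = [7.2, 6.9, 7.5]
f_stat, p_value = stats.f_oneway(control, jhang_road, khansar_road)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> treatment means differ
```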
Plants growing along roadsides continuously face different challenges, which cause variations in biochemical processes, total chlorophyll contents, and the storage of some metabolites [AGBAIRE & ESIEFARIENRHE, 2009]. Different plant species vary in the extent of their response to vehicle pollutant exposure. This variation in the seedling growth of the selected plant species may be related to the amount of vehicle pollutants [HONOUR & al. 2009]. Our results agree with the findings of IQBAL & SHAZIA (2004), who reported decreases in root, shoot, and seedling length, along with fresh and dry weight, of Albizia lebbeck exposed to different vehicle pollutants (Tables 5-6). In another study, the seedling growth of Albizia lebbeck and Pongamia pinnata raised in polluted road soils showed significant (p<0.05) reductions in root, shoot, and seedling length [QADIR & IQBAL, 1991]. The results of the present research work indicate that the soil of the study area might be further disturbed in the future due to the emission and settling of toxic pollutants from vehicles. The observations recorded in the present study clearly indicated that pollutants emitted from automobile exhaust exercised a decisive influence on the seedling growth of Albizia lebbeck. The significance of germination and seedling growth is an extensively recognized factor in plant growth. A significant variation in the seedling growth of Albizia lebbeck raised in the different polluted roadside soils of District Bhakkar was recorded. Statistical analysis of the recorded data showed that the polluted soils influenced the root, shoot, and seedling length, number of leaves and leaf area, root, shoot, and leaf dry weight, seedling fresh and dry weight, root/shoot ratio, leaf weight ratio, specific leaf area, and leaf area ratio of Albizia lebbeck. Statistical analysis of the recorded data showed that the seedling length and number of leaves of Albizia lebbeck were significantly (p<0.05) reduced in the soil of Bhakkar-Notak road. The shoot length, seedling length, and number of leaves of Albizia lebbeck were significantly (p<0.05) greater in the soil of Bhakkar-Jhang road (Table 5).

In the present study, the seedling dry weight performance of Albizia lebbeck responded differently when raised in the different polluted roadside soils of District Bhakkar (Table 6). A significant reduction in seedling fresh and dry weight and root, shoot, and leaf dry weight of Albizia lebbeck was recorded in the soil of Bhakkar-Notak road. Leaf weight ratio was significantly (p<0.05) high (0.07) in the control soil. The seedlings of Albizia lebbeck showed better growth in the soils of Bhakkar-Jhang road and Bhakkar-Darya Khan road as compared to the control and the soils of the other roads. All the growth variables were significantly (p<0.05) reduced in the soils of Bhakkar-Notak road and Bhakkar-Khansar road, indicating the species' lower tolerance of and adaptability to these polluted soils. The seedling length of different plant species exhibits reductions in root and shoot length, as these parts are exposed either directly or indirectly to the automobile pollutants present in the soil [ALLOWAY & AYRES, 1997]. The maximum value of specific leaf area was recorded in Bhakkar-Darya Khan road soil (674.44 cm² g⁻¹) and Bhakkar-Jhang road soil (672.00 cm² g⁻¹) as compared to the control (573.85 cm² g⁻¹). Leaf area ratio was high in seedlings of the control soil (37.30 cm² g⁻¹), and the minimum was recorded in Bhakkar-Notak road soil (25.00 cm² g⁻¹). The toxic nature of the pollutants available in soil usually varies from soil to soil and ultimately affects the growth of plants.
The reduction trend in the different growth variables was not the same but changed from soil to soil. Plants do not exhibit a similar trend of susceptibility to pollutants. Major variations in the response of plants to airborne pollutants have also been reported by JACOBSON & HILL (1970). Long periods of exposure to even low concentrations of automobile pollutants create destructive effects on seed germination and plant growth, with visible injury [JOSHI & SWAMI, 2009]. Leaf area was significantly (p<0.05) high in the seedlings raised in the control (8.76 cm²), while the other polluted soils showed significant reductions in this parameter. Fresh (1.28 g) and dry weight (1.12 g) of seedlings were significantly (p<0.05) high in the soil of Bhakkar-Jhang road as compared to the control (0.91 and 0.63 g, respectively). A significant (p<0.05) increase was observed in the root dry weight of seedlings grown in Bhakkar-Jhang road soil and Bhakkar-Darya Khan road soil, recorded as 0.34 and 0.28 g respectively, which was greater than the control (0.24 g). Bhakkar-Khansar road soil (0.23 g) showed non-significant results compared with the control. The maximum shoot dry weight (0.78 g) of seedlings was recorded for Bhakkar-Jhang road soil, which differed significantly (p<0.05) from the control (0.54 g). Leaf dry weight of seedlings grown in the soils of Bhakkar-Jhang road and Bhakkar-Darya Khan road was high (0.08 g) when compared with the control (0.07 g). In our findings, seedling fresh weight and dry weight showed significant (p<0.05) variations among the different polluted roadside soils. POWELL & al. (1996) reported that seedling fresh weight and dry weight are reduced under polluted environments. Both increases and decreases in the biomass of seedlings were also recorded by NAWAZ & al. (2006).

Table 7 shows significantly (p<0.05) high values of root/shoot ratio in Bhakkar-Khansar road soil (0.55) and Bhakkar-Notak road soil (0.49) as compared to the control (0.44). Bhakkar-Jhang road soil (0.44) showed a non-significant result compared with the control. Seedlings developed in the control soil showed a significant (p<0.05) increase in leaf weight ratio (0.09), followed by Bhakkar-Darya Khan road soil (0.08), while a prominent reduction was observed in Bhakkar-Khansar road soil (0.06) relative to the control. The highest value of specific leaf area (179.67 cm² g⁻¹) was recorded in the seedlings grown in Bhakkar-Notak road soil as compared to the control (125.14 cm² g⁻¹). A considerable amount of arsenic has been found in air particulates and in diesel exhaust particulates [TALEBI & ABEDI, 2005]. In comparison, the shoot height and root length of wheat were found to be more sensitive to arsenic and might be used as indicators of arsenic toxicity [LIU & al. 2005]. The other three roads showed significantly lower results than the control. The leaf area ratio of Albizia lebbeck developed in the control soil (11.23 cm² g⁻¹) was greater, while the other polluted road soils showed significant (p<0.05) reductions in this parameter. Among the most important parts of plants, the leaf is the part most receptive to, and most badly affected by, automobile pollutants. In our research work, all parameters related to the leaf, which include number of leaves, leaf area, specific leaf area, leaf weight ratio, leaf dry weight, and leaf area ratio, were reduced significantly (p<0.05) in seedlings raised in polluted roadside soils. So, the leaf at all stages of growth acts as the best indicator of different automobile contaminants [SHAFIQ & al. 2009].
These pollutants are responsible for stomatal clogging, leaf injury, senescence and reduction in leaf weight [TIWARI & al. 2006]. A reduced leaf area lowers the radiation absorbed and subsequently reduces photosynthesis; hence the decline in the fresh and dry weight of leaves is directly related to harmful vehicle pollutants. Our results are supported by the work of SIBAK & GULYAS (1990), who noted a decline in leaf size due to automobile pollutants present in the environment. The seedling growth of Eucalyptus globulus Labill., in terms of root, shoot and seedling length, number of leaves, leaf area, seedling dry weight, root/shoot ratio, leaf weight ratio and specific leaf area, differed between the polluted and non-polluted soils of District Bhakkar (Tables 8-10); this may depend mainly on the nature of the pollutants released from automobiles. Significant (p<0.05) reductions in shoot length, seedling length, number of leaves and leaf area of E. globulus were recorded in the soil of Bhakkar-Notak road as compared to the control. Bhakkar-Jhang road soil showed a significant increase in the root length (5.76 cm) of E. globulus as compared to the control (4.03 cm). The number of leaves was significantly (p<0.05) higher in Bhakkar-Darya Khan road soil (10.23) and Bhakkar-Jhang road soil (9.42) than in the control (8.80). Root dry weight (0.24 g), shoot dry weight (0.62 g) and leaf dry weight (0.12 g) were significantly (p<0.05) high in Bhakkar-Darya Khan road soil, while the other polluted road soils showed reductions in these parameters as compared to the control (0.16, 0.32 and 0.07 g respectively). The importance of the soil-root-shoot pathway for the remediation of sites contaminated with polyaromatic hydrocarbons (PAHs) has been reported [SCHWAB & DERMODY, 2021]. Root/shoot ratio, leaf weight ratio, specific leaf area and leaf area ratio of E. globulus were influenced by the automobile-polluted soil treatments (Table 10). The root/shoot ratio of E. globulus was higher in seedlings established from the soils of Bhakkar-Khansar road (0.67) and Bhakkar-Notak road (0.56) when compared with the control (0.50), while Bhakkar-Jhang road soil (0.48) did not differ significantly from the control. Leaf weight ratio was high in seedlings raised from Bhakkar-Notak road soil (0.21) and Bhakkar-Khansar road soil (0.17) as compared to the control (0.15); this parameter showed a significant (p<0.05) reduction in the soils of the other polluted sites. Specific leaf area was significantly (p<0.05) highest in the control (91.43 cm² g⁻¹), followed by Bhakkar-Khansar road soil (64.50 cm² g⁻¹), while the other three polluted roadside soils showed reductions in this parameter. Leaf area ratio was significantly (p<0.05) greater in the control (13.33 cm² g⁻¹), followed by Bhakkar-Khansar road soil (11.06 cm² g⁻¹), Bhakkar-Notak road soil (8.72 cm² g⁻¹) and Bhakkar-Jhang road soil (7.44 cm² g⁻¹); Bhakkar-Darya Khan road soil (6.08 cm² g⁻¹) had the lowest value of leaf area ratio. Changes in soil characteristics influence plant growth and development. The seedling growth performance of Acacia nilotica was significantly decreased in the polluted soils of Bhakkar-
Conclusion
Soil pollution due to the release of pollutants from vehicle emissions affects plant growth.
It is concluded that, owing to variation in resistance and sensitivity to automobile-polluted soils, significant changes in the growth variables of the selected woody plant species were recorded. The results of the present study confirm that automobile activities have polluted the soil of some areas of Bhakkar and that this affects the seedling growth performance of the three selected plant species, Acacia nilotica, Albizia lebbeck and E. globulus, differently. Acacia nilotica and Albizia lebbeck seedlings flourished well in the soils of Bhakkar-Jhang road, whereas the seedling length of Acacia nilotica and Albizia lebbeck raised in Bhakkar-Khansar and Bhakkar-Notak road soils was greatly decreased. The recorded data also showed that the seedling length of E. globulus increased significantly (p<0.05) in the soil of Bhakkar-Darya Khan road and progressively decreased in Bhakkar-Notak road soil. Eco-friendly organizations should be established in the city so that the problems of automobile pollution can be brought to the attention of citizens, and advanced techniques and the enforcement of environmental protection laws should be implemented to reduce the level of automobile pollution.
Evaluating the influence of music at different sound pressure levels on medical students' performance of standardized laparoscopic box training exercises

Background
The influence of music on the performance of surgical procedures such as laparoscopy is controversial and methodologically difficult to quantify. Here, outcome measurements using laparoscopic box training tools under standardized conditions might offer a feasible approach. To date, the effect of music exposure at different sound pressure levels (SPL) on outcome has not been evaluated systematically for laparoscopic novices.

Methods
Between May 2017 and October 2018, n = 87 students (49 males, 38 females) from Heidelberg University Medical School performed three different laparoscopy exercises using the "Luebecker Toolbox" that were repeated twice under standardized conditions. Time was recorded for each run. All students were randomly assigned to four groups exposed to the same music compilation but at different SPLs (50-80 dB), an acoustically shielded (earplug) group, or a control group (no intervention).

Results
Best absolute performance was shown under exposure to 70 dB in all three exercises (a, b, c), with mean performance times of 121, 142, and 115 s (p < 0.05 for a and c). For the control group, mean performance times were 157, 144, and 150 s, respectively. In the earplug group, no significant difference in performance was found compared to the control group (p > 0.05) except for exercise (a) (p = 0.011).

Conclusion
Music exposure seems to have beneficial effects on training performance. In comparison to the control group, significantly better results were reached at 70 dB SPL, while exposure to lower (50 or 60 dB) or higher (80 dB) SPL as well as acoustic shielding did not influence performance.

Background
Laparoscopy represents the standard approach for many gynecologic surgery procedures because of its advantages over conventional laparotomy [1]. However, one major factor influencing outcome and efficiency in laparoscopic surgery is training, especially for novices before performing surgeries autonomously [2]. In contrast to a sheltered training setting, there are many distractors that affect novices in particular in the actual surroundings of an operating room (OR) and raise stress levels during surgery. One of them might be exposure to music played in the OR, which affects beginners more than experienced surgeons [3]. In many hospitals, it is common practice to play music in the OR during surgical procedures [4,5]. Findings have shown that music might affect cognitive performance [6], but limited data exist on its influence on surgical performance. The influence of acoustic factors such as music on the performance of laparoscopic techniques is controversial and methodologically difficult to quantify [7]. Here, outcome measurements performed by surgical novices using laparoscopic box training tools under standardized conditions might offer a feasible approach to answer this question. Earlier studies of laparoscopic training show that the learning curve is prolonged for novice and intermediate surgeons [8] and that different learning curves exist, depending on the level of laparoscopic training [9]. There is a consensus that educational activities should be intensified and that an assessment of surgeons' skills could be introduced in order to ensure that the quality of treatment is adequate, especially for training residents in gynecology [10].
Here, simulators are accepted as an important means of training and as an objective assessment of psychomotor performance. Standardized tasks can be practiced repeatedly, and simulators provide unbiased and objective measurement of surgical performance. Thus, simulation-based training has become increasingly relevant in laparoscopic gynecologic surgery training. In this context, both box trainers and virtual reality simulators seem to be equally effective as a means of teaching laparoscopic skills to novice learners before entering the OR [11]. In this study we applied standardized exercises on box trainers to evaluate the performance of laparoscopic novices who were exposed to music at different sound pressure levels. To our knowledge, this effect on outcome has not yet been evaluated systematically.

Participants and study design
For the study we recruited students at Heidelberg University Medical School at the end of their clinical curriculum who were participating in a 4-week module in Obstetrics and Gynecology at our hospital [12]. During these modules we offered a voluntary 90-min laparoscopic training course at our in-house skills lab. All module participants were eligible. The number of participants in each training course varied among the dates offered, with a minimum of 4 and a maximum of 8 participants at the same time, accompanied by one tutor. As a training tool we used the commercially available "Luebecker Toolbox" (LTB Ltd., Luebeck, Germany; http://www.luebeck-toolbox.com). This system consists of a laparoscopy training box including an integrated camera (connected to a monitor), four standardized modules with the possibility to perform six different exercises, as well as associated didactic videos that are available online [13]. We chose three exercises for the purpose of our study: (a) "Pack Your Luggage" (PYL), (b) "Chinese Jump Rope" (CJR), and (c) "Weaving" (WEA). A detailed description of the exercises has already been published by Laupert et al. [13,14]. These three exercises offer specific training in instrument handling, hand-eye coordination, and bimanual and crossing instrument use. All participants were given a short oral introduction on general aspects of gynecologic laparoscopic surgery and viewed the instructional video once for each exercise in order to standardize the training procedure. For students with a dominant left hand, tasks were performed in the opposite direction. Original grasping forceps were used to perform the tasks (Karl Storz SE & Co. KG, Tuttlingen, Germany). All three exercises were performed in the same order and each exercise was repeated three times in a row. There was no training session beforehand. Performance was evaluated by measuring the time needed to complete each exercise. For the purpose of this study, all students were randomly assigned to different groups (at the time of the student's registration for the course): either to one of the intervention groups, who were exposed to the same music compilation ("Deep House Autumn Mix 2017 - The Best Of Vocal Deep House Nu Disco Music", found on YouTube, uploaded Oct 08th 2017) but at different sound pressure levels (SPL), i.e., exposure at 50 dB vs. 60 dB vs. 70 dB vs. 80 dB, or to the noise-shielded group with no music exposure, using conventional foam earplugs (ISO 4869). This additional intervention group was set up to exclude the effect of surrounding noise in comparison to the control group.
In the control group the participants performed all exercises under exposure to regular surrounding noise (i.e., talking, noise deriving from instrument handling, etc.). This music compilation was chosen because this type of music is often chosen by surgeons at our institutions for its monotonous rhythm and reserved use of vocal parts. The study was conducted under standardized conditions, i.e., music exposure came from one standardized source (SoundLink Mini Bluetooth Speaker; Bose, Germany) in the middle of a closed rectangular room measuring 9 × 3.5 m. All participants were at the same distance of around one meter from the sound source. The respective SPLs for each run were measured constantly with a calibrated sound pressure meter (WT1357, Akozon Ltd., P.R. China) and recorded every minute during each exercise. Discrepancies in sound pressure levels were adjusted immediately by the tutor. After finishing the exercises, the participants were asked to complete a short survey on statistical factors that might be related to performance in this study (e.g., handedness, former experience in laparoscopic surgery during an internship, etc.) as well as to evaluate the exercise. The study was designed as a prospective trial and approved by the Heidelberg University Medical School ethics committee (Register No. S618/2017). All study participants provided written informed consent. Course and study participation were voluntary. Participants were assigned to one of the study groups randomly during the online registration period for the laparoscopy training course according to the time of registration on the different dates offered throughout the semester. All data were collected prospectively and handled anonymously for statistical analyses.

Statistical analyses
Collected data were transferred to a database in Excel (Microsoft Corporation; Redmond, USA). The statistical analysis was carried out using SPSS® (SPSS Inc., IBM Corporation; Chicago, Illinois, USA). All values given are means, ranges, and standard deviations. Significance testing was carried out using the t-test for independent samples to compare mean values. A p-value < 0.05 was considered statistically significant.

Study participants
In total, n = 87 students took part in our study, 38 females (43.7%) and 49 males (56.3%). All participants were undergraduate students at Heidelberg University Medical School, most of them in the last clinical semester (9th clinical semester: 51%), i.e., at the end of their 5th year. Most of them did not have any practical experience in laparoscopic surgery (67.8%). Around 15% were left-handed and 85% right-handed. Between study groups, no significant differences were detected concerning age, gender, handedness, or previous laparoscopy training experience. Table 1 shows detailed characteristics.
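As a hedged illustration of the significance testing described above (an independent-samples t-test with a threshold of p < 0.05), the following sketch compares the completion times of two groups; the arrays are invented placeholders, not the study's raw data:

```python
# Sketch of the analysis in "Statistical analyses": independent-samples
# t-test on mean completion times. The times below are invented
# placeholders, not data from this study.
import numpy as np
from scipy import stats

times_70db = np.array([118, 125, 110, 130, 122, 115, 127], dtype=float)  # s
times_ctrl = np.array([155, 160, 148, 162, 150, 158, 166], dtype=float)  # s

t_stat, p_value = stats.ttest_ind(times_70db, times_ctrl)
print(f"mean 70 dB group: {times_70db.mean():.0f} s, "
      f"mean control group: {times_ctrl.mean():.0f} s")
print(f"t = {t_stat:.2f}, p = {p_value:.4f} (significant if p < 0.05)")
```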
Performance under music exposure at different SPL compared to the control group
In exercise (a), the best absolute mean performance was measured under exposure at 70 dB (121 s), which was significantly better than the control group (157 s; p = 0.007). Furthermore, the group exposed to music at 80 dB (125 s) and the sound-shielded group (122 s) were significantly faster than the control group (p = 0.011). For exercise (b), no significant difference in performance between the groups was found, although the 70-dB group performed best (142 s). In exercise (c), the 70-dB group (115 s) again performed significantly better than the control group (150 s; p = 0.010). All other intervention groups showed no significant differences compared to the control group. Table 2 and Figs. 1a-c present detailed results for the three laparoscopic exercises.

Relative performance improvements between first and third run
The relative improvements seen in the third run in comparison to the first run for all exercises are shown in Fig. 2. The highest relative improvements were seen for exercise (a) at 60 dB (42.7%), for exercise (b) at 70 dB (28.5%), and for exercise (c) at 80 dB (39.1%). However, the t-test for independent samples did not show statistical significance (p > 0.05). Furthermore, overall improvements between the first and third run were calculated for the different SPLs across all exercises; the highest relative improvement was seen at 60 dB, with 32.2%. Detailed results are presented in Fig. 3.

Subjective music perception
Concerning post-hoc perception, 81.4% of all participants in the intervention groups did not feel distracted by the music during the exercises, with a significant difference between the groups (50 dB: 0% vs. 60 dB: 41.6% vs. 70 dB: 0% vs. 80 dB: 50.0%). Moreover, overall course satisfaction was high (9.2 out of 10.0 points).
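The relative improvement reported above is presumably the percentage reduction in completion time between the first and third run; under that assumption, the computation reduces to a one-line formula:

```python
# Relative improvement between first and third run, assumed to be the
# percentage reduction in completion time (placeholder values only).
def relative_improvement_pct(t_first_s: float, t_third_s: float) -> float:
    return (t_first_s - t_third_s) / t_first_s * 100.0

print(f"{relative_improvement_pct(200.0, 135.6):.1f}%")  # -> 32.2%
```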
Discussion
The positive effect of music in motivating people has already been studied in a wide variety of areas of life, especially in physical leisure activities [15]. In therapeutic settings, too, the beneficial effects of music are used, for example, for pain management [16] or supportive therapy in cancer patients [17]. From the perspective of the occupational setting, music is also played at work: in many hospitals, it is common practice to play music in the OR during surgical procedures [4,5]. Here, music is played in the background for the purpose of entertaining surgical staff, which must be distinguished from music being played in the preoperative setting to reduce anxiety among patients before surgery [18]. The decision as to whether music will be played in the OR, and the kind of music chosen, is predominantly the privilege of the senior surgeon [19]. The effect of music in this specific occupational setting is difficult to quantify. For other, non-hospital occupational settings, a study showed that background music is likely to reduce worker attention and performance [20]. For the OR setting, findings have shown that music is one of several mental distractors that might influence surgical performance negatively, but results differ [21,22]. A recent meta-analysis stated that the evidence is not sufficient to definitively determine whether music has a beneficial effect on surgical performance in a simulated setting [23]. When analyzing the effect of music in the OR, music is often just one factor among others comprising the general background noise there. The amplitude of background noise, in turn, depends on the specialty; e.g., an obstetrics OR has a comparably high baseline noise level [24]. Nonetheless, it is difficult to quantify the effect of music on surgical outcome. This is due to the varying test persons (advanced surgeons vs. beginners), different music genres and SPLs, as well as the differing complexity of the tasks to be performed [7]. Finally, there might be a difference between measurement under standardized training conditions (with use of simulators) and in the actual environment of an OR. In our study, we used a standardized training setting for laparoscopic exercises that were performed by surgical novices in order to control the relevant influencing factors. Laparoscopy is an adequate tool because it combines manual and neurocognitive requirements. The effect of noise (in general) on laparoscopic performance specifically is controversial. For experienced surgeons, one study showed that background noise at 113 dB had a negative impact on surgical laparoscopic performance [25], whereas another study on the effect of noise at 80-85 dB and background music showed no difference in task performance in terms of the time taken to complete a task [26]. Here, one must keep in mind that those noise levels are higher than most recommended standards for an occupational environment [27]. The SPLs used in the two studies also differed greatly, which makes a comparison difficult, but as an explanation it was assumed that experienced surgeons can effectively "block out" noise and music at a higher SPL of 80-85 dB. This is probably due to the high levels of concentration required to perform a complex surgical task. Recent studies of abdominal surgeries showed that surgeons' concentration was not impaired by the measured noise levels [28], and there were hints that music might even reduce the heart rate, blood pressure, and muscle effort of surgeons while at the same time increasing the accuracy of surgical tasks [29]. In this context, the effect of routine and training in manual tasks seems to play an important role: especially younger surgeons (i.e., interns or residents) seem more likely to be distracted by disturbing factors in the OR [30,31], not only by music but also by telephone calls [32]. Under distracting conditions, medical interns showed a significant decline in task performance (overall task score, task errors, and operating time) and significantly increased levels of irritation toward both the assistant handling the laparoscope in a non-optimal way and the sources of social distraction. Because the influence of music on the performance outcome of laparoscopic techniques in a real-life setting is controversial and methodologically difficult to quantify, outcome measurements performed using laparoscopic box training tools under standardized conditions might offer a feasible approach. To date, the effect of music exposure at different SPLs on the training performance of laparoscopic novices has not been evaluated systematically under standardized conditions. Therefore, we chose a highly standardized setting for this study in order to maintain the ability to transfer the findings to a real-life OR setting. Simulation-based training in minimally invasive surgery has been validated for the "Luebecker Toolbox" [13], and transferability of the task content to a (sub-)realistic setting has been demonstrated [14]. Nonetheless, besides training, individual talent also constitutes an important factor in mastering laparoscopic skills [33]. The influence of SPL on laparoscopic tasks has not been evaluated yet, although a positive impact on accuracy has already been shown for relaxing auditory influences, such as classical music, on laparoscopic tasks [34]. Our data are in line with these preliminary data in that background music at a moderate SPL of 70 dB had a positive effect on performance in comparison to higher or lower SPLs, although the highest total relative improvement across all exercises was within the 60 dB group. In this context, it might be relevant that most participants did not feel distracted by the music in our study. In contrast to a real-world setting within an OR, there were no other pressuring factors that might have influenced performance.
This fits with the result that overall course satisfaction was very high.

Fig. 1: Performance of the three laparoscopic exercises for the control group and five intervention groups (earplug, SPL 50-80 dB). (a) Exercise (a), "Pack Your Luggage"; (b) exercise (b), "Chinese Jump Rope"; (c) exercise (c), "Weaving". In each panel, the mean values of the required times are shown on the ordinate with their 25 and 75% quartiles against the respective groups on the abscissa; p-values for the comparisons between each group are shown below; ° = outlier.

Limitations
Our study design has several potential limitations. Although high standardization of the study design was intended, the performance outcome of surgical techniques (such as laparoscopy) is methodologically difficult to quantify. Studies have shown that it is difficult to predict baseline laparoscopic surgery skills [35]. Moreover, our findings could have been relevantly biased by differing subjective music perceptions, i.e., some students probably liked the music being played better than others, with a varying effect on their performance ("arousal-and-mood hypothesis") [36]. Studies have also shown that a listener's fondness for the music being played influences their performance [37]. Furthermore, we did not use virtual reality simulators and therefore were not able to track the movements of the probands; thus, the accuracy factor, as part of the overall performance, could not be recorded accordingly. In addition, the cohort size was relatively small; however, it could still deliver significant results. Furthermore, the implication of transferring study results from a simulator to the OR has not been clarified yet, although it is likely that the skills themselves can be transferred [38][39][40]. Further analyses might focus on other factors that might influence the performance of standardized laparoscopic tasks, e.g., differing music genres.

Fig. 2: Relative performance improvements between first and third run, separated for exercises a, b, and c and for the different sound pressure levels plus control group / earplug group (in %).
Fig. 3: Relative overall performance improvements (combined for all exercises a, b, and c) between first and third run for the different sound pressure levels plus control group / earplug group (in %).

Conclusions
In general, along with previous studies, we could show that there is no negative effect of background music being played while performing exercises on a trainer in a standardized setting. Moreover, our study suggests that even with rising sound pressure levels, performance is better than in a control group or a noise-shielded group. Here, the effect of blocking out music while performing the exercises might become relevant. It can be assumed that background music at a specific SPL might even enhance performance more than rigorously turning off the music. To our knowledge, our prospective trial is the first study to systematically examine the influence of different sound pressure levels on the laparoscopic performance of medical novices.
Future trials should examine the influence of other distractors in the operating room, such as talking or answering phone calls. Moreover, it is still not known whether the music genre makes a difference to performance outcomes.
Study on the Protection of Intangible Cultural Heritage and the Utilization of Rural Tourism — Taking Yangliuqing Town, Xiqing District, Tianjin as an Example

The report of the 19th National Congress of the Communist Party of China pointed out that without a high degree of cultural self-confidence and without the prosperity of culture, there can be no great rejuvenation of the Chinese nation. The tourism utilization of intangible cultural heritage exploits the advantages of both the tourism industry and the cultural industry and helps to establish cultural self-confidence and cultural identity. On this basis, this study takes Yangliuqing Town in Tianjin as its research object, aiming to solve the problem that its rich regional culture and many cultural relics cannot be rationally utilized. Using field investigation and household visits, it concludes that only when intangible cultural heritage is properly protected, and the intangible cultural heritage unique to Yangliuqing Town is excavated, can the development of tourism, industry and inheritance be promoted, and it provides countermeasures for tourism development.

I. INTRODUCTION
At present, China is in a transitional stage from sightseeing tourism to leisure tourism. The requirements of consumers can be summarized as consumption upgrading and multidimensional coexistence; upgrading and supply-side reform have therefore become the focus of the tourism industry. Against this background, this article takes Yangliuqing Town, Xiqing District, Tianjin as its entry point and pays attention to its many intangible cultural heritages, including the courtyard culture, the "catching up with the camp" culture, the woodblock New Year paintings and the red culture, and to how they can support social development in the context of industrialization, urbanization and informatization. At the same time, the tourism utilization of intangible cultural heritage echoes the national strategy of upgrading the tourism industry, adapts to new changes in consumption trends, and is an inevitable requirement for tourism to play its role as a strategic pillar industry of the national economy.

II. NON-LEGACY PROTECTION AND THE DEVELOPMENT AND UTILIZATION OF RURAL TOURISM
In practice, China's tourism development and intangible cultural heritage (hereinafter referred to as non-legacy) complement each other. The core competitiveness of tourism destinations comes from their distinctiveness and regionality, while non-legacy preserves the regionality of tourism destinations, and its unique qualities have become an important factor in attracting tourists [1]. This paper holds that non-legacy, as a folk culture, has obvious tension and resilience: it can change as it is inherited and adapt to change; it carries the color of official ideology as well as the style of non-governmental behavior; and it can be extended and passed down, absorbed and re-created. Therefore, the protection of non-legacy is fundamentally about returning it to daily life, to the people and to the countryside, which are the soil for its survival and continuation [2]. As researchers and practitioners of tourism phenomena, we are also followers and beneficiaries of non-legacy. For a long time there may have been an illusion that, as long as the concept is appropriate and the method is right, one can always find a way of
transforming traditional non-legacy into a practical technology for modern tourism projects that tourists like to see and are willing to pay for, and implementing it as a type of tourism product, such as folklore tourism or traditional tourism. In fact, we probably have not yet recognized and respected the essence of such heritage. Technical self-confidence may make us forget to think about some fundamental problems, so that we misjudge the question of scale and assume that heritage can be transformed, promoted and expanded at the "social" level according to modern economic concepts and their organizational means [3]. If we really want to make good use of this kind of heritage to serve the tourism industry, then only by considering both perspective and scale can we effectively carry it on and carry it forward.

III. OVERVIEW OF THE DEVELOPMENT OF YANGLIUQING TOWN AND ITS CURRENT SITUATION
A. Abbreviations and Acronyms
Yangliuqing was called Liukou in ancient times and formerly belonged to Wuqing and Jinghai counties [4]. During the Yuan, Ming and Qing Dynasties, with the increase in demand for materials, towns along the canal routes of the South and North Grand Canal gathered and trade prospered, and Yangliuqing Town developed accordingly. In the ninth year of the Yongzheng reign of the Qing Dynasty (1731), Tianjin Prefecture was established with Tianjin County under its administration, and Yangliuqing came under the jurisdiction of Tianjin as a major town of Jinxi [5]. Today, Yangliuqing Town belongs to Tianjin's Xiqing District and is the seat of the district government as well as the economic and cultural center of Xiqing District. It covers an area of 64 square kilometers, with a built-up town area of 15 square kilometers and a total population of 140,000. It is the largest township in Tianjin and the Bohai Economic Zone, the largest man-made satellite city of Tianjin, and one of its eight characteristic tourist service towns.

B. Status of cultural resources
Yangliuqing Town is an ancient town with a thousand years of historical civilization (Table I). The characteristic ancient buildings of Yangliuqing Town and its unique folk culture (the Shijia Courtyard, the New Year pictures and the New Year Picture Gallery) are important historical and cultural tourism resources of Tianjin; in particular, the national non-legacy project represented by the Yangliuqing woodblock New Year paintings is known throughout the country. Yangliuqing folk culture is very rich, and the theater, the archway and the Wenchang Pavilion are known as the three treasures of Yangliuqing. Located in the center of the town, the Shijia Courtyard of the late Qing Dynasty is known as the "first residential house in North China". Together with the New Year Picture Gallery, Ming and Qing Street and the South Canal, it forms the folk culture tourism base of Yangliuqing Town.

1) Water transport culture
Yangliuqing Town flourished because of water; water cast the bones and spirit of the ancient town, and water transport exerted a certain influence on local sentiments, cultural customs and social culture. The "Tianjin Yangliuqing Xiaozhi" records that the number of boats was huge and that large quantities of local products and north-south commercial goods were sold at the docks and along the riverbank, so that the commercial market boomed. With the development of the commercial economy, a large number of merchants from north and south passed through or settled here, so that the local dialect gradually
came to combine the characteristics of different local languages, reflecting the rich folk language and customs of Yangliuqing [6]. In the thirteenth year of the Yongle reign of the Ming Dynasty, the Beijing-Hangzhou Grand Canal was opened through the area, and Yangliuqing Town rose with it; by the Ming and Qing Dynasties, Yangliuqing had developed from a military base of the Jin and Yuan Dynasties into an important commercial town of the north [7].

2) Courtyard culture
The Shijia Courtyard was built in 1875 and was the former residence of Shi Yuanshi, one of the "eight great families" of Tianjin in the Qing Dynasty. It covers an area of 7,200 square meters, including a construction area of over 2,900 square meters. The whole consists of 12 courtyards distributed on both sides of a 60-meter-long passage. In its overall pattern, architectural style and artistic decoration, it reflects the cultural legacy of the late Qing Dynasty and the early Republic of China and the folk customs of that time. The Anjia Courtyard was built during the Tongzhi period of the Qing Dynasty. Covering an area of 200 mu, it consists of three courtyards; it is the tallest and largest ancient building in Yangliuqing Town and a typical northern residence. The An Family Ancestral Hall was built in 1720 and has a history of 285 years. It faces south and consists of two courtyards with a building area of 630 square meters; it is built of bluestone with finely pointed brickwork, in a typical Qing Dynasty architectural style.

3) New Year painting culture
Yangliuqing is one of the homes of China's four great woodcut New Year pictures. The Yangliuqing woodblock New Year painting is a wonder in the history of Chinese folk art; it has a history of more than 380 years and is an outstanding representative of Chinese folk art. It has four unique advantages: an obvious agglomeration effect, a reasonable industrial structure, obvious characteristic advantages and attractiveness to talent. Yangliuqing Town has built a New Year painting museum, a folk culture museum, Ming and Qing Street and other cultural exchange spaces for the New Year painting, which has laid a solid foundation for the industrialization of the New Year painting. Among the more than one thousand businesses in the town, there are nearly one hundred New Year painting studios, and about 2,000 people are engaged in the design, creation, production and sale of Yangliuqing paintings.

4) "Catching up with the camp" culture
"Catching up with the camp" is a feat of the people of Tianjin and also a miracle in the history of modern Chinese commerce. Going west from Yangliuqing, there were 153 stations, a total of 8,171 miles, and it took about half a year to reach Dihua. The traders brought to Xinjiang advanced industries, handicrafts and traditional techniques, such as the abacus, papermaking, metallurgical technology and crop planting, which promoted the prosperity of all walks of life in Xinjiang and introduced advanced concepts of service, commerce and transportation into Xinjiang. According to historical records, "catching up with the camp" brought some 15,000 Yangliuqing people to settle successfully in Xinjiang, accounting for about one-fifth of the total population of Yangliuqing at that time. It changed the political, economic and cultural features of Xinjiang, so that a small town influenced one-sixth of China's land for decades, which is indeed rare in Chinese history [8].
5) Red culture
Xiqing District currently has four red tourism sites. Among them, the former site of the Pingjin Campaign front-line headquarters is located at No. 2, East Wangyao Street, Yangliuqing Town; it is now an exhibition hall and a Tianjin municipal cultural relics protection unit. The December 9th Anti-Japanese National Salvation Movement Memorial Hall, located in Wanglanzhuang Village, is the only memorial hall of the December 9th Movement in the country and is now a municipal-level cultural relics protection unit. The Xiqing Martyrs Cemetery is located on the bank of the Yangliuqing South Canal; it has a martyrs' memorial hall and a monument, vividly reproducing the feats of the army and the people in the Tianjin campaign. China's first large-scale anti-corruption exhibition hall is now the Tianjin Anti-corruption Education Base.

A. Non-legacy inheritance
Taking Yangliuqing Town's New Year paintings as an example, production shows a trend of spontaneous development: there are a large number of small-scale New Year painting production and sales activities, and a market-driven flood of products. On the one hand, owing to production capacity constraints, the products of these small workshops lack innovation and the works resemble one another, which lowers the quality of Yangliuqing New Year paintings. On the other hand, to maximize profit, small workshops often choose only low-priced products for production, even resorting to counterfeit and shoddy goods, which has a huge impact on the production and sale of high-level New Year paintings and is detrimental to the healthy development of Yangliuqing paintings.

B. The cultural industry structure is not reasonable
First, although the industrial structure of Xiqing District has undergone tremendous changes since the reform and opening up, the tertiary industry has not received much attention and has long been in a state of slow growth. As an important part of the tertiary industry in Xiqing District, the cultural industry has always accounted for a low proportion of the district's GDP, far lower than the average level of other districts and counties. Moreover, in the overall industrial layout of Xiqing District, the cultural industry has not received enough attention: the "Xiqing District Master Plan (2008-2020)" and some special plans have not adequately and fully analyzed the cultural industry, there are still inadequacies in their understanding of it, and the corresponding development strategies are relatively few. In addition, the understanding of the connotation of cultural functions is rather one-sided, and there is a mistaken tendency to equate the cultural industry with tourism, leading to more attention being paid to tourism-related sectors such as "touring, shopping, entertainment, food, accommodation and travel" while the research, development, production and promotion of cultural products are ignored, which has shaped the current development of the cultural industry in Xiqing District.

C. The tourism strength is not strong and lacks integration
In addition, there are many problems in the internal development of Xiqing District. The courtyard culture and the canal culture have not been sufficiently publicized.
The current development and utilization obviously lag behind and still remain in an extensive management mode centred on ticket sales (Fig. 1). The folk culture centred on Yangliuqing, together with cultural resources on such themes as agriculture, red culture, martial arts and religion, constitutes the rich cultural resources of Xiqing District. However, except for Yangliuqing Town, the scale of the other scenic spots is very limited. In addition, their layout is relatively scattered and the transportation links between them are not close enough, which leads to the isolated development of each scenic spot and makes it impossible to form a complete industrial chain with an agglomeration effect. Under such a development model, resources of different themes are difficult to integrate, and it is difficult to coordinate development and form synergy. If this fragmented situation continues, there will be problems such as redundant construction of infrastructure, vicious competition between scenic spots, and a great waste of the non-legacy cultural resources of Xiqing District (Table II).

A. Broaden the direction of development
The town should be guided by its cultural characteristics, promote the matching of the software and hardware environment with the characteristics of the New Year painting industry, maintain the characteristics of the New Year paintings, integrate Ming and Qing architectural styles in the construction of carriers, and maintain a unified appearance, with the New Year painting industry as the leader combined with ancient architecture tourism. Incorporating the canal culture, the "catching up with the camp" culture, the college culture, the red culture, the Lantern Festival and other cultural forms into people's life and the social ecology will fully stimulate the vitality of innovation and entrepreneurship in related industries. Yangliuqing Town has unique location advantages and tourism resource advantages; it is necessary to upgrade and reconstruct the roads and streetscapes of Yangliuqing Ancient Town, build new parking lots, tourist centers and other public service facilities, and build Yangliuqing Ancient Town into a national 5A-level tourist scenic spot. Yangliuqing Town can use its natural, historical and cultural resources and tourism resources to vigorously develop tourism and promote its tertiary industry.

B. Overall planning, coordinated development, upgrading of the industrial structure
In developing tourism, the primary and secondary industries and other tertiary industries must also be developed to actively cultivate the economic foundation of Yangliuqing Town. In the process of development, we must focus on the market and on industry, cultivate leading agricultural enterprises in the region, develop various comprehensive or professional commodity wholesale markets according to the unique geographical conditions of the region, and promote the economic development of Yangliuqing Town. The government should make overall plans, give full play to the advantages of state-owned and private enterprises, and attract various enterprises, master workshops and individual industrial and commercial households to participate actively in the development of Yangliuqing Town's cultural functions.
At the same time, in the construction of Yangliuqing Town, we must adhere to a people-oriented approach, pay attention to the training of industry talent, achieve a gathering of artisans, give play to the role of masters, and provide exhibition opportunities for both the older generation and the new generation of cultural heritage inheritors, so as to provide a good cultural atmosphere and ecological environment for the residents of Yangliuqing and enhance people's well-being.

C. Green development, protecting historical culture
While developing the cultural tourism industry, we must focus on protecting the ecological environment, do a good job of ecological protection, create a good tourism environment, rationally develop tourism resources, and promote the sustainable development of the scenic area. Yangliuqing Town has both historical culture and natural scenery. Relying on integrated planning and restoration of the Yuhe Scenic Area, the Shijia Courtyard, the Anjia Courtyard, Ruyi Street and other cultural tourism resources, a good pilot unit for cultural heritage can be created for the whole town. The New Year pictures, paper-cutting, kites, brick carving and other intangible cultural heritage should be inherited and promoted, the protection and publicity of the excellent cultural heritage of Yangliuqing Town increased, and joint efforts made to enhance the intrinsic value of Yangliuqing's intangible cultural heritage.

VI. CONCLUSION
The protection of non-legacy, of which domestic awareness is currently awakening, is very urgent and necessary. On this basis, and building on the research of many scholars, this paper closely integrates the background of non-legacy culture with the actual needs of the development of China's rural cultural tourism industry, in order to find a way suitable for China to develop rural heritage and excellent cultural functions. The research focuses on the following. First, the non-legacy culture of the village should be vigorously explored and developed, "not one flower blooming alone, but a garden full of spring", while distinctive cultural products and projects are developed for the needs of tourists. Second, we found that non-legacy culture has contributed to the deepening of social and economic development. Third, the research proposes strategies to broaden the direction of development, create a combination of farming and tourism, coordinate development, upgrade the industrial structure, carry out green development, and protect history and culture.

ACKNOWLEDGMENT
This study is grateful to the Tianjin Wuqing District Science and Technology Development Project (WQKJ201803) for funding.
Establishment of FTIR Database of Roselle Raw Material Originated From Western Coastline in Peninsular Malaysia

Herbs from different geographical regions may differ qualitatively and quantitatively; hence it is crucial to determine the active components of herbs from different regions and build a reference database. This study focused on establishing a database for the authentication of the raw material of roselle (Hibiscus sabdariffa) collected at seven selected locations along the western coastline of Peninsular Malaysia. Validation on unknown samples at the end of the study served to verify the accuracy of the established database. The inter-material distance (IMD) is presented as the mean distance of each sphere created by each batch of data from a different location; the batches were clustered in different folders and discriminated by the soft independent modelling of class analogy (SIMCA) algorithm. Materials from all seven farms achieved a 100% separation rate, and the average IMD of the seven locations was 9.04. The FTIR techniques established in this study can be used to distinguish the geographical origin of the selected H. sabdariffa farm samples.

Introduction
The genus Hibiscus (Malvaceae) is distributed in tropical and subtropical zones [1]. Hibiscus sabdariffa L. planted in Malaysia endures high humidity and a warm climate. The main part of the plant with medicinal use is the edible red to pale yellow calyx (sepals), which contains anthocyanins [2]. The various colour tones of the calyx depend on the location of planting and the composition of the soil. Factors such as genotype, the type and intensity of light, orchard temperature, crop load and agronomic factors, including agrochemical application, irrigation, pruning and fertilisation, play certain roles in the quality of growth and products of the roselle plant. Most roselle plantations in Malaysia are planted on Beach Ridges Interspersed with Swales (BRIS) soil [3]. Basically, this type of soil is not suitable for planting owing to its high surface soil temperature and infiltration rate and its low organic matter, nutrient content and water retention. Naimah et al. [4] reported that a 20% regulated deficit irrigation course (80% irrigation) was required to enhance roselle yield and preserve plant growth progression on BRIS soil without adversely affecting calyx quality. According to the statistics of industrial crops of roselle in 2016 [5], for the western coastline of Peninsular Malaysia, Johor was the state with the largest planted area of roselle and also achieved the highest production, followed by Penang, Selangor, Perak and Kedah. Roselle can be grown commercially throughout the year in Malaysia, but many constraints limit roselle production, including climatic variability such as floods and droughts in certain districts; the limited availability of suitable land is another factor. Juhari et al. reported that discrepancies in the anthocyanin contents of H. sabdariffa reflected differences in the geographic origin of the plants selected randomly in their experiments, as the composition of anthocyanins depends on the geographic origin of the plants [10]. The anthocyanin content, however, reached 1.7-2.5% of the dry weight of the calyces in all the strains examined [11]. Therefore, both biomass production and anthocyanin biosynthesis rely on nutritional factors, which include the type and concentration of the carbon and nitrogen sources and the phosphate level [12]. Commercial H.
sabdariffa products in various forms have been mushrooming in the market. The quality, in terms of the anthocyanin content of these commercial products, is a major concern, since herbs from different geographical regions may differ qualitatively and quantitatively [13]. In addition, different processing methods, including the harvest period, the material of the sample used and the time of delivery, could be factors affecting the quality of roselle products. Hence, it is crucial to determine the active components of herbs from different regions qualitatively and to build a reference database. There are many quality control technologies in this new era; the common types of chromatography are high performance liquid chromatography, gas chromatography-mass spectrometry and liquid chromatography-mass spectrometry. Fourier transform infrared (FTIR) spectroscopy is widely used as a newer technology for many purposes [14-16], such as the analysis of anthocyanins [17]. The advantages of FTIR are that it is rapid, less destructive and cost-saving. The information acquired can be utilised to develop a reference database of H. sabdariffa that provides basic information for the purpose of authentication, as the spectrum of a product can be rapidly matched to validate its geographical origin and to predict its anthocyanin content. This study therefore focused on establishing a database for the authentication of roselle raw materials collected from seven selected locations along the western coastline of Peninsular Malaysia.

Plant material
Only one variety of H. sabdariffa L. was obtained from seven different farms recognised by the State Agriculture Department along the western coastline of Peninsular Malaysia. The calyces of individual plants were collected randomly (Table 1).

Sample processing
Each of the individual calyces collected was processed individually. After removing the seeds, the calyces were washed and air-dried at room temperature. After about 80% dryness was achieved, the calyces were further dried in an oven at 50°C for 3-4 days. The dried calyces were pulverised with a blender to the finest size for further use. The processing was repeated for all the individual calyces collected from the seven locations.

FTIR method
The measurements were carried out using a Fourier transform infrared (FTIR) spectrometer (Spectrum GX, Perkin-Elmer Ltd., England) equipped with a deuterated triglycine sulphate (DTGS) detector. Infrared spectra were recorded at 32 scans over the range of 4000-400 cm⁻¹ with a resolution of 4 cm⁻¹ [18]. The dried calyces were ground with potassium bromide (KBr) powder in the ratio of 1:200 under the lowest-humidity environment available. The KBr and sample mixture was pressed at not more than 10 psi to form a thin disc to be scanned for the mid-infrared spectrum. Spectra that achieved more than 60% transmission were chosen for further use [19]. Three discs were produced from the calyces of each plant and scanned.

Assured ID for chemometric analysis
The software Assured ID (Assured ID Method Explorer 2015, PerkinElmer) was used for chemometric analysis. The SIMCA chemometric method was applied to the wavenumber range of 1900-515 cm⁻¹ (Figure 1) rather than using the "COMPARE" function of the software. Outlying spectra were excluded from the developed method (Figure 2) during troubleshooting using the Coomans plot (Figures 3 and 4).
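Assured ID performs the wavenumber restriction internally, but the preprocessing step described above, keeping only the 1900-515 cm⁻¹ region of each spectrum before SIMCA modelling, can be sketched generically as follows (the wavenumber grid and absorbance matrix are assumed stand-ins, not the software's actual data structures):

```python
# Generic sketch of the region selection described above: retain only the
# 1900-515 cm^-1 portion of each FTIR spectrum before chemometric analysis.
# The grid spacing and the spectra themselves are placeholder assumptions.
import numpy as np

wavenumbers = np.arange(4000, 399, -2.0)          # cm^-1, full scanned range
spectra = np.random.rand(21, wavenumbers.size)    # e.g. 21 disc spectra (3 x 7)

mask = (wavenumbers <= 1900) & (wavenumbers >= 515)
fingerprint = spectra[:, mask]                    # region used for SIMCA
print(wavenumbers[mask][0], wavenumbers[mask][-1], fingerprint.shape)
```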
Validation on unknown location sample
Validation was done on three batches of roselle provided by a colleague to test the established database. These roselle samples were labelled A, B, C, D, E and F. Validation was also done on a roselle sample purchased from a Chinese shop in Georgetown, Penang, Malaysia. The samples were in dried form and pulverised with a blender, and the finest fraction was obtained by sieving with a 150-μm sieve (Standard Test Sieve, "CE"). The fine powder was mixed with KBr following the FTIR procedure mentioned in Section 2.3. The spectrum of each unknown sample was copied into seven sets, labelled in series (e.g., A-1, A-2, A-3, A-4, A-5, A-6 and A-7) and imported into the established database; each copy of the spectrum was then assigned one of the locations of the established database. The specified material total distance ratios (SMTDR) of the generated results were used to predict the sample's geographical origin. The system has a default specified material distance ratio limit of 1.000, estimated from the ratio of a spectrum's distance to the edge of the sphere relative to the sphere's diameter; if the SMTDR is less than 1.000, the spectrum is considered to lie within the area of the sphere.

Classification and performance report
The software Assured ID successfully separated the spectra of the seven H. sabdariffa location samples into different clusters of spheres. The analysis excluded samples with extreme data (1.04% of the data) from the system. Materials from all seven farms achieved a 100% rejection rate (Figure 5), showing that the H. sabdariffa spectra from each location were distinguishable from those of the other locations when the software drew a border line around the group of spectra from the same location. The 125 roselle sample spectra from Penang were used to derive a mean reference spectrum, and the 88 samples from Kedah were incorporated into another mean spectrum; roselle sample spectra from the other locations were also included in this database. All the raw data were tested with the SIMCA chemometric method. The analysis showed that only the group of spectra from Johor (Muar) achieved a 100% (69/69) recognition rate. The lowest recognition rate (92%) was for the samples from Perak (Lenggong): of a total of 108 spectra from Lenggong, 99 were assigned to the Lenggong cluster, while the other nine were considered different from the Lenggong cluster; these spectra did not overlap with any other cluster, but they were nevertheless not incorporated into the Lenggong cluster. Samples from Sabak Bernam, Dengkil and Batu Pahat fell 3-6% short of a perfect recognition rate. Figure 5 shows the tabulated IMDs of all the locations along the western coastline of Peninsular Malaysia.

Inter-material distances (IMD)
The inter-material distance is the mean model distance created by the software for a cluster of spectra, including the residuals, compared with the other clusters of spectra in the same model. The IMD indicates the average separation distance of two clusters of spectra: an IMD with a greater value suggests that the clusters are separated far apart and that their components are possibly different, whereas an IMD of zero indicates that the clusters possess similar components.
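Assured ID's algorithm is proprietary, but the logic of SIMCA-style class modelling, a sphere-based distance ratio analogous to the SMTDR acceptance criterion of less than 1.000, and an inter-class separation akin to the IMD can be approximated with scikit-learn. The sketch below is an illustrative simplification under those stated assumptions, not the vendor's implementation (a strict SIMCA would project into each class's own principal-component space and use residual statistics):

```python
# Illustrative SIMCA-style sketch (an approximation, not Assured ID's
# actual algorithm). One PCA model per location; an unknown spectrum is
# scored by its distance to each class centre relative to the class
# "sphere" radius -- a ratio < 1 (cf. SMTDR) places it inside the model.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
classes = {  # placeholder spectra per location (rows = samples)
    "Batu Pahat":   rng.normal(0.0, 1.0, size=(30, 200)),
    "Kepala Batas": rng.normal(0.6, 1.0, size=(30, 200)),
}

models = {}
for name, X in classes.items():
    pca = PCA(n_components=3).fit(X)            # three PCs, as in Figure 6
    scores = pca.transform(X)
    centre = scores.mean(axis=0)
    radius = np.linalg.norm(scores - centre, axis=1).max()
    models[name] = (pca, centre, radius)

unknown = rng.normal(0.0, 1.0, size=200)        # placeholder unknown spectrum
for name, (pca, centre, radius) in models.items():
    d = np.linalg.norm(pca.transform(unknown[None, :])[0] - centre)
    print(f"{name}: distance ratio = {d / radius:.3f} (inside model if < 1)")

# An IMD-like separation: distance between class centres in a shared PC space.
shared = PCA(n_components=3).fit(np.vstack(list(classes.values())))
c1, c2 = (shared.transform(X).mean(axis=0) for X in classes.values())
print(f"inter-class centre distance: {np.linalg.norm(c1 - c2):.2f}")
```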
Classification and performance report
The software Assured ID successfully separated the spectra of the seven H. sabdariffa location samples into different clusters of spheres. Samples with extreme data (1.04% of the data) were excluded from the system. Materials from all seven farms achieved a 100% rejection rate (Figure 5), showing that the H. sabdariffa spectra from each location were distinguishable from those of the other locations once the software drew a boundary around each location's group of spectra. The 125 roselle sample spectra from Penang were averaged into a mean reference spectrum, and the 88 samples from Kedah were combined into another mean spectrum. Roselle sample spectra from the other locations were also included in this database. All the raw data were tested with SIMCA chemometrics. Only the group of spectra from Johor (Muar) achieved a 100% (69/69) recognition rate. The lowest recognition rate (92%) belonged to the samples from Perak (Lenggong): of the 108 Lenggong spectra, 99 were assigned to the Lenggong cluster. The other nine spectra were considered different from the Lenggong spectral cluster; they did not overlap with another cluster, yet they were not incorporated into the Lenggong cluster. Samples from Sabak Bernam, Dengkil and Batu Pahat fell 3-6% short of a perfect recognition rate. Figure 5 shows the tabulated IMD values for all the locations along the western coastline of Peninsular Malaysia.

Inter-material distances (IMD)
The inter-material distance is the mean model distance computed by the software for a cluster of spectra (including its residuals) relative to the other clusters in the same model. The IMD indicates the average separation of two clusters of spectra: a larger IMD suggests the clusters are far apart and their components possibly differ, whereas an IMD of zero indicates the clusters possess similar components. The 3D principal component graph (Figure 6) illustrates the position of each cluster of spheres, viewed from different directions since the inter-material distances vary. The 3D graph is built on three axes: PC1, PC2 and PC3. Each sphere was developed from the group of samples from one location. Each sample's spectrum was represented as a dot; the dots, surrounded by their residuals, formed a sphere representing the mean of all spectra in the group. The spheres were separated according to the inter-material distance between their centres; when this distance was small, two spheres overlapped. Since most inter-material distances were greater than zero, the software was able to differentiate each group of samples. The sizes of the spheres varied with the deviation of the spectra from the mean spectrum: the smaller the sphere, the less each dot in the group differed from the mean spectrum, and vice versa. Figure 6 illustrates that the seven spheres lie close together in the three-dimensional graph. A high IMD reflects widely separated spheres; some spheres overlapped in places, meaning their IMD values were very small. The average inter-material distance across these seven locations was 9.04. The highest inter-material distance, 20.1, was between the samples from Kedah (Sik) and Selangor (Sabak Bernam), suggesting that H. sabdariffa from Sik in Kedah and Sabak Bernam in Selangor developed under different growing environments. The IMD between the Perak (Lenggong) and Johor (Batu Pahat) samples was the lowest (4.07), showing that they shared 97.84% similarity of components, with the roselle grown under similar conditions of soil, water, pH and weather (Table 1). The Assured ID analysis thus also indirectly indicated that the samples from these two locations showed very similar spectra and that the constituents of the calyces were produced under similar conditions. Samples from Kepala Batas showed IMDs of less than 5.00 relative to samples from Lenggong, Muar and Batu Pahat; H. sabdariffa from Kepala Batas may therefore have a chemical content comparable to samples from these three locations. The IMD between Muar and Batu Pahat was similarly small, as the two locations are only about 60 km apart and their soil, water and climate differ little. An IMD of more than 10 against the samples from Selangor (Sabak Bernam) showed that the Kepala Batas samples differed in quality from them. Samples from Sik showed their lowest IMD (6.52) against Batu Pahat when compared with the other locations. Samples from Lenggong scored a higher IMD against the samples from Dengkil, possibly because of the organic fertiliser and soil used on the Dengkil farm. A higher rate of organic fertiliser increases stem diameter and height, leaf number and leaf area, as well as biomass and calyx number [20]. This could explain why the samples from Dengkil achieved higher IMDs than all the other samples, even though the Sabak Bernam samples came from the same state. In comparison, samples from Muar showed lower IMDs against Batu Pahat and Sabak Bernam, as these locations lie in the middle of the western coastline of Peninsular Malaysia. However, samples from Batu Pahat and Sabak Bernam still produced an IMD greater than 10. This could be due to other factors, such as the spread of roselle disease [21] in the two locations.
Such diseases affect roselle yields and products by causing leaf spot, stem rot and root rot.

Validation of unknown samples
All three batches of raw roselle samples showed SMTDR values of more than 1.000 (Table 2). This could be because the raw material included many overlapping spectral points. The spectra used for the database vary widely, so the spheres were built covering varied sizes. The exclusion process was carried out to eliminate this variation; during the troubleshooting step, the rare spectral points discarded from the system also affected the average sphere size and diameter, after which other spectral points could appear and need to be excluded. Exclusion therefore plays a key role in validation. Since the SMTDR would not fall below 1.000, the prediction was based on the lowest SMTDR value as the best match. Strictly speaking, SMTDR values above 1.000 are not defined in the system: no such setting exists, because the database variation is built from pure compounds and, in theory, a sample with an SMTDR below 1.000 is validated within that specific sphere. Validation of the samples therefore had to be conducted case by case. In the first batch of samples, only sample F was predicted correctly: it is from Batu Pahat (Johor) and had the lowest SMTDR (5.6660). The prediction of the remaining samples was inaccurate, with SMTDRs in the range 6.000-9.000. Only sample B was predicted with the highest SMTDR, totally out of range, indicating that the sample was not represented in the database. The results showed that more than half of the samples were related to Batu Pahat (Johor). Sample E in the second batch was correctly validated as coming from Batu Pahat (Johor). Sample B was also validated as coming from Johor, but from Mersing, another district; its SMTDR was lower than sample E's, showing that the established database was unable to distinguish a sample from another district even though the SMTDR was lower. The prediction of an unknown sample's location relied entirely on the SMTDR value. Sample F was validated with the highest SMTDR, 28.9541, and was clearly not a sample from the western coastline. The other samples were validated with SMTDRs of around 5.000-9.000. The pattern of results for the third batch of validation samples was similar to the first and second batches. Sample B was validated correctly as coming from Batu Pahat (Johor). Sample A, which originated from Kuala Rompin (Pahang), was validated with the highest SMTDR. The rest of the samples were validated in the SMTDR range 3.000-8.000. In summary, most of the validation results pointed to the spheres of larger size, in this case Batu Pahat (Johor) and Kepala Batas (Penang). The average SMTDR was around 3.000-9.000 for these batches of roselle samples; a calculated SMTDR outside this range indicates a roselle sample from a distant location. Validation of the samples against the established database revealed both the limitations and the reliability of the method. The great variation among samples from the different locations caused the spheres in the 3D graph to have different sizes. This phenomenon could affect the outcome, as larger spheres are preferable. The limitations of the established database include inaccuracy in determining the actual origin of a sample, since the outcome is based solely on the SMTDR calculated by the software.

Conclusion
H. sabdariffa is a herbal plant adaptable to almost every state in Malaysia. It is easy to grow and prefers mineral soil with an acidic pH. The calyces of H. sabdariffa are made into herbal tea and consumed by local Malaysians. Their anthocyanin content has been reported as the key component in therapeutic studies. This project sampled roselle farms along the western coastline of Peninsular Malaysia. There are some considerations when establishing a database with Assured ID. The preparation of the sample is important in ensuring accurate determination. Firstly, the number of KBr discs should be at least 50; the exclusion of extreme spectra reduces the sample size, and a sufficient number of discs is crucial to ensure the data are representative of the actual condition of the samples in the area. Secondly, the sample processing procedures must be simple and time-saving. The selected wavenumber region must include the fingerprint range of the sample, as exhibited in the raw material spectrum. The IMD between samples must be more than one. It is preferable to collect samples over a wide area in order to minimise the error in determining the location of an unknown sample. When the location of an unknown sample cannot be determined from the established database, its SMTDR value may lie outside the average range. In this study, a roselle raw material spectral database was established by importing the spectrum of each individual plant into the system. The sample spectra from the different locations each occupied their own position in the 3D principal component graph and combined to form spheres separated by IMD. Validation with the given samples was used to test the accuracy of the established database. The validation showed that only one out of six samples in each batch was validated correctly, indicating a success rate of only 17%. On the other hand, the method successfully discriminated sample locations within the western coastline: with this established database, more than 50% of the validations placed the sample within the range of the western coastline. The established Assured ID database of roselle can be used as a reference database for roselle samples of unknown geographical origin in Malaysia, albeit with some limitations, and further improvement is needed.
2019-12-05T09:08:06.288Z
2019-12-04T00:00:00.000
{ "year": 2019, "sha1": "dca1588219ec03a9e372bf58061ada74a89aff28", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/69339", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "9609055e96424258610429622420256fa12f4789", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Environmental Science" ] }
12038621
pes2o/s2orc
v3-fos-license
The Fate of Maleic Hydrazide on Tobacco during Smoking
Tobacco mainstream smoke (MSS) and sidestream smoke (SSS), butts, and ashes from commercial cigarettes and maleic hydrazide (MH) spiked cigarettes were analyzed for their MH contents. The MH transfer rates obtained ranged from 1.4% to 3.7% for MSS, from 0.2% to 0.9% for SSS, and from 1.1% to 1.9% for butts. As expected, MH was absent from the ashes. Among mainstream smoke, sidestream smoke, ashes, and butts, the transfer rate of MH into mainstream smoke is the highest, and higher MH levels in tobacco lead to more MH in smoke. Further, a balance of the total MH in butts and ashes along with that in MSS and SSS indicates that most of the MH is destroyed during the smoking process.

Introduction
Maleic hydrazide (MH; 1,2-dihydro-3,6-pyridazinedione), a systemic plant growth regulator, is used by tobacco farmers throughout the world to inhibit the growth of suckers on tobacco plants [1]. It is usually applied to the upper half or third of the tobacco plant within 24 h after topping. Subsequently, due to absorption and translocation, MH is found throughout the entire plant. However, this compound is a suspected carcinogen [2] and is classified as group C by the International Agency for Research on Cancer (IARC). The total level of MH in the processed leaf and in the tobacco from cigarettes ranges from a few µg/cig up to about 100 µg/cig. The German government and CORESTA have set a tolerance level of 80 ppm for this substance in cigarette tobacco [3]. However, as far as the smoker is concerned, the most important question is not how much MH is in the tobacco but how much of MH and its degradation products is present in cigarette mainstream smoke. Our investigations are, in part, an answer to that question. Pyrolysis of pure MH at 750°C, performed in the present study, indicates that at elevated temperatures some of the MH is transferred unmodified into the pyrolyzate. The transfer of MH from tobacco to cigarette smoke is therefore likely. Several previous studies report MH transfer into smoke. Liu and Hoffmann [4] were the first to report that the transfer rate of MH depends on the concentration of MH in tobacco; the transfer rates they indicated ranged from 4% to 10%. Haeberer and Chortyk [5] found a transfer rate of only 0.2%. A report by the US Department of Agriculture [6] indicated a transfer of 2-3% of MH to mainstream smoke (MSS), while Chopra et al. [7] showed values between 2.5% and 2.8%. Additionally, Liu et al. [8] indicated that MH is present in smoke even if the tobacco was not treated with MH. The disagreement among the reported MH transfer rates indicated a need for further studies. Our investigation was done to clarify the reasons for the differences among previously published results. It started with the pyrolysis of pure MH, to establish the capability of MH to be transferred unmodified into pyrolyzates. Further, MH was applied to cigarettes by injecting its solutions into them. We then analyzed cigarette tobacco, cigarette mainstream and sidestream smokes (SSS), cigarette butts, and cigarette ashes for their MH contents and calculated the transfer of MH into mainstream and sidestream smokes. Reagents included tert-butylhydroquinone (Aldrich, Saint Louis, MO, USA). Cigarettes were smoked on a Borgwaldt KC 5-port sidestream smoking machine. All cigarettes were stored at 0°C until required for use.
The 3R4F reference cigarettes were purchased in 2010 (University of Kentucky), and the commercial cigarettes, all different brands, were purchased in 2011 from local grocery stores.

Pyrolysis of MH. The pyrolysis of pure MH was done using an on-line pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS) system. The on-line system used a CDS 5000 instrument and an Agilent 6890-5973 GC-MS system. The sample was pyrolyzed in the Pyroprobe at 750°C under helium for 10 s, and the pyrolyzates were transferred to the GC-MS system by helium for analysis. The GC was equipped with a 30 m long, 0.25 mm i.d., 0.5 µm film thickness DB-Wax capillary column, which has PEG-20M functional groups. The oven was programmed at an initial temperature of 50°C for 2 min, a heating rate of 5°C/min to 200°C, a heating rate of 8°C/min to 280°C, and a final hold of 15 min. The injection was done at 275°C. Quantitation was based on the area of the MH peak in the selected-ion chromatogram.

Sample Preparation. Some cigarettes were spiked with different levels of MH. For this purpose, 10, 20 or 50 µL of MH solution was injected, and the spiked cigarettes were equilibrated for 48 h before smoking. The concentration of MH in methanol was chosen to yield the desired amount of MH in the cigarette. For MH analyses in mainstream and sidestream smokes, six cigarettes per analysis were kept in a humidifying chamber for 24 h at 60% humidity. These cigarettes were then smoked onto a small Cambridge pad under Federal Trade Commission (FTC) conditions using a Borgwaldt KC 5-port sidestream smoking machine. The mainstream and sidestream smokes were collected at the same time. Ashes and butts from the six smoked cigarettes were also collected and pooled (separately). Each of the MSS, SSS, butt, and ash fractions was then extracted with 50 mL methanol on a mechanical shaker for 20 min. The solution is transferred into a test tube, heated at 65°C under a N2 stream, and the methanol is evaporated down to 1 mL. The 1 mL solution is cleaned up on a C18 cartridge and eluted with 5 mL methanol. The eluate is then transferred into a test tube, heated at 65°C under a N2 stream, and evaporated to dryness. The residue is dissolved in 1 mL DMF containing 100 µg/mL tert-butylhydroquinone (internal standard), followed by the addition of 0.5 mL N,O-bis(trimethylsilyl)trifluoroacetamide (BSTFA). The solution is heated at 75°C for 30 min and analyzed by GC-MS using conditions similar to those used to analyze the pyrolyzate. MH is present in several tautomeric forms, and in the presence of BSTFA the enol form reacts as shown in Figure 1. The mass spectrum of silylated MH, or 3,6-bis[(trimethylsilyl)oxy]pyridazine, is similar to that of silylated 2,5-dihydroxypyrazine and is shown in Figure 2. In selected-ion mode, the quantitation ions of silylated MH are m/z 241 and 255, and those of the internal standard are m/z 207 and 222. The quantitation of MH in cigarette smoke is done with a calibration curve made using standards of pure MH in a mainstream-smoke matrix. The standard solutions used to make the calibration curve are prepared starting with a stock solution of 1000 µg/mL MH and 100 µg/mL tert-butylhydroquinone in N,N-dimethylformamide (DMF). This solution is diluted to 100 µg/mL MH with DMF containing the same concentration of internal standard, and various volumes of this solution are further diluted to generate each point. Three replicates of each concentration were prepared to generate the calibration curve.
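The internal-standard calibration just described amounts to fitting the MH/internal-standard peak-area ratio against the standard concentrations and inverting the fit for unknown extracts. The sketch below shows this with synthetic numbers; the concentrations and area ratios are placeholders, not the study's data.

```python
import numpy as np

# Placeholder calibration levels (ug/mL) and mean A(MH)/A(internal standard)
# peak-area ratios over three replicates per level.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
area_ratio = np.array([0.048, 0.101, 0.197, 0.512, 1.015])

slope, intercept = np.polyfit(conc, area_ratio, 1)  # linear calibration fit

def quantify(ratio):
    """Invert the calibration line: area ratio -> concentration (ug/mL)."""
    return (ratio - intercept) / slope

print(f"MH in extract: {quantify(0.35):.2f} ug/mL")
```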
The calibration curve for MH in smoke is shown in Figure 3. The matrix could affect the silylation of smoke extracts with BSTFA and yield low recovery, so the matrix effects of mainstream smoke, sidestream smoke, ashes, and butts were compared, and the recovery for each fraction was determined. The analysis of MH in tobacco was done following the procedure below. The analysis of total MH was performed by an external laboratory, using a method similar to that described in a literature report [9]. For the analysis, 5 g of tobacco was extracted by boiling under reflux in 100 mL of 4 N HCl for 120 min. The extract is allowed to cool and is filtered. A few mL of solution are passed through a C18 SPE column and a 4 mL sample is collected. Separation is carried out on a Microsorb-MV 100-5 C18 column. The elution is done under isocratic conditions with 0.1 mol/L acetic acid in water (pH 4.8) at 0.7 mL/min. The optical density is measured at 313 nm.

Results of MH Pyrolysis. The decomposed part of MH consisted of some ammonia, CO2, butanoic acid, aminobutyric acid, 1H-pyrrole-2,5-dione, and so forth. The behavior of MH under pyrolysis conditions could be anticipated from its stability during melting point determinations: it was observed that MH does not melt but degrades at 295°C. This substantiates our findings of pyrolytic decomposition of MH in the burning cigarette. The result of MH pyrolysis indicates that a significant amount of MH passes unchanged into the pyrolyzate. The relatively high thermal stability of MH and its volatility allowed part of the MH to volatilize and leave the pyrolyzer before decomposition. Our result indicates that, besides MH itself, no specific toxicological concerns are raised by the presence of MH in cigarettes. However, Patterson et al. [10] found benzo[a]pyrene in the pyrolyzate when neat MH was pyrolyzed, which led to the possible implication that MH in cigarette tobacco could also contribute to benzo[a]pyrene in tobacco smoke. Chopra, however, has questioned this implication on mathematical grounds [11]. Ninety-four percent of MH is known to degrade into CO2, CO, NH, HCN, and so forth [12]; this result is partly consistent with ours. And no one, so far as we know, has experimentally linked MH with PAHs.

Silylation with BSTFA. Attempted application of our original method for MH in tobacco (silylation of MH followed by gas chromatography) to the analysis of MH in smoke was not immediately successful. Direct silylation of smoke with BSTFA did not yield a good gas chromatogram; the analysis was complicated by interfering total particulate matter (TPM) constituents not present in tobacco extracts. Removal of the interfering substances was accomplished with a C18 cartridge. Gas chromatography of the MH-BSTFA derivative on DB-WAX showed that MH was successfully recovered from the C18 cartridge and derivatized quantitatively, in comparison with MH standards that were directly derivatized. Thus, the final procedure for MH analysis consisted of sample extraction, cleanup of interferences on a C18 cartridge, concentration and derivatization with the BSTFA reagent, and gas chromatography on DB-WAX.

Matrix Effect. The matrix effects of mainstream smoke, sidestream smoke, ashes, and butts were compared, as shown in Figure 4. We found that all of them suppress the MH signal; in decreasing order of strength, the matrix effect is sidestream, mainstream, butts, and ashes.

Recovery and RSD. The recovery and relative standard deviation (RSD) of each fraction were also determined, as shown in Table 1.
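Conceptually, the recovery and RSD figures that follow come from replicate analyses of low-MH cigarettes spiked with a known amount of MH. A minimal computation, with placeholder numbers rather than the study's raw data, looks like this:

```python
import numpy as np

spiked = 10.0                                   # ug of MH added per replicate
measured = np.array([8.1, 8.5, 8.4, 7.9, 8.3])  # ug found in the replicates

recovery = 100.0 * measured.mean() / spiked           # percent recovery
rsd = 100.0 * measured.std(ddof=1) / measured.mean()  # relative std. dev., %
print(f"recovery = {recovery:.0f}%, RSD = {rsd:.1f}%")
```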
The 3R4F cigarette (whose MH content is close to zero) was used. We found that the average recovery for mainstream smoke is 83%, with an RSD of 5.4%; the average recovery for sidestream smoke is 85%, with an RSD of 4.9%; the average recovery for butts is 92%, with an RSD of 5.8%; and the average recovery for ashes is 94%, with an RSD of 3.8%. The limit of detection is 0.12 µg/cig, and the limit of quantitation is 0.39 µg/cig.

Discussion of Transfer Rates. To obtain quantitative information on the transfer of MH into smoke, it was necessary to determine the levels of MH in both the tobacco and the smoke of each type of cigarette. Cigarettes with different levels of MH were used in this study, including cigarettes with agronomically added MH in the tobacco and cigarettes with spiked MH. The cigarette designs covered a range of possibilities, including the 3R4F cigarette (whose MH content is close to zero) and Parliament and Salem cigarettes. The levels of total MH and of MH in smoke were analyzed as previously described, and the results are given in Table 2. Chopra et al. [7] also reported a mainstream MH transfer rate of 3.80%, in excellent agreement with our mainstream MH transfer rate of 3.7%. However, Liu and Hoffmann [4] also reported transfer rates of 7% and 10.3%, whereas our highest rate is 3.7%, for a cigarette spiked with 71.5 µg/mL MH. Chopra et al. [7] had no explanation as to why Liu et al. [4] obtained such high transfer rates. We found, however, that Liu et al. [4] used water as the extraction solvent when analyzing MH in tobacco; water is not effective at extracting bound MH from tobacco, so they obtained a low MH value for the tobacco and hence a high mainstream transfer rate. Our mainstream MH transfer rates of 1.4% are also consistent with those of Davis et al. [13,14] and Wood et al. [15]. It was of interest to determine whether or not any MH distills ahead of the burning cone of a cigarette. Apparently, sidestream smoke has a low transfer rate, which is due to the retention of moisture, particulate matter, and so forth by the butt segment during the process of smoking. Haeberer and Chortyk [16] report similar results. This indicates that MH does not distill ahead of the burning zone to be condensed and concentrated in the butt. Since distillation can be ruled out, the disperse phase, or aerosol particles, must be responsible for the transport of MH into smoke; this result is consistent with Chopra et al.'s [7]. Another interesting, and expected, finding is that MH is absent from the ashes. The temperatures in the burning zone of the cigarette reach up to 900°C [17], and it would be surprising if MH could survive temperatures that high.

Conclusions
This paper presents the fate of maleic hydrazide on tobacco during smoking. Among mainstream smoke, sidestream smoke, ashes, and butts, the transfer rate of MH into mainstream smoke is the highest, and higher MH levels lead to more MH in smoke. The findings are in agreement with those of Liu and Hoffmann [4]. The most important consideration is not how much MH is in the tobacco but how much MH is in the mainstream smoke. According to our transfer rates, a cigarette containing 80 µg MH would give 2.9 µg MH in the mainstream smoke, 0.7 µg MH in the sidestream smoke, and 1.4 µg MH in the butt.
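As a numerical check of this closing arithmetic, the sketch below applies the measured transfer rates to an 80 µg cigarette and tallies the unaccounted fraction, i.e., the part destroyed during smoking. The mainstream and sidestream rates are the upper-end values quoted above; the butt rate of 1.75% is an assumed value within the reported 1.1-1.9% range, chosen to reproduce the 1.4 µg figure.

```python
mh_in_tobacco = 80.0   # ug of MH per cigarette
rates = {"mainstream": 0.037, "sidestream": 0.009, "butt": 0.0175, "ash": 0.0}

recovered = {part: mh_in_tobacco * r for part, r in rates.items()}
destroyed = mh_in_tobacco - sum(recovered.values())

for part, ug in recovered.items():
    print(f"{part}: {ug:.2f} ug")
print(f"destroyed during smoking: {destroyed:.1f} ug "
      f"({100 * destroyed / mh_in_tobacco:.0f}%)")
```

The roughly 94% unaccounted fraction in this tally is consistent with the degradation figure cited from [12] above.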
2016-05-12T22:15:10.714Z
2012-10-23T00:00:00.000
{ "year": 2012, "sha1": "7424bf245b8e41431998b7f42f51bfa4ee941287", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/tswj/2012/451471.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "962f9a20217fd7aef6367f20b926f2ec2da31ae3", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
256358586
pes2o/s2orc
v3-fos-license
A quantum interferometer for quartets in superconducting three-terminal Josephson junctions
An interferometric device is proposed in order to analyze the quartet mode in biased three-terminal Josephson junctions (TTJs), and to provide experimental evidence for the emergence of a single stationary phase, the so-called quartet phase. In such a quartet-Superconducting Quantum Interference Device (quartet-SQUID), the flux sensitivity exhibits period ${hc}/{4e}$, which is the fingerprint of a transient intermediate state involving two entangled Cooper pairs. The quartet-SQUID provides two pieces of information: an amplitude that measures a total "quartet critical current", and a phase lapse coming from the superposition of the following two current components: the quartet supercurrent, which is odd in the quartet phase, and the phase-sensitive multiple Andreev reflection (phase-MAR) quasiparticle current, which is even in the quartet phase. This generically makes a TTJ a "$\theta$-junction". Evidence for phase-MARs argues against conservative scenarios involving synchronization of AC Josephson currents, based on "adiabatic" phase dynamics and RSJ-like models.

Besides this quartet supercurrent, another current component happens to depend on the quartet phase. It originates from multiple Andreev reflections (MAR), which promote quasiparticles across the superconducting gap 2∆ with the help of Cooper pair transfers, each one gaining energy 2eV [56]. New channels open in a three-terminal Josephson junction (TTJ) [34], where all pairs of terminals are simultaneously involved. Among those, specific processes involve the emission of quartets at zero energy but with phase ϕQ: the energy cost for promoting a quasiparticle between two terminals, say S1 and S0 (with V1 − V0 = V), instead of transferring a pair between terminals S1 and S0, can be provided by transferring a pair between terminals S0 and S2 (with V0 − V2 = V) while simultaneously absorbing a quartet from (S1, S2) into S0 (Figure 1). This quartet carries a phase ϕQ, and these MAR processes become phase-dependent subgap quasiparticle currents [27]. Detailed calculations of the phase and voltage sensitivities of both the quartet and phase-MAR currents can be found in Refs. 31 and 57. While quartet supercurrents are truly nondissipative, the phase-MAR currents are dissipative. Both of them depend on the control variables (ϕQ, V), but with different symmetries [27,31]. Owing to time-inversion symmetry, the quartet and phase-MAR currents have to be antisymmetric with respect to inverting both variables ϕQ and V. The quartet current is antisymmetric in phase and symmetric in voltage, whereas the phase-MAR current is symmetric in phase and antisymmetric in voltage. This duality is reminiscent of the tunnel junction treated by Josephson [58] in his seminal work, concerning the DC current and the phase-sensitive quasiparticle current; the latter is AC in a two-terminal junction, but can become DC in a multiterminal one. Regarding experiments, an important question is the interpretation of transport anomalies observed when a TTJ is biased at the voltages 0, V, −V [43,45,47,53-55]. A conservative explanation involves the synchronization of the AC Josephson currents flowing across each of the junctions polarized at V and −V, respectively [59]. This mechanism is electromagnetic in nature and involves the impedance (or the photon modes) of the whole circuit, including the junction.
Minimal models involve an adiabatic dependence of the currents on time-dependent phases, in a way similar to the standard treatment of Shapiro resonances [60]. This can be done in the presence of an external environment described by a circuit impedance, which includes the resistive part of the junction itself, within RSJ-related models [59]. This qualitatively accounts for the DC-current features observed in TTJs [46,48-50,53], but is not a proof of the physical relevance of such a description. For instance, the zero-frequency current-current cross-correlations [45] can hardly be interpreted in terms of the RSJ model, and a quite specific frequency dependence of the device's external circuit impedance would have to be invoked to interpret a recent four-terminal experiment as originating from an RSJ model [47]. Still, complementary experiments would be important to ascertain the mesoscopic nature of multipair processes. A first requisite is control over the quartet phase, which can be used to prove the coherence of the multipair supercurrent. Such phase coherence might be present in an extrinsic synchronization scenario, although hampered by decoherence mechanisms due to the environment itself. On the contrary, the phase coherence of the quartets is expected to be much more robust. To go further in discriminating between extrinsic and intrinsic mechanisms, one must take into account the high transparency of the junctions, necessary to produce mesoscopic multipair transport. The consequence is the existence of MAR processes, which in the standard two-terminal case have no explanation except with the help of subgap Andreev reflections, and thus go well beyond a phenomenological RSJ modeling. Specifically, in MTJs, the observation of phase-sensitive MARs can be taken as evidence for truly mesoscopic processes involving quartets, thus disproving any classical synchronization scenario. In this work, we propose an interferometric scheme able to control the quartet phase and, at the same time, reveal the phase-MAR component, thus proving both the phase coherence of multipair processes and their truly subgap mesoscopic nature. Following Josephson's discovery that a current must flow in an unbiased junction and depends on the phase difference between the contacts [58], SQUID setups were invented in order to control and analyze this phase sensitivity [60]. The flux dependence exhibits period hc/2e, which directly proves that the supercurrents are carried by Cooper pairs with charge 2e. Similarly, one expects interferometry to also help elucidate the mechanism of quartets in TTJs, in particular by proving that they carry a charge 4e. Yet this simple expectation meets a difficulty: a TTJ involves three terminals, two of them being biased. This prevents building a trivial generalization of the original two-terminal SQUID, which is fully equipotential. Such a device must necessarily be different from those already proposed for multijunctions at equilibrium [28,44]. In this work, we describe a four-terminal scheme building a true quartet-SQUID. The key is to connect two TTJs in parallel by their unbiased as well as their biased terminals, in order to close them in a double-TTJ loop. Cooper pairs injected into the quartet-SQUID at voltage V = 0 can cross either TTJ as quartets, picking up the quartet phase of each TTJ, and recombine in the common outputs at voltages V and −V. The design encloses two loops instead of one.
Generalizing the standard SQUID argument in the presence of a magnetic flux shows that this imposes a difference between the quartet phases of the two TTJs, thus achieving a perfect parallel with an ordinary SQUID. This scheme allows analyzing the sensitivity of the quartet mode to the voltage, as a new control parameter for a DC supercurrent. Microscopic models show that this dependence is not monotonic, owing to nonadiabatic transitions between Andreev levels. Moreover, it can switch from a generic π-junction behavior (in the perturbative and low-voltage case) to a 0-junction one. Such evidence goes beyond classical synchronization scenarios, unless one assumes, ad hoc, an unlikely voltage (i.e., AC Josephson frequency) dependence of the circuit impedance. The proposed quartet-SQUID also allows exploiting the interplay between quartets and phase-sensitive MARs. Separation between these two distinct processes could in principle be achieved in ideally symmetrical TTJs. More generally, the different phase symmetries of the quartet and phase-sensitive MAR currents result in a phase lapse in the periodic flux response of the quartet-SQUID. Measuring this phase lapse quantifies the presence of phase-MARs in sufficiently transparent junctions. Phase-MARs are mesoscopic and involve quartet excitation amplitudes; they therefore provide the necessary proof that truly new physics is involved in TTJs.

Three-terminal junction quartet-SQUID: The principle of the quartet-SQUID is to make two TTJs interfere with each other by joining their biased arms in a secondary circuit, as pictured in Figure 2. The two TTJs thus enclose a secondary loop with two branches, at the voltages V (hereafter denoted the "L-branch") and −V (hereafter denoted the "R-branch"), threaded by a flux φ. The main loop is threaded by a flux Φ. The two loops are separated by the L branch (see Figure 2). The total current is injected as Itot = I1M + I2M, where I1M and I2M are the currents entering each of the TTJs from the unbiased branch and eventually exiting in the biased branches (secondary circuit) as IL and IR; current conservation thus reads Itot = IL + IR. Let us define the phases at the unbiased branches of TTJ1 and TTJ2 as ϕ1M and ϕ2M respectively, and the phases at the biased branches of the TTJs as (ϕ1L, ϕ1R) and (ϕ2L, ϕ2R) respectively. From previous works [26] one knows that the stationary quartet phase components are ϕiQ = ϕiL + ϕiR − 2ϕiM, while the oscillating phase components (at frequency 4eV/h) are ϕiL − ϕiR (i = 1, 2). Let us define the normalized fluxes between 0 and 2π as Φ* = (2π/φ0)Φ and φ* = (2π/φ0)φ, with φ0 = hc/2e. The fluxoid argument is applied to the main loop containing the L branch, then to the main plus secondary loop containing the R branch. This is perfectly allowed, in spite of the main loop and the biased branches not being at the same potential: the fluxoid argument takes care of the phase variation inside each superconductor, whatever its potential. The supercurrent circulation in the bulk of each superconductor is assumed to be zero, as for a thick superconductor; see Ref. 60. The presence of voltage biases between the different superconductors only enters in the phase differences at the junctions, which can depend on time in the present scheme, with frequency 2eV/h.
The fluxoid argument [60] amounts to equating, on both paths, the sum of the phase differences at the junctions to the normalized flux in the loop (modulo 2π), which yields two loop relations. Taking the difference between these two equations, one obtains a relation between the oscillating phase components at the two TTJs. This central result shows that, like an ordinary SQUID, the interferometer imposes a phase difference between the stationary quartet phases at the two TTJs. Because of the (L, R) symmetry of the quartet current, the corresponding flux is the arithmetic mean of the fluxes delimited by the L (i.e., Φ*) and the R (i.e., Φ* + φ*) branches. Interestingly, if the TTJs are symmetric under exchange of their contacts to branches L and R, the currents I1,2M entering the TTJs are pure quartet currents, i.e., I1M = I1Q(ϕ1Q) and I2M = I2Q(ϕ2Q). In turn, a pure MAR current IL − IR flows between branches L and R, thus in the secondary circuit. In this case, a Cooper pair current circulates in the main loop, while the secondary one contains a superposition of a quartet current (flowing as parallel Cooper pair currents in the L and R branches, thus insensitive to the flux φ*) and a circulating MAR current (sensitive to φ* via its phase-MAR component). In the general case of asymmetric TTJs, all the currents I1,2M and IL,R contain components of both quartet and MAR origin. One can carry the analysis further in the simplifying case of weak transparencies. Noting that the quartet and MAR components are respectively odd and even in the quartet phases, one can write the currents as in Eqs. (9)-(10), where the first terms are the quartet currents, with "critical currents" IQci; the second terms are the phase-averaged MAR components, including the phase-independent two-terminal MAR processes; and the last terms contain the phase-sensitive MAR components, with "critical currents" IMARci. The critical current values defining the amplitude of the phase oscillations of the quartet and MAR currents are voltage-sensitive and in general vary nonmonotonically with V [31,37,38,42,57]. The sine and cosine dependences of the respective quartet and MAR currents stem from their symmetry in phase. Such expressions can be checked by microscopic calculations in the low-transparency case [31]. From Eqs. (1), (6), (9) and (10), the total current injected in this quartet-SQUID can be written, omitting the voltage sensitivities and with (i = 1, 2), as the sum of (i) a phase-independent MAR current and (ii) a typical SQUID current, which depends on the quartet phase ϕ1Q and on the effective flux Φ* + φ*/2, with phase lapses α1,2 that measure the ratio of the phase-sensitive MAR currents to the quartet currents. As in a usual SQUID, maximizing the total current with respect to the (quartet) phase yields an expression for the critical current, Eq. (14); this relation achieves the goal of building a quartet-SQUID. As a first result, the factor 2 in the flux sensitivity, which results in a hc/4e periodicity, manifests the fact that quartets are made of two entangled Cooper pairs and carry charge 4e. Second, the phase lapses α1,2 directly contain the information about the presence or absence of phase-MARs. These phase lapses disappear in the case of TTJs with symmetric branches V, −V (α1,2 = 0) or in the unlikely case of identical TTJs (α1 = α2).
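The structure of this argument can be made concrete. As a hedged illustration only (the exact sign conventions depend on the chosen loop orientations, so the forms in the source may differ), one plausible writing of the two fluxoid relations is

(ϕ1L − ϕ1M) − (ϕ2L − ϕ2M) = Φ* (mod 2π),
(ϕ1R − ϕ1M) − (ϕ2R − ϕ2M) = Φ* + φ* (mod 2π).

Their difference, (ϕ1L − ϕ1R) − (ϕ2L − ϕ2R) = −φ*, constrains the oscillating components, while their sum, combined with ϕiQ = ϕiL + ϕiR − 2ϕiM, gives ϕ1Q − ϕ2Q = 2(Φ* + φ*/2), reproducing both the effective flux Φ* + φ*/2 and the factor 2 behind the hc/4e periodicity. A numeric sketch of the resulting interference pattern, assuming the simple harmonic forms suggested above (the amplitudes and phase lapses below are arbitrary illustration values, not microscopic results), is:

```python
import numpy as np

# Illustrative quartet critical currents and phase lapses of the two TTJs
IQ1, IQ2 = 1.0, 0.6          # arbitrary units
alpha1, alpha2 = 0.3, 0.0    # phase lapses from the phase-MAR admixture

def critical_current(flux):
    """SQUID-like critical current vs normalized flux (flux = Phi* + phi*/2).

    The factor 2 in the cosine doubles the flux sensitivity, so the pattern
    repeats with period pi in the normalized flux, i.e. hc/4e in real flux;
    alpha1 - alpha2 shifts the whole pattern (the measurable phase lapse).
    """
    return np.sqrt(IQ1**2 + IQ2**2
                   + 2 * IQ1 * IQ2 * np.cos(2 * flux + alpha1 - alpha2))

flux = np.linspace(0, 2 * np.pi, 401)
Ic = critical_current(flux)
print(Ic[0], Ic[200])   # equal values one hc/4e period (pi) apart
```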
In experiments performed at low voltage and in incoherent diffusive regimes, the MAR currents are negligible, and the quartet-SQUID gives direct access to the pure quartet currents. The above discussion is not restricted to harmonic dependences of the quartet and MAR currents on phase. In resonant-dot models, nonharmonic behavior is easily obtained, and the quartet current can be quite large, actually comparable to the ordinary Josephson current of a two-terminal junction under the same conditions [27,31].

Exploring the voltage dependence: from "0−" to "π−" junction: Having a quartet-SQUID in hand allows a thorough study of the dependence of the quartet (and phase-MAR) currents on voltage, as a new control parameter for DC Josephson currents. Focusing on the quartet current, different models, suited to different types of junctions (single- or many-level quantum dot, or diffusive metallic), lead to the same conclusion: the quartet current-phase characteristic changes sign several times with voltage, owing to nonadiabatic transitions between Andreev levels, triggered by the voltage via the running phase (ϕL − ϕR)(t) at frequency 4eV/h [31,37,38,42,57]. This means that, in terms of the quartet current component, a TTJ can be either a "0−" or a "π−" junction with respect to the quartet phase ϕQ. The same occurs with the phase-MAR current component, which also changes sign, but at different voltages. The superposition of the quartet and phase-MAR components actually makes a generic TTJ a "θ-junction". More generally, the characteristics of a TTJ (transparency, asymmetries between the three contacts, degree of decoherence) all conspire to shift or even suppress the sign changes. For instance, if the couplings to the biased terminals are much smaller than that to the unbiased terminal, the quartet TTJ current keeps a "π−" junction character at low and intermediate voltages [57]. Focusing on quartets only, in the case where backgates allow separate control of the transparencies of the different contacts, one can reach a situation where, for a given voltage, the pair of TTJs of the SQUID are both "0−" (or both "π−") junctions, or one is a "0−" and the other a "π−" junction. This is strongly reminiscent of the experiments performed with carbon nanotubes [61] (nanoSQUID), where the mechanism for the "0−" to "π−" transition is instead the Coulomb interaction and the gate control of the nanotube junctions. In addition, "0−" to "π−" transitions have also been observed in superconductor-ferromagnet-superconductor Josephson junctions [62]. To illustrate the possibilities of such a quartet-SQUID, let us assume that TTJ1 is fully symmetric and resonant, with high quartet critical currents and several sign changes as V is increased from 0 to 2∆. On the contrary, TTJ2 couples weakly but equally to the L and R terminals. This suppresses the MAR component in the SQUID current and leaves us with a very asymmetric quartet-SQUID, with currents of the form of Eq. (15) (neglecting anharmonicity in this example) and |Ic1(V)| >> |Ic2(V)|. As said above, TTJ2 remains a "π−" junction, so that Ic2 < 0, while the sign of Ic1 depends on V. Following the classical argument for an asymmetric SQUID, one first maximizes Itot ∼ Ic1 sin(ϕ1Q) with respect to ϕ1Q, which yields ϕ1Q ∼ π/2 if Ic1 > 0 and ϕ1Q ∼ 3π/2 if Ic1 < 0. Inserting this value into the small second term of Eq. (15) yields a contribution whose ± sign depends on the "0−" or "π−" character of TTJ1. First, this reconstructs the current-phase relation of TTJ2, including the sign of Ic2.
Second, as V is swept upward from 0, the sign changes of TTJ1 show up as π-shifts in the flux dependence of Itot. As another example, Eq. (14) shows that phase-MARs can be investigated in TTJ1 alone, provided TTJ2 is symmetric in (L, R), so that α2 = 0. The relative amplitude of phase-MARs and quartets in TTJ1 is then directly reflected in the shift α1 of the total current versus flux dependence. In the generic case of nonsymmetric TTJs, one instead measures the difference α1 − α2 of the two TTJ phase lapses. An additional piece of information can also be gained by measuring the currents in the biased loop, i.e., the combination IL − IR, which, contrary to Itot, eliminates the quartet components at low transparency and is thus sensitive to the MAR currents only.

Conclusion: We have proposed a quartet-SQUID generalizing the standard SQUID geometry to make the quartet and phase-sensitive MAR currents interfere under the control of a magnetic flux. The periodicity of the flux dependence of the total critical current through the SQUID reflects the quartet charge 4e. In addition, the distinct phase symmetries of the two current components imply a phase lapse in the flux sensitivity of the critical current of the interferometer, which allows the phase-MARs to be quantified with respect to the quartet current. Finally, phase-MARs are a consequence of both quartet emission and coherent subgap transport; thus, they provide evidence against scenarios based on extrinsic synchronization, via the outer circuit, of junctions described by an adiabatic current-phase relation. A full quantitative analysis and comparison with future experiments requires injecting into the present description the expressions for the quartet and MAR currents of each TTJ, obtained from microscopic theories. The principle of the present quartet-SQUID can obviously be generalized to higher-order multipair transport in MTJs with four or more terminals. R.M. acknowledges support from the French National Research Agency (ANR) in the framework of the Graphmon project (ANR-19-CE47-0007).
2023-01-30T06:42:07.561Z
2023-01-27T00:00:00.000
{ "year": 2023, "sha1": "22e9321cb0f886a8ddd5ff2fd265b6610064c19b", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "83c28f28f62cef35a36f5199913295148829b247", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
247223056
pes2o/s2orc
v3-fos-license
Modeling of an On-Orbit Maintenance Robotic Arm Test-Bed
This paper focuses on the development of a ground-based test-bed to analyze the complexities of contact dynamics between multibody systems in space. The test-bed consists of an air-bearing platform equipped with a 7-degree-of-freedom (one degree per revolute joint) robotic arm, which acts as the servicing satellite. The dynamics of the manipulator on the platform are modeled as an aid for the analysis and design of stabilizing control algorithms suited for autonomous on-orbit servicing missions. The dynamics are represented analytically using a recursive Newton-Euler multibody method with D-H parameters derived from the physical properties of the arm and platform. In addition, the Product of Exponentials (PoE) method is employed to serve as a comparison with the D-H parameters approach. Finally, an independent numerical simulation created in the SimScape modeling environment is also presented as a means of verifying the accuracy of the recursive model and the PoE approach. The results from both models and SimScape are then validated through comparison with internal measurement data taken from the robotic arm itself.

INTRODUCTION
As humans endeavor to conduct missions of increasing complexity, both in Earth orbit and beyond, operations such as asteroid mining, space debris removal, and on-orbit servicing will have to be performed routinely. To model the complexities of the contact dynamics involved in these procedures, a robotics test-bed design is proposed consisting of two interacting robotic arms. The first robotic arm is mounted on an air-bearing platform, acting as the servicing satellite, while the second arm is stationary, acting as the client spacecraft. This test-bed environment is essential to continue the innovation process and improve space exploration, as there has been increasing interest in servicing missions. By testing here on Earth, the dangers of on-orbit servicing can be evaluated and assessed. Different examples of robotic systems in space can be found in the literature [1]; three of them reside on the International Space Station (ISS): the Canadian Mobile Servicing System (MSS), used to perform maintenance on the station and aid approach maneuvers; the Japanese (JAXA) Experiment Module Remote Manipulator System (JEMRMS), used for experimentation and payload maneuvers; and the European Space Agency (ESA) European Robotic Arm (ERA), used for maintenance and extravehicular activity support. These examples are all implemented on systems with considerably more mass and volume than the manipulator. However, with the current trend toward miniaturization, smaller, more agile spacecraft will need to carry out similar missions, as has been explored by Antonello [2] using thrusters and by Schwartz [3] for decentralized control. Because of the satellite's small size, when the robotic arm is actuated in orbit the satellite's attitude will change as a consequence of the motion, since there are no external influences to react against. This effect is undesirable, as it can cause unwanted maneuvers that could have drastic effects on communication and power generation, and could even compromise the mission. Using the air-bearing test-bed, these dynamics can be studied and their effects better understood, making it possible to develop controllers that reject such undesired phenomena using different control approaches.
Each robotic arm on the air-bearing test-bed features seven single-degree-of-freedom revolute joints. When mounted on the air-bearing platform, the assembly gains an additional three degrees of freedom due to the spherical air bearing acting as a universal joint. Because of its kinematic complexity, a recursive method is used to model the system. To generate the equations of motion, the kinematic properties are propagated in a forward recursion from the first to the last link to generate velocities and accelerations. The process is then conducted in reverse, from the last to the first link, to determine the forces and torques at each joint. For validation purposes, a numerical simulation was created through the use of MATLAB® and the SimScape™ Multibody environment [4]. Torque results generated by both the recursive (D-H and PoE) and numerical models will be compared. This model is now used in both the synthesis and analysis of control algorithms for the simulator, as well as a design aid for the platform itself. The implementation of these control algorithms is outlined in Korczyk's work [5].

ROBOT ARM DYNAMICS
To analyze the dynamics of the robotic arm, the mathematical model formulation described by Ploen [6], which was developed originally by Park [7], was used, yielding a recursive algorithm for an open-chain serial manipulator of the standard form M(q)q̈ + C(q, q̇)q̇ + φ(q) = τ, where M(q) denotes the mass matrix, C(q, q̇) the Coriolis/centrifugal matrix, φ(q) the gravity terms, and τ the applied torques at each joint. The S matrix is composed of smaller vectors s_i, each formed from the zero vector and the unit vector along the joint's axis of rotation [5]. This method was developed using a Lie group algebra formulation, so the dynamics of the arm can be differentiated in a straightforward manner, because the method is based solely on the matrix exponential, a basic mathematical primitive [8]. The Product of Exponentials forward kinematics (PoE-FK) formulation is given by [9] T_sn = e^([S1]θ1) e^([S2]θ2) ⋯ e^([Sn]θn) M_sn, where T_sn ∈ SE(3) is the configuration of the n-th point on the manipulator; M_sn ∈ SE(3) is the n-th point's home configuration; and S_i ∈ R^6 is a "unit" screw axis represented in the inertial (space) frame, such that ||ω|| = 1 and v is the linear velocity at the inertial-frame origin, expressed in the inertial frame, produced purely by the rotation about the i-th screw axis (v = −ω × r_q, where r_q is a point on the screw axis). The screw axes are defined by the rolling and pitching joints of the Sawyer robotic manipulator, which makes the forward kinematics simpler to implement, since, compared with D-H parameters, PoE does not require initializing frames for each joint; the screw axes are defined directly by the physical locations of the joints. Square brackets around a screw axis denote the skew-symmetric mapping of its angular part, assembled with its linear part into a 4×4 se(3) matrix, and PoE-FK applies the matrix exponential to these mapped axes. The PoE inverse dynamics algorithm employs the formulation [9] τ = M(θ)θ̈ + h(θ, θ̇), where θ ∈ R^n is the vector of joint variables, τ ∈ R^n is the vector of joint torques, M(θ) ∈ R^(n×n) is the symmetric positive-definite configuration-dependent mass matrix, and h(θ, θ̇) ∈ R^n combines the centripetal, Coriolis, gravity, and friction terms that depend on θ and θ̇. An exhaustive derivation of the M(θ) and h(θ, θ̇) matrices is outlined in Malik's work [9].
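A compact numeric sketch of the PoE forward kinematics machinery is given below. The screw axes and home configuration are placeholders for a generic two-joint arm, not the actual Sawyer values; only the exponential-map mechanics is the point here.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    """3x3 skew-symmetric matrix [w] of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def se3(S):
    """Map a 6-vector screw axis S = (w, v) into its 4x4 se(3) matrix."""
    M = np.zeros((4, 4))
    M[:3, :3] = skew(S[:3])
    M[:3, 3] = S[3:]
    return M

def poe_fk(M_home, screws, theta):
    """T = e^([S1] th1) ... e^([Sn] thn) M, all screws in the space frame."""
    T = np.eye(4)
    for S, th in zip(screws, theta):
        T = T @ expm(se3(S) * th)
    return T @ M_home

# Placeholder example: a z-axis joint at the origin and a y-axis joint
# offset 0.3 m along x (v = -w x q for the point q = (0.3, 0, 0)).
S1 = np.array([0, 0, 1, 0, 0, 0])
S2 = np.array([0, 1, 0, 0, 0, 0.3])
M_home = np.eye(4)
M_home[0, 3] = 0.6           # end effector 0.6 m along x in the home pose
print(poe_fk(M_home, [S1, S2], [np.pi / 2, 0.0]))
```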
MATLAB SIMULATION OF ROBOTIC ARM
Using this recursive method, the torques at each joint can be calculated using the D-H parameters approach of Eq. (1) or the PoE approach described by Eq. (13). To generate the torques, the angular positions, velocities, and accelerations of each joint must be specified. The specific trajectories are provided as functions of time, as seen in Table 1, and result in the torques plotted in Figure 2 and Figure 4. Modeled in SimScape™, the numerical simulation of the dynamics described analytically by the D-H parameters model generates identical results, as shown by subtracting the results of the D-H parameters method from those of the numerical simulation (Figure 3). The PoE method, on the other hand, shows a different torque profile, as demonstrated in Figure 5, which indicates that the PoE approach computes the torque profiles differently from the SimScape™ and D-H approaches. This comparison was performed to assess the veracity of both mathematical approaches: the D-H and PoE methods were coded entirely with functions developed by the team, exclusively in the MATLAB code editor, whereas the SimScape™ numerical method described the algorithm graphically in the Simulink environment through a modular block approach.

VALIDATION OF THE MODEL
To ensure the validity of the D-H parameters approach and the PoE approach, an experiment was conducted to compare the simulated torque trajectories with experimental ones. The comparison demonstrated the accuracy of both methods, and it was observed that the PoE approach is more accurate than the D-H parameters approach. The trajectory for these tests was generated by commanding the robot to move from its zero position: first, all links were configured to extend to the robot's furthest reach in the x-direction (defined as the rest position); afterward, all links were configured to extend to the robot's furthest reach in the upward pose along the z-direction. The joint position, velocity, acceleration, and torque profiles corresponding to this motion are shown in Figures 6-9. The simulated torque profiles of the D-H and PoE approaches are shown in Figures 10 and 11, respectively. The error between the models and the robot can be determined by subtracting the simulation profiles from the robot-generated torque profiles, as shown in Figures 12-18. The PoE error (shown in red) is smaller than the D-H parameters error on all of the joints except the first, where both methods demonstrated exactly the same degree of accuracy. It is worth mentioning that the PoE method exhibits exceptional accuracy in all pitching joints (joints 2, 4, and 6), whereas the D-H parameters method demonstrates a high margin of error, up to 7 N·m in joint 2 (Figure 13). One possible explanation for this error is the omission of the frictional and damping effects of the joints: these exist in the real robot and create torques that resist motion, but they are not represented in the D-H, PoE, or SimScape™ models. The modeling of these phenomena is beyond the scope of this initial research but is being explored to increase the model fidelity of future iterations [10,11,12]. The largest error occurs in joint two (Figure 13). Joint two is the first pitching joint, meaning it is the first joint that resists gravitational effects while moving the subsequent masses.
As joint number two is required to move such a large amount of mass, it is expected that this joint will most likely produce the largest torque, which is confirmed by the predictions of the D-H and PoE algorithms (Figures 10 and 11) and shown by the Sawyer torque profile (Figure 9). While the profiles appear similar in shape, the magnitudes are not quite exact; this can be attributed to the lack of damping and friction modeling in the developed algorithms.

PLATFORM DYNAMICS
To confidently design a controller for the air-bearing platform, the dynamics of the robotic arm alone are not enough; the dynamical motion of the air-bearing platform as a whole system must also be described. The recursive algorithm outlined by Ploen will not work for the entire system due to the addition of the spherical universal joint, which cannot be properly modeled under that algorithm. While the spherical air bearing can be kinematically described as a system of three coincident revolute joints, each actuating about a different axis, describing these joints in the recursive algorithm is not as trivial as it sounds, resulting in singularities and incorrect results. Instead, an analytical dynamic approach was taken to model the system, using the definition of angular momentum [13] to develop the equations of motion about the spherical air bearing. To begin with, the assumption was made that the spherical joint is locked in place and therefore cannot rotate. In other words, the torque due to the dynamical motion of the robot and its control box is determined about a fixed point. The torque is then determined at the spherical air bearing, point o, due to the motion and weight of each link. To simplify the problem, this process can first be conducted one link at a time, as seen in Figure 19, and then built up to seven links to better resemble the dynamics of the system. To correctly represent the inertia of each link about the spherical air bearing, the parallel axis theorem [13] was used. The angular momentum of the i-th link about the spherical air bearing, expressed in the body frame, can then be determined from the link's inertia about point o and its angular velocity, with the position of the center of mass, B r_i, entering through the parallel axis theorem. Using the transport theorem, the torque due to rotational motion can then be determined. Notice that the time derivative of the inertia in the body frame is nonzero, as the parallel axis theorem brings the position vector into the inertia calculation; when the expression is expanded, the time derivative of the inertia about the center of mass goes to zero, and the rotational torque follows as the time derivative of the angular momentum. However, this is not the only torque acting on the system: there is also torque due to the weight of each link and of the control box on the other side of the platform. This torque is calculated using the cross product of the center-of-mass position vector with the weight vector. The combined gravitational and dynamic torques give the total torque at the air bearing. There are seven links in the robotic arm, and the control box can be considered an eighth link, although it does not move dynamically, meaning its angular momentum derivative vanishes, N Ḣ_o^8 → 0.

AIR-BEARING WITH ROBOTIC ARM SIMULATION
Using SimScape™, the system can again be modeled with the air-bearing platform as an interface between the robot and the inertial frame (Figure 20), meaning that there is a displacement in the model from the inertial frame to the robot. The spherical air bearing is modeled as a gimbal joint (Figure 21), with the trajectory as the input.
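Before turning to the simulation results, a minimal numeric sketch of the single-link torque computation about the fixed bearing point o is given below. The mass, inertia, and motion values are placeholders, and the body-frame time derivative is taken by finite differences; this illustrates the parallel-axis and transport-theorem steps above, not the test-bed's actual model.

```python
import numpy as np

m = 2.0                              # link mass, kg (placeholder)
I_c = np.diag([0.02, 0.02, 0.005])   # inertia about the link c.o.m., body frame
g_B = np.array([0.0, 0.0, -9.81])    # gravity expressed in the body frame

def inertia_about_o(r):
    """Parallel axis theorem: I_o = I_c + m((r.r)E - r r^T) = I_c + m [r][r]^T."""
    rx = np.array([[0, -r[2], r[1]],
                   [r[2], 0, -r[0]],
                   [-r[1], r[0], 0]])
    return I_c + m * (rx @ rx.T)

def torque_about_o(r, w, v, dt=1e-3):
    """Dynamic torque dH/dt via the transport theorem, plus gravity torque."""
    H = lambda rr: inertia_about_o(rr) @ w
    # Body-frame derivative of H by central differences (c.o.m. moving at v)
    dH = (H(r + v * dt) - H(r - v * dt)) / (2 * dt)
    dH += np.cross(w, H(r))           # transport term: w x H
    return dH + np.cross(r, m * g_B)  # add the torque of the link's weight

r = np.array([0.3, 0.0, 0.1])        # c.o.m. position of the link from point o
w = np.array([0.0, 0.2, 0.0])        # link angular velocity, rad/s
v = np.array([0.0, 0.05, 0.0])       # c.o.m. velocity, m/s
print(torque_about_o(r, w, v))
```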
The trajectory given in Table 1 serves as an input to both the theoretical and SimScape™ simulations. The resulting torques about the x-, y-, and z-axes are shown in Figure 22. The error between the theoretical model and the SimScape™ simulation is negligible, as demonstrated in Figure 23. This error calculation is generated by subtracting the theoretical model results from the simulation results. To ensure the accuracy of the analytical dynamic models (D-H parameters and PoE), the methods were compared with a numerical simulation developed in the SimScape™ Multibody environment and with experimental results obtained from experiments with the Sawyer robotic arm. The comparison showed that the PoE method calculates torque profiles differently from the D-H and SimScape™ approaches. Nevertheless, the PoE approach was more accurate when compared against the experimental results, where the D-H and SimScape™ methods were less accurate, demonstrating that the PoE approach is ultimately more accurate for inverse dynamics computation. The analytical model of the dynamics of the robotic arm on the air-bearing table was successfully derived and compared with the SimScape™ simulation, which resulted in good correspondence between the two. The accuracy of the proposed analytical model is planned to be improved by adding damping and frictional effects. The proposed air-bearing test-bed can be used to test relevant space servicing maneuvers and control algorithms for disturbance rejection and attitude control. It is planned to finish manufacturing the air-bearing table to experimentally test these controllers based on the derived analytical model.
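As a concrete note on the error metric used throughout these comparisons, the brief sketch below (an illustrative Python fragment, not the authors' code) subtracts one torque history from another on a common time grid and reports a per-joint RMS error:

```python
import numpy as np

def torque_error(tau_model, tau_ref):
    """Per-sample error and per-joint RMS between two torque histories.

    Both arguments are assumed to be arrays of shape (n_samples, n_joints)
    sampled on the same time grid.
    """
    err = np.asarray(tau_model) - np.asarray(tau_ref)  # error profile per joint
    rms = np.sqrt(np.mean(err**2, axis=0))             # one RMS value per joint
    return err, rms
```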
Organizational Capabilities and Social Entrepreneurship: A Fuzzy-set Approach Purpose: The purpose of this paper is to identify the leading organizational capabilities of social enterprises and their empirical verification in terms of their simultaneous impact on social entrepreneurship. Design/Methodology/Approach: In this paper, a fuzzy set Qualitative Comparative Analysis (fs/QCA) was used to empirically analyze the complex relationships between a set of organizational capabilities and social entrepreneurship. These relationships were analyzed using data from selected social enterprises in Poland. Findings: The results indicated that there was no single condition that necessarily and exclusively contributed to high or low social entrepreneurship. However, the sufficiency analysis performed revealed several configurations of conditions (organizational capabilities) that lead to high and low outcomes for social entrepreneurship. Practical Implications: The main achievement of this research is the discovery of two configurations that lead to a high level of social entrepreneurship and one configuration of a low level of social entrepreneurship. This result is important for practice as it shows managers different combinations leading to social entrepreneurship. Importantly, by focusing on combining different organizational capabilities, it is possible to help formalization and encourage social entrepreneurship. Originality/Value: This paper not only presents the different organizational capabilities that influence social entrepreneurship, but also tries to find out how the interplay of these different capabilities creates alternative configurations that contribute to social entrepreneurship. Introduction Social entrepreneurship is a sub-discipline within entrepreneurship which remains a complex and still poorly developed and understood phenomenon (Rey-Martí et al., 2016). Social entrepreneurs undertake various activities aimed at introducing fundamental social changes that are transformative and innovative in nature (Zhang and Swanson, 2013). However, like other entrepreneurs, social entrepreneurs need to source valuable resources and develop capabilities to create sustainable and profitable organizations (Renko, 2013). It is important to note that social enterprises face significant resource constraints due to the fact that they operate in an environment that makes it difficult to obtain resources and their core social mission in many cases causes them to resign from higher margins in order to reach more beneficiaries (Desa and Basu, 2013). As a result, many social enterprises are unable to solve large-scale problems and thus the scale of their social impact is significantly limited (Renko, 2013;Smith et al., 2016). Despite previous suggestions in the literature for undertaking research focusing on the 'enterprise' side of social enterprises to understand how differences in their capabilities lead to differences in their social entrepreneurship, few empirical studies have used a resource-based approach (RBV) to investigate the scale of social entrepreneurship of these enterprises. Taking an RBV perspective, social enterprises are organizations whose scale of social impact depends on their ability to create, combine, and leverage resources and capabilities. In relation to social entrepreneurship, RBV creates a framework for understanding how resources and capabilities increase a company's competencies and enable it to more effectively serve the target market (Desa and Basu, 2013). 
It should also be emphasized that, in adopting the more radical alternative to RBV (Bel and Dyck, 2011), attention is focused on the social enterprise's ability to exert social influence rather than on financial performance. Thus, by adopting the RBV extension for social entrepreneurship, the study may include the identification of capabilities that contribute to the achievement of social impact through the entrepreneurial activities of the social enterprise. Consequently, the aim of this research is to identify the leading capabilities of social enterprises and to verify them empirically in terms of their simultaneous impact on social entrepreneurship. This study attempts to answer the following research questions: "What are the basic capabilities for social entrepreneurship?" and "How do these capabilities combine to create alternative configurations (pathways) to social entrepreneurship?". The study used a fuzzy set qualitative comparative analysis (fs/QCA), a set theory approach suitable for studying complex relationships, on a sample of 83 Polish social enterprises. This approach enables the study of various social capabilities, such as the capability to engage stakeholders, the capability to earn income, and the capability to provide mission-oriented management of social enterprises, in an interdependent manner. Rather than estimating the average net effect of particular capabilities, the study assesses how multiple alternative configurations of these lead to low or high social entrepreneurship. The structure of this study is as follows: Section 2 explains the theoretical framework of social entrepreneurship and capabilities of social enterprises relevant to the development of the research model. Section 3 describes the study's method and data. Section 4 presents the empirical results. Section 5 discusses these results. Section 6 offers conclusions and highlights some limitations and future research opportunities. Theoretical Background The phenomenon of social entrepreneurship is an innovative area of research (Kraus et al., 2014); however, the literature still lacks a common concept of who a social entrepreneur is. This, in turn, raises questions about which social or profit-oriented activities fall within the spectrum of social entrepreneurship (Abu-Saifan, 2012). There are essentially three perspectives in discussions of social entrepreneurship, namely, the pursuit of both social and financial outcomes, the necessity of an innovative spirit, and the adoption of commercial activities to generate revenue. In the first perspective, reference is made to the main goals of social entrepreneurship: the efforts of entrepreneurs to achieve social justice and ensure a decent quality of life for all (Thake and Zadek, 1997); a source of sustainable competitive advantage over time that enables the fulfilment of the social mission (Weerawardena and Mort, 2006); or the identification of a situation that excludes a group of people who lack the resources or capabilities required for a decent quality of life, and the possibility of solving this problem by creating a company (Peredo and McLean, 2006). The second perspective refers to an innovative approach to achieving desired goals. In turn, in the third perspective, it is emphasized that social entrepreneurs disseminate their socially innovative models through market-oriented activities, e.g., by creating alliances and partnerships, in order to achieve broader and more sustainable results (Huybrechts and Nicholls, 2012).
In this study, following Carraher, Welsh and Svilokos (2016), social entrepreneurship is defined as a process involving the innovative use of resources and capabilities in order to trigger social change and meet social needs in the field of sustainable social transformation, as well as to achieve the commercial or economic objectives of the enterprise. According to RBV, a competitive advantage is the result of a set of resources and competences that an enterprise possesses, as well as the managerial capability to organize and use them (Barney, 1991). While not necessarily committed to competitive advantage, social enterprises seek to build competencies that will help them serve their target market more effectively (Desa and Basu, 2013). Moreover, these companies must compete for the attention and support of stakeholders (i.e., volunteers, government, customers). As indicated, for example, by Meyskens et al. (2010), it is essential for social enterprises to acquire resources and develop the capabilities to achieve social goals, and the ability to acquire, organize, and transform a broad set of resources helps increase a social enterprise's ability to create value. There is some evidence in the literature on the benefits of such capabilities for social enterprises, but there is a need for further empirical research in this area. This study focused on three organizational capabilities, namely the capability to engage stakeholders, the capability to earn income, and the capability to provide mission-oriented management, which are potential drivers of enhanced social entrepreneurship. They also reflect theoretical advances in this area, which confirm how social entrepreneurship depends on these capabilities. The first of the analyzed capabilities relates to stakeholder engagement and is understood as the ability to effectively communicate with and engage donors, beneficiaries, customers, and the community. In their pursuit of social change, social enterprises communicate with a variety of stakeholders, which in turn can support them in overcoming various types of barriers to achieving their goals (Montgomery et al., 2012). Previous research in the literature indicates that stakeholder engagement assists social enterprises in gaining resources and legitimacy (Desa and Basu, 2013; Di Domenico et al., 2010; Miller and Wesley, 2010). To accelerate change and gain support for their mission, social enterprises use their social networks (Alvord et al., 2004). As indicated by Di Domenico et al. (2010), stakeholder engagement is important for building and fostering a strong stakeholder network which, in turn, offers social enterprises the opportunity to increase their outreach and impact. The capability to engage stakeholders has also been shown to support social entrepreneurship in responding to external pressures and incentives set by major stakeholders such as partners or customers. Given the importance of income streams that enable social enterprises to reduce their dependence on donations in order to remain profitable (Gras and Mendoza-Abarca, 2014; Swanson and Zhang, 2010; Zhang and Swanson, 2013), another capability included in the study was the capability to earn income. Social enterprises, despite the important mission they serve, often have problems with financing their activities. For this reason, some social enterprises seek to generate income streams that reduce their dependence on philanthropy (Zhang and Swanson, 2013).
The capability to earn income, recognized as crucial to the development of a strong business model (Dart, 2004), enables social enterprises to generate profit to finance their social activities. Earned income can come directly from the beneficiaries, in a "fee-for-service" operation (Ebrahim et al., 2014), or from wealthier customers whose purchases support charitable services to beneficiaries. Previous research has shown that social enterprises that are capable of attracting paying customers by selling their products and services are more likely to receive support for their social cause (Marquis and Park, 2014). As a result, customers of these enterprises create close relations with them, supporting them and contributing to the implementation of their social mission. Therefore, in the context of social entrepreneurship and extended RBV, generating income is an important capability contributing to the effective functioning of social enterprises, enabling them to achieve their social mission. Another analyzed capability is the capability to provide mission-oriented management. As a guiding element of the organizational philosophy, the mission can strengthen a common understanding of the role of the organization in relation to its stakeholders. As such, it presents a strong potential to support social entrepreneurship in creating common value (Grant and Sumanth, 2009). Social entrepreneurship reflects the establishment of new value creation models that have a transformative impact on society, both statically and dynamically. As shown by the research conducted by Flota Rosado and Figueroa (2016), clear and distinct missions help social enterprises to maintain their goals and objectives. The capability to provide mission-oriented management is a compass for making the right decisions in social entrepreneurship (Tate and Bals, 2016); this compass gives direction to all subsequent actions. The following proposition is consistent with this theoretical framework. Proposition: The capability to engage stakeholders, the capability to earn income, and the capability to provide mission-oriented management combine to create alternative configurations to social entrepreneurship. Based on the above proposition, relating to the set of selected organizational capabilities of significant importance for social entrepreneurship, a research model was developed to illustrate the complex causal conditions leading to the studied outcomes (Figure 1; source: own study). Methods and Materials The research sample consists of 83 social enterprises based in Poland, i.e., organizations whose main goal is to achieve a social mission through business practices (Dacin et al., 2010). The sample included the following types of social enterprises: worker cooperatives (28.9%), foundations and associations (21.7%), social integration clubs (13.3%), and sheltered employment establishments (36.1%). The data used in this study were collected through a mail survey. The questionnaire was sent to selected respondents in the first quarter of 2021. The respondents to the survey were owners, managers, or employees who had adequate knowledge about the activities and results of their enterprises. Although the sample size was small, it is appropriate for fs/QCA, as suggested by Ragin (2008). All constructs were measured using 5-point Likert-type empirically validated scales.
Respondents were asked the extent to which they agreed (1 = strongly disagree; 5 = strongly agree) with the statements. In total, there were four constructs measured with 19 questions. The list of questions and the Cronbach's alpha values for internal consistency are presented in Table 1. The individual reliability of each construct was greater than the minimum acceptable Cronbach's α of 0.7, indicating high reliability (Nunally and Bernstein, 1994).

Table 1. Constructs, measurement items, and Cronbach's α.

Social entrepreneurship (Kraus et al., 2017), α = 0.89:
Social risk-taking: 1. We are not afraid to take substantial risks when serving our social purpose. 2. Bold action is necessary to achieve our company's social mission. 3. We avoid the cautious line of action if social opportunities might be lost that way.
Social proactiveness: 4. We aim at being at the forefront of making the world a better place. 5. Our organization has a strong tendency to be ahead of others in addressing its social mission. 6. We typically initiate actions which other social enterprises/social entrepreneurs copy.
Social innovativeness: 7. Social innovation is important for our company. 8. We invest heavily in developing new ways to increase our social impact or to serve our beneficiaries. 9. In our company, new ideas to solve social problems come up very frequently.

The capability to engage stakeholders (Bloom and Smith, 2010; Bacq and Eddleston, 2016), α = 0.78: 10. We have been effective at communicating what we do to key constituencies and stakeholders. 11. We have been successful at informing the individuals we seek to serve about the value of our program for them. 12. We have been successful at informing donors and funders about the value of what we do. 13. We receive cooperative support from main stakeholders.

The capability to earn income (Bloom and Smith, 2010), α = 0.79: 14. We have generated a strong stream of revenues from products and services that we sell for a price. 15. We have found ways to finance our activities that keep us sustainable.

The capability to provide mission-oriented management (Wang, 2011): 16. We have clear missions and management philosophy. 17. We are self-motivated for social and environmental advancement. 18. Employees know and are able to interpret missions and management philosophy. 19. Employees can explain missions and management philosophy to external parties if required.

Source: Own study.

This study applied fuzzy-set Qualitative Comparative Analysis (fs/QCA) using the fs/QCA 2.5 software (Ragin and Davey, 2014) to investigate the effect of complex causal conditions (organizational capabilities) on targeted outcomes (high and low social entrepreneurship). The fs/QCA method is currently recognized as well documented (Ragin, 2008; Fiss, 2011; Woodside, 2014; Greckhamer et al., 2018; Kwiotkowska, 2018; 2020). The first step in fs/QCA is to transform the raw data into fuzzy membership scores ranging from 0 to 1 (Ragin, 2008). For this purpose, the three anchors that make up the calibration structure need to be pre-defined: the threshold for full membership (indicated by a fuzzy score of 0.95 or higher), the crossover point (indicated by a fuzzy score of 0.50), and the threshold for full non-membership (indicated by a fuzzy score of 0.05 or less). With the three anchors predetermined, the calibration procedure proceeds with the log-odds method (Ragin, 2008). In this study, the calibration values for the three threshold anchors were set at the upper 95th percentile, the median, and the lower 5th percentile. Table 2 lists the descriptive statistics of the raw data and the calibration values.
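As an illustration of the direct log-odds calibration described above, the following is a minimal Python sketch (the study itself used the fs/QCA 2.5 software; this reconstruction and its variable names are assumptions for demonstration). It maps raw scores to fuzzy membership values so that the full-membership anchor lands at about 0.95, the crossover at 0.50, and the full-non-membership anchor at about 0.05:

```python
import numpy as np

def calibrate(x, full_non, crossover, full):
    """Direct calibration: raw scores -> fuzzy membership via log-odds."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full - crossover),        # `full` maps to +3 (membership ~0.95)
        -3.0 * (crossover - x) / (crossover - full_non),   # `full_non` maps to -3 (membership ~0.05)
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# Anchors as in the study: 95th percentile, median, 5th percentile of raw data.
raw = np.array([2.1, 3.0, 3.8, 4.6])  # placeholder scores, not the study's data
scores = calibrate(raw,
                   full_non=np.percentile(raw, 5),
                   crossover=np.median(raw),
                   full=np.percentile(raw, 95))
```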
Results In this study, three conditions were analyzed. The three organizational capabilities (the capability to engage stakeholders, the capability to earn income, and the capability to provide mission-oriented management) were used as antecedent conditions, and social entrepreneurship was used as the outcome (Table 3). In the first step, a necessity analysis was carried out. If the consistency score of a condition exceeds the threshold of 0.90, the condition is regarded as a necessary condition (Ragin, 2008). The necessity analysis showed that none of the conditions exceeded a consistency score of 0.90, as shown in Table 4. In summary, it can be concluded that there is no necessary condition for high or low social entrepreneurship. Subsequently, a sufficiency analysis was performed in which the dataset frequency cutoff was set to one, meaning that each configuration with fewer than one empirical observation was considered a remainder and was not included in the analysis. This sufficiency analysis was based on complex solutions. The results of the sufficiency analysis showed two sufficient configurations of conditions for high social entrepreneurship and one for low social entrepreneurship outcomes. All combinations of conditions had a consistency value greater than 0.75, which was considered sufficient to obtain the expected outcome. The results are presented in Table 5. The solutions represent the high and low levels of each condition, and a "do not care" condition with respect to the outcomes examined (Fiss, 2007). In line with previous fs/QCA studies, these solutions can be interpreted as alternative configurations associated with the outcome (high/low social entrepreneurship). Discussion The necessity analysis showed that there was no single condition that necessarily and solely contributed to a high or low level of social entrepreneurship. In turn, the sufficiency analysis revealed several configurations of conditions that were sufficient for the expected high and low outcomes. Therefore, this discussion focuses mainly on the results of the sufficiency analysis. As the results show, there are two alternative configurations for a high level of social entrepreneurship, and one configuration was derived for a low level of social entrepreneurship. The final solution can be expressed as follows: • configurations to high social entrepreneurship: ES*MD + EI*~MD; (1) • configuration to low social entrepreneurship: ~ES*~EI (2) Note: * logical AND; + logical OR. Solution CPH1 indicates that the combination of two capabilities, namely the capability to engage stakeholders and the capability to provide mission-oriented management, leads to high social entrepreneurship. Solution CPH2 shows that the presence of the capability to earn income combined with the absence of the capability to provide mission-oriented management yields the same high level of social entrepreneurship as solution CPH1. On this basis, it can be argued that the capability of a social enterprise to engage stakeholders in its social mission, coupled with its capability to provide mission-oriented management, is critical to achieving a high level of social entrepreneurship. Alternatively, a high level of social entrepreneurship can be achieved through the capability to earn income in the absence of the capability to provide mission-oriented management.
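To make the 0.90 necessity and 0.75 sufficiency thresholds used above concrete, below is a brief Python sketch of the standard fuzzy-set consistency measures (a didactic reconstruction, not the fs/QCA 2.5 implementation used in the study); X and Y are calibrated membership vectors:

```python
import numpy as np

def consistency_sufficiency(X, Y):
    """Consistency of X as sufficient for Y: sum(min(x, y)) / sum(x)."""
    X, Y = np.asarray(X), np.asarray(Y)
    return np.minimum(X, Y).sum() / X.sum()

def consistency_necessity(X, Y):
    """Consistency of X as necessary for Y: sum(min(x, y)) / sum(y)."""
    X, Y = np.asarray(X), np.asarray(Y)
    return np.minimum(X, Y).sum() / Y.sum()

# Fuzzy AND of two conditions is the element-wise minimum and negation is
# 1 - x, so a configuration such as EI*~MD is np.minimum(EI, 1 - MD).
```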
It can therefore be concluded that the capability to provide mission-oriented management participates in the achievement of high social entrepreneurship, but, depending on the context, its presence is required in the first configuration (CPH1) and its absence in the second (CPH2), in conjunction with the capability to earn income. Moreover, the capability to provide mission-oriented management does not appear in the pathway for low social entrepreneurship. As solution CPL indicates, the combination of the absence of the capability to engage stakeholders and the absence of the capability to earn income leads to low social entrepreneurship. This paper makes the following theoretical and methodological contributions to the literature on entrepreneurship. First, this study confirms the proposition that different organizational capabilities combine to create alternative configurations to social entrepreneurship. Second, the findings in Table 5 show that most configurations consisted of multiple antecedent conditions that led to high or low social entrepreneurship. These results confirmed the principles of complexity theory that: (1) "... a simple antecedent condition is seldom sufficient for predicting high or low outcome"; and (2) "... two or more simple conditions is sufficient for a consistently high score in an outcome" (Woodside, 2014). Third, the findings show that the configurations leading to a high level of the outcome were asymmetric to those leading to a low level of the outcome, which is consistent with the causal asymmetry assumption. These empirical results provide a theoretical justification for the wide variety of social entrepreneurship and a basis for further theory building in the field of social entrepreneurship. The methodological contribution is the introduction of fs/QCA to research on the social entrepreneurship phenomenon. Previous empirical tests have assumed that individuals conform to a single dominant explanation of the "net effects" of the phenomenon and that any inconsistencies are due to random deviations. The fs/QCA, on the other hand, takes into account within-person (rather than within-sample) relationships between data and the interdependence of conditions at the case level, rather than correlations between discrete variables at the aggregate level. Conclusion In summary, social enterprises that are able to scale their social impact rely on specific capabilities, which include the capability to provide mission-oriented management, the capability to gain support and engagement from various stakeholders, and the capability to generate income. Methodologically, the analysis conducted in the paper contributes to the literature by combining the set of organizational capabilities in social entrepreneurship and simultaneously analyzing their relationships. This approach allows the application of complexity theory, which reveals a better understanding of the causal relationships regarding the combinations of conditions affecting the studied outcome. The study extends RBV to social entrepreneurship, revealing how organizational capabilities combine to lead to high levels of social entrepreneurship in social enterprises. This study indicates that managers have different combinations available for achieving a high level of social entrepreneurship. Importantly, by focusing on the combination of different organizational capabilities, it is possible to help formalize and encourage social entrepreneurship. Several limitations of this study should be noted.
Fs/QCA is helpful for exploring causal relationships with numerous interactions, but it is necessary to consider all possible configurations when using it. This, in turn, means that the data matrices grow exponentially as a function of the number of causal conditions. When considering the generalization of the findings to other contexts, it is important to keep in mind that the respondents in this study came from social enterprises located in Poland. Some of the constructs used in the study may have different meanings in different geographic settings. Thus, future research should analyze other sectors and contexts across emerging and developed economies. In addition, further research would also be valuable, e.g., a longitudinal study on a larger sample to trace how the relationships between capabilities and social entrepreneurship change over time.
The Effectiveness of Symbolic Modeling Learning to the Early Children's Expressive Language Skills INTRODUCTION Language consists of written, spoken, or signed communication based on a symbolic system. It is very important for children because they need language to describe their pasts and to plan their futures (Santrock, 2008: 67). Language skill is usually grouped into expressive and receptive skills. Fauzani (2016), in his research, concluded that expressive language skill consists of speaking and writing; it involves a transfer of meaning through symbols that are processed and expressed by children. Language skill is important and must be possessed by each individual, as stated by Amalia (2019). In her research, it is stated that speaking and language development comprise the skills to respond to what is heard, to deliver one's intentions, to follow rules, and so forth. Thus, the skill is important to master. Language skills should be mastered well by children. In their development process, children apply their cognitive skills to understand the concepts contained in their expressed utterances. This concerns children's skills in transforming the concepts in their minds into the language of symbols according to grammatical rules. Based on the results of an interview with the B-group teacher and an observation at PGRI Sarimulyo Kindergarten on Monday, January 28, 2019, it was found that the language skills of the learners needed improvement. From the teacher's observation over the last three years, the learners still found it difficult to concentrate, to listen to the story or explanation delivered by the teacher, to retell in simple language what the teacher had told, and to ask and answer questions reviewing the content and meaning of a story delivered by the teacher. From the first semester report, it could be seen that 15 learners in the B group of PGRI Sarimulyo Kindergarten had not reached the Excellent Development (BSB) category in Core Competency 3.11 (understanding the expressive language) and 4.11 (performing the expressive language skills). Of these, 2 learners reached the Expected Development (BSH) category, 11 learners reached the Early Development (MB) category, and 2 children were in the Undeveloped (BB) category. The expressive language skill criteria are: Excellent Development (BSB) when the mean is between 51.00 and 60.00; Expected Development (BSH) when the mean is between 41.00 and 50.00; Early Development (MB) when the mean is between 26.00 and 40.00; and Undeveloped (BB) when the mean is between 15.00 and 25.00. To improve language skill, appropriate stimulation is needed, with media and a suitable method according to the age and characteristics of 5-6-year-old children. Stimulating language development requires providing children with various media and applying suitable learning methods. Applicable media facilitate showing concretely what is delivered, and they function to package the delivery in a joyful and interesting way for children. Dewi (2016), in her research, states that learning media are anything that can be used to deliver messages so as to trigger the interest, thoughts, and feelings of learners in the learning process to achieve the learning objective.
The applicable media function to depict the natures and characteristics of the story told by the teacher. Dewi (2016), in her research, applied doll and puppet media as alternatives in developing early children's language skills. According to Dewi (2016), the media encourage children to get involved in the learning process, so they can be more motivated in learning. In playing the doll, the teacher can show the mimics, expressions, and natures of the characters clearly, with different voices based on the characters' natures and the conditions experienced by the characters. Therefore, children become more interested in listening to the information given by the teacher. By applying the media, the children's attention becomes more focused and the information easier to understand; because the information is delivered in an interesting and joyful way, it is more smoothly internalized and stored in the children's memories. The arguments and findings discussed were also supported by Kusdiyati (2010), whose research concluded that storytelling with a hand-doll model influenced children's Indonesian language skills. Based on these explanations, it can be concluded that children aged 5-6 years old learn more by observing and imitating what they see from the models in front of them. In this research, the investigations were conducted to analyze the effectiveness of the symbolic modeling method assisted by a doll in the expressive language learning of the B-group children in PGRI Sarimulyo Kindergarten. The model provides a joyful learning process to induce effective learning and improve the learners' expressive language skills. METHOD A pretest-posttest control group design was applied in this experimental research. The population of this research consisted of all B-class learners in PGRI Sarimulyo Kindergarten and Sarimulyo 02 PGRI Kindergarten, Winona Regency, Pati Municipality, in the academic year 2018/2019, totaling 30 learners. The selection of the B group as the research population was based on several considerations: (1) children aged 5-6 years old can collaborate, are positively interdependent, interactive, and communicative, have personal responsibility, and respect other people; (2) children aged 5-6 years old are students that have met the age requirement to join preschool activities; (3) these learners are in the transition period from the preschool level to the primary school level. The sampling guideline is based on Roscoe, which determines sample sizes for simple experimental research by rigidly controlling the sample size, from 10 up to 20 elements (Darmawan, 2016: 143). In this research, all of the children in the population were taken as the sample; therefore, the sample consisted of 30 students. The research subjects from the B group of PGRI Sarimulyo 02 Kindergarten, who had low speaking skills, numbered 15 learners and were assigned to the control group. Meanwhile, the learners of PGRI Sarimulyo Kindergarten, who also had low speaking skills, were assigned to the experimental group. This research was carried out within eight sessions. In the first session, a pretest, rather than an intervention, was given to both the experimental and control groups. The pretest was given using the most frequently used approach, a modeling method without any media. From the second session until the sixth session, an intervention was given to the experimental group.
The intervention was a symbolic modeling method assisted by a doll, while the control group was still taught with the modeling method without media. A post-test was given in the eighth session, in which the experimental group was further treated with the symbolic modeling method assisted by a doll, while the control group was still taught with the modeling method without any media. Data were collected through observation, using expressive-language observation sheets in the form of a checklist with a 4-point Likert scale (1, 2, 3, and 4). The validity test of the observation sheet instrument used expert validation to examine the content and construct validity. The instruments were constructed based on the measured aspects under certain theories and then consulted with experts (Sugiyono, 2017: 173). The experts in this research were the referred lecturers of Early Childhood Education. The observation sheet was reviewed and approved by the experts and deemed valid. The research instruments were taken from the Standard of Early Childhood Development Achievement Levels for Ages 5-6 Years, as stated in the Regulation of the Minister of Education and Culture of the Republic of Indonesia, Number 137 of 2014. The observed expressive language skills in this research were the speaking and writing of 5-6-year-old children. The descriptors of each indicator are: 1) speaking skill: communicating orally, answering more complex questions, and having more words to express; 2) writing skill: recognizing the reading and writing preparation symbols, and understanding the sounds and the letter realizations. Each indicator has three descriptor items, so the expressive language observation sheet has 12 descriptor items. During the observation process, the observers used a check mark (√) to record the obtained scores based on the arranged observation guidelines. From the observation, the effectiveness of the modeling method assisted by a doll in improving the expressive language skills of 5-6-year-old children was determined. The data in this research, concerning the expressive language of 5-6-year-old children, were analyzed statistically; the applied statistical analysis consisted of descriptive and inferential analysis. The findings can be seen from the Wilcoxon test, which shows an influence of the symbolic modeling method assisted by a doll on the expressive language of 5-6-year-old children, as proven by the improvement in the expressive language learning outcomes from the pretest to the posttest. This is also proven by the Mann-Whitney test, which shows differences in the learning outcomes between the control group students, taught with the symbolic modeling method without media, and the experimental group students, taught with the symbolic modeling method assisted by a doll. The dolls were made of paper and depicted various professions, such as farmers, teachers, and police officers, as shown in Figure 1. Figure 1. The Models of the Dolls. The teacher became the model by playing the dolls as symbols of the professions introduced to the learners. The teacher played the dolls by mimicking, expressing, and speaking as the real characters (a teacher, a farmer, and a police officer) while introducing himself, his job, and his job instruments. The expressive language skill development learning for 5-6-year-old children implementing the modeling method assisted by a doll was carried out in four stages.
The stages were: 1) attention: the teacher attracted the children's attention by introducing the doll to be used; 2) representation: the teacher showed the dolls to describe the characters they played, the tasks or affiliations of their jobs, and the characters' working places; 3) reproduction: the teacher invited learners to interact by asking and answering questions about the delivered material and gave them chances to describe the characters, such as a teacher; and 4) motivation: the teacher reinforced the learners' learning outcomes. Table 1 shows that the pretest result of the expressive language skill for the experimental group had the highest score of 30, the lowest score of 17, a mean of 23.53, and a standard deviation of 3.378. The pretest result shows that the expressive language skill of the children is categorized as Undeveloped (BB). The pretest result of the expressive language skill for the control group shows the highest score of 30, the lowest score of 17, a mean of 23.40, and a standard deviation of 4.404. This pretest result also shows that the expressive language skill of the children is categorized as Undeveloped (BB). RESULTS AND DISCUSSION The pretest results of applying the symbolic modeling method without media, as given in both the experimental and control groups, showed that the expressive language skills of the 5-6-year-old learners of PGRI Sarimulyo Kindergarten and of PGRI Sarimulyo 02 Kindergarten were still low, as shown by mean scores in the Undeveloped (BB) category. The pretest result shows that learning with the symbolic modeling method without media could not reach maximum results. The finding is in line with Dewi (2016), who concludes that modeling is a learning process of observing other people, with changes occurring after trying to imitate the model. Children's behaviors are influenced by the environment; they tend to model because they learn from what they see. These findings indicate that children learn best by observing and modeling what they see from a role model. Without a medium serving as the model, learners cannot see the described model directly, and their attention is not drawn to re-modeling what the teacher demonstrates based on the instruction. Therefore, the children's expressive language skills did not develop as expected. Table 2 describes the expressive language skill of both groups in this research. It shows the pretest result of the 15 learners' expressive language skills in the experimental group, with a mean score (M) of 23.53 and an SD of 3.378; the post-test result shows a mean score (M) of 51.80 and an SD of 2.455. Meanwhile, in the control group, the pre-test score of the expressive language skill shows a mean score (M) of 23.40 and an SD of 4.404, and the post-test score shows a mean score (M) of 27.06 and an SD of 4.41. The post-test result shows that learning in the experimental group, which received the symbolic modeling method assisted by a doll, could reach the Excellent Development (BSB) category. This could be seen from the learners' enthusiasm in keeping up with the activity, playing with the doll, and imitating the examples from the teacher. Thus, learners could express simple matters while playing with the dolls. The doll media were made from modified boxes, shaped into several characters, such as farmers, teachers, and police officers, and equipped with holders for the children to play with.
The farmer, teacher, and police officer dolls had never before been used in a learning activity inside the classroom; thus, the children were interested in playing with them. The children's oral speaking and communication skills improved, as could be seen from the way they played with the farmer, teacher, or police officer dolls along with their equipment. The conversations among the teacher, farmer, and police officer characters, in which they introduced themselves, mentioned the required tools, and told about their jobs, were conducted in simple language; thus, the children's vocabulary became more complex. The children's skills in recognizing the symbols of reading and writing preparation, as well as their understanding of sounds and letters, improved. This could be observed from their ability to name the letters and words for the farmers, teachers, and police officers on the dolls; they could name the shown letters correctly. This differs from the post-test result of the control group, which was taught with symbolic modeling without media assistance and only reached the Early Development (MB) category. The post-test result is in line with Repita (2015), who concludes that multiple modeling techniques, with the teacher as a real model and a symbolic model in the form of pictorial stories, are easily accepted by children and can thus influence them for the better. This suggests that, in administering activities for children to become skilled in speaking and writing, a proper model should be provided for them. Thus, they can easily accept and understand the delivered meaning and message, and they can readily imitate the behavior, style, or anything else the model concretely performs for them. Table 3 shows the Wilcoxon test result for the experimental group's expressive language skills in this research. The negative rank is 0.00, meaning there was no decrease from the pre-test score to the post-test score. The positive rank N = 15, meaning all learners had improving scores from the pre-test to the post-test. Ties = 0, meaning no learner had the same score on the pre-test and the post-test. Asymp. Sig. (2-tailed) = 0.01, which is less than 0.05, meaning there is a difference between the pre-test learning outcome and the post-test learning outcome. Thus, it can be concluded that symbolic modeling assisted by a doll influences the learners' expressive language skills. The Mann-Whitney test result for the 5-6-year-old children's expressive language skills in the experimental and control groups is described in Table 4, which shows the descriptive analysis of the 5-6-year-old children's expressive language skill learning outcomes. The control group consists of 15 learners with the highest score of 34, the lowest score of 19, and a mean rank of 8.00. Meanwhile, the experimental group, consisting of 15 learners, obtained the highest score of 55, the lowest score of 48, and a mean rank of 23.00. The Mann-Whitney test output for the 5-6-year-old children's expressive language shows that the Asymp. Sig. (2-tailed) is 0.000 < 0.05. It can be concluded that there is a difference between the expressive language skills of the experimental group, taught with the symbolic modeling method assisted by a doll, and those of the control group, taught with the symbolic modeling method without media.
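For readers who wish to reproduce this style of analysis, the following is a minimal Python sketch using SciPy (the study itself reports SPSS-style output; the score arrays below are placeholders, not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu

# Placeholder score arrays; the real study used N = 15 per group.
pre_experimental = np.array([17, 20, 21, 22, 23, 23, 24, 24,
                             25, 25, 26, 26, 27, 28, 30])
post_experimental = pre_experimental + np.arange(20, 35)  # illustrative gains
post_control = np.array([19, 21, 22, 23, 24, 25, 26, 27,
                         27, 28, 29, 30, 31, 32, 34])

# Within-group change (paired samples): Wilcoxon signed-rank test.
stat_w, p_w = wilcoxon(pre_experimental, post_experimental)

# Between-group post-test difference (independent samples): Mann-Whitney U test.
stat_u, p_u = mannwhitneyu(post_experimental, post_control,
                           alternative="two-sided")

print(f"Wilcoxon p = {p_w:.4f}, Mann-Whitney p = {p_u:.4f}")
```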
The output of the Wilcoxon test shows an influence of the symbolic modeling method assisted by a doll on the expressive language of 5-6-year-old children, as proven by the improvement in the expressive language learning outcomes from the pretest to the posttest. This is also proven by the Mann-Whitney test, which shows differences in the learning outcomes between the control group students, taught with the symbolic modeling method without media, and the experimental group students, taught with the symbolic modeling method assisted by a doll. Thus, it can be concluded that symbolic modeling assisted by a doll is effective in improving the expressive language skills of 5-6-year-old children. The hypothesis results are in line with Kurniawati (2016), who states that in kindergarten learning activities, media play an important role because children are in the concrete thinking period. Gerde (2012), in his research, shows that teachers in early childhood education should offer various programs, media, and writing skill models (the expressive language) for children. Writing is an activity to deliver notions, arguments, and perceptions in a printed form to communicate. This suggests that, in administering activities for children to become skilled in speaking and writing, a proper model should be provided for them, so that they can easily accept and understand the delivered meaning and message as well as what is shown by the model. The effectiveness of symbolic modeling for early children's learning has been proven before. For example, a study conducted by Indrawati (2016) was motivated by children's difficulties in their speaking skills and their low-category speaking development achievement results; that research concludes that the symbolic modeling technique could improve the B-group children's speaking skills. Table 5 summarizes the applied speaking and writing indicators for 5-6-year-old children: speaking covers communicating orally, answering more complex questions, and having more words to express, while writing covers recognizing the symbols of reading and writing preparation (IB 4) and understanding the sounds and the letter realization (IB 5). Based on Table 5, the measured aspects of spoken expressive language for 5-6-year-old children are answering more complex questions, communicating orally, and having more words to express, and the measured aspects of written expressive language are recognizing the reading and writing preparation symbols and understanding the relationship between sounds and letter realization. Speaking and writing include a composing process. Rosyidah (2012), in her research, argues that speaking is meant to express and communicate an individual's thoughts and to express notions and feelings, and that it is influenced by the listening skill. It is concluded that, for children, speaking has a broader meaning. Speaking is generally defined as the sounds of language words that can be understood by interlocutors. Child speech, however, is defined as the sounds produced by children, whether language sounds or sounds that do not belong to language, uttered by the children's speech organs. Janawati (2013), in her research, states that, conventionally, learning to write can be defined as children learning to inscribe something in a certain writing system that can be read by people who understand the system. Writing activity in kindergarten emphasizes activities to express notions, feelings, and ideas through written symbols, freely or independently, based on formal writing principles.
Writing requires fine motor skills and ocular-hand coordination to hold stationery, as well as basic writing methods involving letter perception and printed language. The learning principle for early childhood is learning while playing. This is supported by Prahesti (2016), whose research showed that word ball pools and word cards are effective for introducing initial reading and writing concepts. Kusumawati (2017), in her research, states that playing with plasticine is effective in improving the initial writing skill of early childhood. Aisy (2019) also concludes that implementing card media in reading and mathematics content areas is effective in improving the writing skills of 4-5-year-old children. Naitili (2019), in her research, concludes that the implementation of the Structural Analytic Synthetic method is effective in improving the initial writing skills of first-grade students in a primary school. Those studies are relevant because they take similar research subjects and themes, namely the language skills (initial reading and writing) of early childhood; the differences lie in the media and method applied in this research. Those findings are relevant to the research of Khasanah (2016), who states that developing early children's language skills is meant to allow them to understand the mimics, tones, and meanings of words uttered by other people when explaining or describing something. Thus, children become able to recognize and respond to other people and to develop excellent communication with the people around them. The media applied in a learning activity become meaningful when delivered through an appropriate learning method, based on the learners' characteristics and ages. There are many supporting findings in previous studies, such as the study conducted by Agus (2011) with a storybook; Nurhayati (2016) and Rahmawati (2017) with picture-word media; Janawati (2013) applying word-card learning through domino games; and Sudarta (2017) applying the jolly phonics method. According to Sudarta (2017), the applied method could stimulate the initial writing skills of early childhood. Those studies are relevant because the applied topics deal with early childhood children; the differences lie in the media and method applied in this research. A study conducted by Hill (2011) concludes that valid and practicable ecological procedures embedded in children's daily social lives are effective for measuring their early literacy. Meanwhile, Kurniawati (2016), in her research, states that a pop-up-book-based medium used with a conversational method is effective in improving 4-5-year-old children's speaking skills. Kustina (2014) and Muyasaroh (2017), in their research, conclude that children's language skills can be improved using word cards. Those studies are relevant because the applied topics deal with early childhood children; the differences lie in the media and method applied in this research. Fauzani (2016) and Farikha (2018), in their research, found that the role-playing center learning model influenced the B-group students' expressive language skills. A similar study was done by Azizah (2013), who concludes that the speech level of 5-6-year-old children is higher when taught with the macro role-playing method than with the micro role-playing method.
Those studies are relevant because their topics deal with the speaking skills of 5-6-year-old children; however, the applied learning method differs. Those three studies implemented a role-playing method to improve the learners' speaking skills, whereas in this research, the researcher examines the effectiveness of modeling methods in improving language skills. Indrawati (2016), in her study, applied the modeling technique to improve early childhood children's speaking skills. That research aimed to determine the effect of the modeling technique on B-group learners' speaking skills, and the findings showed that the modeling technique could improve the speaking skill of 5-6-year-old children. Repita (2016), in her study, states that modeling is learning done by observing, adding to or reducing the observed behaviors, generalizing across various observations, and involving cognitive processes. This is in line with Indrawati's research (2016), which concludes that the modeling technique is a learning process by observation; it allows an individual or several individuals to serve as examples, whose role is to stimulate the thoughts, attitudes, and behaviors of the observers to be imitated. By referring to this theory and the relevant studies, the researcher selected the symbolic modeling method assisted by a doll to examine its effectiveness in stimulating the expressive language of early childhood, especially 5-6-year-old children. This research showed that the symbolic modeling method assisted by a doll, as applied in the experimental group, was more effective in improving the 5-6-year-old children's expressive language skills than the symbolic modeling method without the assistance of learning media. In this research, the learners in the experimental group learned from symbols, namely dolls demonstrated by the teacher. CONCLUSION The hypothesis test results show that the applied learning model, the symbolic modeling method, has a significant influence (p < 0.05). There is an improvement in the 5-6-year-old children's expressive language skills, shown by the mean of the scores obtained on the expressive language skill indicators after the intervention falling in the Excellent Development (BSB) category.
Clinical Characteristics and Genetic Variants of a Large Cohort of Patients with Retinitis Pigmentosa Using Multimodal Imaging and Next Generation Sequencing This retrospective study identifies patients with RP at the Inherited Retinal Disease Clinic at the University of Minnesota (UMN)/M Health System who had genetic testing via next generation sequencing. A database was curated to record history and examination, genetic findings, and ocular imaging. Causative pathogenic and likely pathogenic variants were recorded. Disease status was further characterized by optical coherence tomography (OCT) and fundus autofluorescence (AF). Our study cohort included a total of 199 patients evaluated between 1 May 2015 and 5 August 2022. The cohort included 151 patients with non-syndromic RP and 48 with syndromic RP. Presenting symptoms included nyctalopia (85.4%), photosensitivity/hemeralopia (60.5%), and decreased color vision (55.8%). On average, 38.9% had visual acuity of worse than 20/80. An ellipsoid zone band width on OCT scan of less than 1500 μm was noted in 73.6%. Ninety-nine percent had fundus autofluorescence (AF) findings of a hypo- or hyper-fluorescent ring within the macula and/or peripheral hypo-AF. Of the 127 subjects who underwent genetic testing, a diagnostic pathogenic and/or likely pathogenic variant was identified in 67 (52.8%) patients; a diagnostic gene variant was identified in 33.3% of syndromic RP and 66.6% of non-syndromic RP patients. It was found that 23.6% of the cohort had negative genetic testing results or only variants of uncertain significance identified, which were deemed non-diagnostic. We concluded that patients with RP often present with advanced disease. In our population, next generation sequencing panels identified a genotype consistent with the exam in just over half the patients. Additional work will be needed to identify the underlying genetic etiology for the remainder. Introduction Retinitis pigmentosa (RP) is the most common inherited retinal dystrophy (IRD) and is associated with progressive night vision loss, visual field constriction, reduced electroretinographic responses, and reduction in visual acuity [1]. The metabolic abnormalities associated with RP affect the rod and cone photoreceptors of the retina. On clinical exam, the characteristic phenotype consists of mid-peripheral retinal pigmentary changes (bone spicules), arteriolar attenuation, and waxy disc pallor, but RP is a heterogeneous group of disorders with at least 80 causative genes [2]. Although most patients with RP present with isolated eye findings, 20-30% of patients present with syndromic RP with multiorgan involvement [3]. RP can be transmitted by autosomal dominant, autosomal recessive, or X-linked inheritance patterns. In the literature, the autosomal dominant inheritance pattern accounts for 20-25% of RP; an autosomal recessive pattern is observed in 15-20%, an X-linked pattern in 10-15%, and sporadic/simplex traits in 30% [3]. The distribution of gene prevalence varies based on the population studied. For example, a Japanese population study of 68 patients found that one-third of patients with non-syndromic autosomal recessive RP carried pathogenic gene variants in the EYS gene [4], while a similar study from a western European ancestry cohort approximated that the prevalence of EYS variants accounted for only 5% of autosomal recessive RP in a cohort of 245 patients [5]. An estimated 20-30% of those diagnosed with RP are classified as syndromic patients [3].
The most common syndromic form of RP is Usher syndrome, which accounts for about 14% of all RP patients [6]. Usher syndrome belongs to a group of ciliopathies that cause defects in ciliary protein trafficking. This condition usually affects multiple organ systems because numerous cells in the body, including the photoreceptors, have cilia. Usher syndrome is inherited in an autosomal recessive pattern and is characterized by congenital deafness and adolescent-onset rod-cone dystrophy. There are three types of Usher syndrome, each displaying a different severity of deafness and a variable vestibular response. Type 1 is the most severe, while types 2 and 3 are milder forms with a later onset of retinal degeneration. The next most frequent syndromic condition is Bardet Biedl syndrome, with a prevalence of 1/150,000 [7]. This autosomal recessive ciliopathy is characterized by rod-cone dystrophy, obesity, polydactyly, varying degrees of cognitive impairment, and genitourinary and renal abnormalities. Other syndromic RP diseases are subdivided into those that manifest with renal abnormalities, dysmorphic syndromes, metabolic diseases, or neurological diseases. Studying the mechanisms involved in facilitating and maintaining proper protein transport in photoreceptor cells will help to identify the underlying pathology of retinal cell degeneration for many of these conditions [8].

Identifying genotype-phenotype correlations in non-syndromic RP can be challenging because variants in different parts of the same gene can result in different phenotypes. For instance, variants in the ABCA4 gene, which encodes an ATP-binding cassette transporter expressed in the discs of photoreceptor outer segments, can produce several phenotypes. These include Stargardt disease, fundus flavimaculatus, RP, and cone-rod dystrophy. The different phenotypes reflect the different tissues in which the ABCA4 gene is expressed, including the photoreceptors and the retinal pigment epithelium [9,10]. Another example of a gene causing different phenotypes is the peripherin-2 (PRPH2) gene, which encodes a photoreceptor-specific tetraspanin protein called peripherin-2. This protein is involved in membrane fusion and is required for the formation and maintenance of the outer segments of rods and cones. Its phenotypes include retinitis pigmentosa and macular degeneration [11].

An important imaging biomarker for monitoring the progression of structural damage in RP is the width of the preserved ellipsoid zone (EZ) within the macula [12]. Studies using optical coherence tomography (OCT) to measure the EZ have extended the use of EZ width to distinguishing disease progression across differing inheritance patterns of RP. It has been shown that a mean rate of decline in EZ width of 7% represents a mean rate of change of 13% for the equivalent area of the EZ [13]. This rate of change is similar to findings reported for Goldmann visual fields and full-field electroretinograms [14,15]. Along with fundus autofluorescence (FAF) and Goldmann visual fields, EZ band width measurements are now being incorporated into inclusion/exclusion criteria to limit participants with advanced disease severity in gene therapy clinical trials.
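The width-to-area relationship quoted from [13] is roughly what one would expect if the preserved EZ region scales like a disc whose area goes as the square of its linear extent. The quick check below is our own illustration of that arithmetic, not the authors' calculation:

```python
# If EZ area scales as the square of EZ width, a 7% width decline
# implies about a 13.5% area decline -- close to the reported 13%.
width_decline = 0.07
area_decline = 1 - (1 - width_decline) ** 2
print(f"Equivalent area decline: {area_decline:.1%}")  # -> 13.5%
```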
FAF has been proposed as an indirect biomarker of RPE function and can also be used to assess RP progression. FAF highlights the distribution of fluorophores in the RPE, such as lipofuscin, and areas of lipofuscin accumulation. Specific patterns of increased fluorescence are suggestive of oxidative stress and increased metabolic activity. Hyperautofluorescence in the macula indicates RPE stress, whereas hypoautofluorescence can indicate RPE loss. Genotype correlations with FAF phenotypes have been investigated using ultra-widefield fundus autofluorescence (UW FAF) in a cohort of patients. Meaningful FAF patterns included a ring of hyperautofluorescence, double-ring hyperautofluorescence, and peripheral hypoautofluorescence [16].

There are several therapeutic trials underway for the treatment of IRDs [17]. Patients with biallelic variants in RPE65 now have a commercially available gene-replacement treatment [18]. With more targeted gene therapy treatments likely to be available in the future, it is critical to describe this class of disorders on a genetic level [19]. The IRD service at the University of Minnesota is a referral center for the state and neighboring regions. This study reports the pathogenic gene variants found utilizing next generation sequencing (NGS) in individuals in our population with syndromic and non-syndromic RP. We report patient clinical characteristics along with the gene variants found in this cohort.

Demographic Information
A total of 199 patients diagnosed with RP were found within the evaluated time frame per our IRB. The patient distribution consisted of 151 patients with non-syndromic RP and 48 with syndromic RP. The syndromic RP patients included Usher syndrome (39), Bardet Biedl syndrome (4), Cohen syndrome (2), nephronophthisis (1), cardiofaciocutaneous syndrome (1), and abetalipoproteinemia (1). There were 98 males (49.2%) and 100 females (50.2%). One patient with an XY chromosomal arrangement did not identify with a binary gender. The age range of the cohort includes the following: before the age of 10 (4), between the ages of 10-19 (21), between the ages of 20-40 (47), and after the age of 40 (127). In terms of age range at the initial genetic testing: before the age of 10 (11), between the ages of 10-19 (24), between the ages of 20-40 (62), and after the age of 40 (102). The following observed symptoms were recorded from the baseline evaluation.

Symptoms
A total of 61 of the 144 (42.4%) subjects first noted eye symptoms before the age of 10. A total of 68 of the 121 (56.2%) subjects had a known family history of retinal dystrophy (Table 1). A total of 134 of the 157 (85.4%) reported nyctalopia, 52/86 (60.5%) reported photosensitivity and/or hemeralopia, 170/184 (92.4%) reported visual field loss, and 53/95 (55.8%) reported color vision impairment measured by the Ishihara test. The varying denominators reflect the number of subjects for whom this information was available through retrospective chart review.

Visual Acuity Results
A total of 74 of the 198 (37.4%) subjects had visual acuity worse than 20/80 in the right eye and 80/198 (40.4%) in the left eye. There was not a statistically significant difference found between the two eyes.

Ellipsoid Zone (EZ) Measurements
EZ measurements were taken at the baseline visit. Advanced photoreceptor loss, defined as an EZ band width of less than 1500 µm, was noted in 134/182 (73.6%) of subjects. EZ band width measurement (see Table 1) on a foveal OCT scan is illustrated in Figure 1A.
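The "x of N (%)" figures above can be recomputed directly from the counts given in the text; the short sketch below (our own illustration, with labels of our choosing) also makes the varying denominators explicit.

```python
# Recompute the reported proportions from the counts in the text; the
# denominators differ because availability of each item varied on
# retrospective chart review.
findings = {
    "nyctalopia": (134, 157),
    "photosensitivity and/or hemeralopia": (52, 86),
    "visual field loss": (170, 184),
    "color vision impairment": (53, 95),
    "VA worse than 20/80, right eye": (74, 198),
    "EZ band width < 1500 um": (134, 182),
}
for name, (n, total) in findings.items():
    print(f"{name}: {n}/{total} ({100 * n / total:.1f}%)")
```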
Fundus Autofluorescent (FAF) Patterns
Almost all patients (191/193, 99.0%) presented with FAF findings of either a ring of macular hypo/hyper AF or peripheral hypo-AF in at least one eye. Figure 1B illustrates FAF findings in our patient cohort and our criteria to distinguish between imaging findings of a macular hypo/hyper AF ring or peripheral hypo-AF. Both EZ band width ranges and FAF findings for the left and right eyes are reported in the demographic section in Table 1.

Genetic Testing Reports
A summary of the commercial genetic panels that our patient cohort used, and their corresponding diagnostic yield rates, is presented in Table 2. The age at which patients first reported RP eye symptoms was as follows (total cases, Table 1): before the age of 10, 61; between the ages of 10-19, 32; between the ages of 20-40, 34; after the age of 40, 17; unknown, 55.

A total of 127/199 (63.8%) patients had genetic testing completed at the time of this study. At least one pathogenic variant was identified in 97/127 (78.0%) patients. Of these, 67/127 (52.8%) had a pathogenic variant that was diagnostic (Figure 2, column 1a). This percentage reflects the diagnostic yield for our RP patient cohort. There were 60/127 (47.2%) patients who underwent genetic testing and did not have a diagnostic pathogenic variant identified (Figure 2). The largest group (22/127, 17.3%) had only VUS identified, and 8/127 (6.3%) had negative results (Figure 2, column 2). For the remaining 30 patients (Figure 2, columns 1b and 1c), a pathogenic variant was identified but was not diagnostic, for two reasons. First, for 15/127 patients (Figure 2, column 1b), only one pathogenic/likely pathogenic variant was found, while the second variant was classified as a VUS. As previously mentioned in the methods section, both alleles must be classified as pathogenic or likely pathogenic and confirmed or presumed to be in the trans configuration to meet the diagnostic criteria for autosomal recessive RP. Second, the other 15/127 patients (Figure 2, column 1c) had only a single pathogenic variant identified in an autosomal recessive gene, indicating carrier status. Figure 2 provides a full genetic summary of the patients who underwent genetic testing.
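The breakdown above partitions the 127 tested patients exactly; the small tally below (our own sketch, with column labels taken from Figure 2 as described in the text) confirms the arithmetic and the 52.8% diagnostic yield.

```python
# Tally the genetic-testing outcome categories described in the text.
tested = 127
categories = {
    "diagnostic P/LP variant(s) (column 1a)": 67,
    "one P/LP variant plus one VUS (column 1b)": 15,
    "single P/LP variant in AR gene, carrier (column 1c)": 15,
    "VUS only (column 2)": 22,
    "negative (column 2)": 8,
}
assert sum(categories.values()) == tested
yield_rate = categories["diagnostic P/LP variant(s) (column 1a)"] / tested
print(f"Diagnostic yield: {yield_rate:.1%}")  # -> 52.8%
```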
In our patient cohort, 97 pathogenic/likely pathogenic variants were identified in 32 different genes associated with syndromic and non-syndromic RP phenotypes. The genes identified for each diagnostic syndromic and non-syndromic RP case are listed in Figure 3, and the variants identified in each individual case, their zygosity, and their ACMG classification are available in Table S1.

The Inheritance Pattern Distribution
Our cohort included 12/67 (17.9%) autosomal dominant, 43/67 (64.2%) autosomal recessive, and 12/67 (17.9%) X-linked RP (Figure 2). Within these inheritance patterns, 11/12 (91.7%) patients with X-linked RP had a diagnostic RPGR variant. Nine of those patients were male, and one patient was a heterozygous female, with both of her sons having a more severe disease presentation. For the patients with autosomal dominant RP, four had an RP1 variant and a family of three had a PRPH2 variant.

Supplemental Data
The supplemental table contains details on which variants were identified in each individual patient, written in accordance with the current Human Genome Variation Society (HGVS) nomenclature, together with the zygosity of the variants and the American College of Medical Genetics (ACMG) variant classifications, as assigned by the performing genetic laboratory. The data in this database were extracted from genetic test reports from various laboratories certified under the Clinical Laboratory Improvement Amendments (CLIA) and accredited by the College of American Pathologists (CAP). We did not perform specific splice predictions and/or in silico predictions.

Methods
Our cohort of individuals with syndromic and non-syndromic RP was studied retrospectively. The patients included were all evaluated at the IRD Clinic at the UMN/M Health System.
All patients seen between 1 May 2015 (the date our institution implemented its current electronic medical record system) and 5 August 2022 were included according to our IRB STUDY00012478. The collected data did not exclude patients based on age, race, or gender. Patients within our hospital system may opt out of inclusion in retrospective chart reviews at the time of initial consent for service. All patients who opted out were excluded from this analysis.

Database
Clinical information was collected using our institution's electronic healthcare record system. The REDCap software platform was used to curate a database. An original survey was constructed to facilitate retrospective collection of demographic, history, and exam findings for each patient through EPIC. Each patient received a randomized numerical assignment, accompanied by their medical identification number. Any question addressed in the survey that was not directly found in the patient chart was labeled as 'unknown'. The data entry included questions regarding present ocular history, family history of retinal degeneration, baseline ocular examination, genetic report, and diagnostic imaging. In addition, the age at which the patient first noted eye symptoms was recorded. These ranges include <10, 10-19, 20-40, and >40 years of age. The data entry for baseline ocular examination included the presence of nyctalopia, hemeralopia/photosensitivity, visual acuity, and visual field loss. Visual acuity was classified as being either 20/40 or better, worse than 20/40 but better than or equal to 20/80, or worse than 20/80. The presence of visual field loss was analyzed based on the results of Goldmann visual field testing. Multimodal imaging for patients in our cohort included FAF (Optos®) and OCT imaging (Heidelberg-Spectralis®). These are the instruments available in our clinic, and they are commonplace in many retina clinics. The FAF demonstrated the presence of hypo- vs. hyperautofluorescence in the macula and peripheral retina. OCT imaging was used to analyze macular ellipsoid zone (EZ) band width. The EZ endpoint locations were marked manually. Two graders, including one retina specialist, evaluated EZ band width for all patients. Subjects with an EZ band width of less than 1500 µm were considered to have advanced photoreceptor loss.
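A minimal sketch of the visual acuity binning just described follows; the function name and the use of Snellen fractions are our own choices, not the study's actual code.

```python
# Bin Snellen acuity into the three categories used in the database.
# "Worse" acuity corresponds to a smaller Snellen fraction
# (e.g., 20/200 = 0.10 is worse than 20/80 = 0.25).
def va_category(denominator: float, numerator: float = 20) -> str:
    ratio = numerator / denominator
    if ratio >= 20 / 40:
        return "20/40 or better"
    if ratio >= 20 / 80:
        return "worse than 20/40 but better than or equal to 20/80"
    return "worse than 20/80"

for d in (30, 40, 60, 80, 200):
    print(f"20/{d}: {va_category(d)}")
```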
Genetic Testing
Genetic testing was offered to all patients during the IRD evaluation. Whether genetic testing was performed was noted, along with the genetic variant(s) for each patient. The number of genes analyzed varied from single-gene targeted testing to panels with >300 genes. Single-gene targeted testing was often employed when a known familial variant had previously been identified. The majority of patients had genetic testing performed via a next-generation sequencing (NGS) inherited retinal disease panel. The exact number of genes on these panels varied based on when the patient had genetic testing performed. Over the course of the study, the commercial providers of the NGS panels incorporated additional genes, so patients evaluated at the end of the study period had more genes tested than those at the beginning. In the past, common reasons for patients choosing not to undergo genetic testing included cost concerns, lack of interest, concern about too much information being requested, or personal preference. The four most common genetic testing laboratories used were Invitae Laboratory, Blueprint Genetics, PreventionGenetics, and the University of Minnesota Molecular Diagnostic Laboratory. These genetic testing laboratories are accredited by the College of American Pathologists (CAP), certified under the Clinical Laboratory Improvement Amendments (CLIA), and utilize the currently available American College of Medical Genetics and Genomics (ACMG) variant classification guidelines to classify each variant identified. The percentage of patients who utilized each gene panel was considered, and the pathogenic/likely pathogenic diagnostic yield rate was calculated for each of the listed gene panels. The presence of variants of uncertain significance (VUS) was also recorded for each patient. The use of NGS has been shown to be an effective method for detecting pathogenic gene variants [20]. There are a few possible outcomes of genetic testing: pathogenic/likely pathogenic variants in genes known to cause the phenotype in question, pathogenic/likely pathogenic variants in genes associated with phenotypes the patient does not have, variants of uncertain significance (VUS), or no variants. NGS panel reports consist of a list of genetic variants identified in the patient sample that could be associated with an IRD. Our approach for determining whether a patient's genetic testing results were diagnostic of RP aligns with the ACMG standards and guidelines for the interpretation of sequence variants [21]. The majority of patients who obtained genetic testing met with both a genetic counselor (JI) and a vitreoretinal specialist (SM). A clinical history and family history were collected for each patient. Patients were identified as having diagnostic genetic results if (1) the patient had a sufficient number of pathogenic/likely pathogenic genetic variant(s), (2) the genetic variants were consistent with the patient phenotype, and (3) the genetic variants were consistent with the known inheritance pattern. For autosomal dominant inheritance patterns, only one likely pathogenic or pathogenic variant was required. For autosomal recessive conditions, two pathogenic or likely pathogenic variants were needed. When possible, family studies were conducted to confirm the two variants were in the trans configuration (i.e., one variant on each allele). In other instances, the variants were confirmed to be in trans based on sequencing results or homozygosity for the variant, or they were presumed to be in trans based on the patient's clinical phenotype. If a single pathogenic variant was identified but that gene was associated with autosomal recessive inheritance, then the patient was classified as a carrier. There were situations where testing identified a single pathogenic/likely pathogenic variant in addition to a single VUS in the same autosomal recessive gene. These cases were deemed clinically suspicious but not diagnostic for the purposes of this study. A patient was considered to have negative genetic test results if no genetic variants were identified or if the only variants identified were VUS.
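These rules lend themselves to a simple decision procedure. The sketch below is our own schematic (the function name and labels are hypothetical); it omits X-linked inheritance and the phenotype- and inheritance-consistency checks that the actual interpretation also required.

```python
# Schematic of the diagnostic-classification rules described above.
def classify(inheritance: str, n_plp: int, n_vus: int,
             in_trans: bool = False) -> str:
    if n_plp == 0:
        # No pathogenic/likely pathogenic variants: negative,
        # whether or not VUS were found.
        return "negative / VUS only (non-diagnostic)"
    if inheritance == "AD":
        return "diagnostic"  # one P/LP variant suffices
    if inheritance == "AR":
        if n_plp >= 2 and in_trans:
            # Two P/LP variants, confirmed or presumed in trans.
            return "diagnostic"
        if n_plp == 1 and n_vus >= 1:
            return "clinically suspicious, not diagnostic"
        if n_plp == 1:
            return "carrier"
    return "non-diagnostic"

print(classify("AR", 2, 0, in_trans=True))  # -> diagnostic
print(classify("AR", 1, 1))                 # -> clinically suspicious
```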
Discussion
Of the 199 patients diagnosed with an RP phenotype, 127 underwent genetic testing, and 32 different genes were identified as causative for the patients' RP (Figure 3). Using NGS, we achieved a diagnostic yield of 52.8%, in which pathogenic or likely pathogenic variant(s) were identified and determined to be causative of the patient's diagnosis. This diagnostic yield is higher than some reports in the literature (~30-40%) [22,23], nearly identical to one (53.2%) [24], and below others (~60-70%) [25,26] that utilized NGS. Molecular genetic testing is essential to the phenotypic diagnosis of patients with syndromic and non-syndromic RP. Like many clinical sites, the UMN IRD clinic utilizes a variety of commercial NGS laboratories for its patients (Table 2). The decision of which NGS panel or laboratory to use was based on several factors, including but not limited to the number of genes analyzed on a panel, family member testing (i.e., if a known familial variant was being tested, use of the same genetic testing laboratory was preferred), ease of sample collection, insurance coverage for testing, or a patient electing to participate in a sponsored testing program. The most frequently used gene panel for our patient cohort came from Invitae Laboratory: 71 of 127 (55.9%) patients underwent genetic testing through this panel, with a diagnostic yield rate of 43.7%. Of the 67 patients who had a diagnostic test result, 31 had testing at Invitae Laboratory; this is also where most of our panels were sent. Other gene panels utilized were from the UMN Molecular Diagnostic Laboratory, PreventionGenetics, and Blueprint Genetics. Any additional genetic panel not listed here came from other medical institutions and their respective laboratories. The wide range in the diagnostic yield rate across the various genetic testing laboratories (Table 2) may be explained by differences in the number of genes analyzed at the time, or by several tests being sent to one laboratory because a known familial variant had previously been identified there. The diagnostic yield rate for the other gene panels mentioned was 50% or greater. The wide range in the number of genes tested for different patients somewhat limits the conclusions of our study. The most common causative genes in our patient cohort were RPGR (16%), USH2A (12%), MYO7A (12%), RP1 (6%), EYS (4%), PRPH2 (4%), BBS1 (4%), PDE6B (3%), and VPS13B (3%) (Figure 3). Together, the diagnostic yield rate of 52.8% supports the use of NGS as an effective tool for RP patient diagnosis. Our cohort consisted of 151 non-syndromic and 48 (24.1%) syndromic RP patients. Twenty-three (47.9%) of the 48 syndromic RP patients had a diagnostic pathogenic variant. The distribution of syndromic RP conditions is in the range of a report from a European cohort, which found a causative variant(s) in 59% of their syndromic cases [27], and higher than that of a Danish cohort, which found causative variant(s) in 28% of syndromic RP patients [28]. In addition, our percentage of diagnostic Usher syndrome patients (65.2%) was larger than in the Danish cohort, in which Usher syndrome accounted for only 43% of the total syndromic cases.
The proportion of syndromic RP patients was additionally higher than reported in a Spanish cohort, which reported 18% syndromic RP patients [29]. Approximately two-thirds of our RP patients with a diagnostic pathogenic/likely pathogenic gene variant were considered non-syndromic. These consisted of 12 autosomal dominant (17.9%), 43 autosomal recessive (64.2%), and 12 X-linked RP (17.9%) cases (Figure 2). In agreement with the first published series from the United States (a 1978 study of 173 RP patients), the percentage of our cohort with X-linked RP is relatively high. This higher percentage of X-linked RP was corroborated in the Danish study [28] but was higher than in other RP studies [30-33]. It should be noted that some of our X-linked RP patients were members of the same family.

Patients with non-diagnostic results are represented in Figure 2 (columns 1b and 1c, and column 2). For the 15 patients with one pathogenic/likely pathogenic variant and one VUS identified, categorized in column 1b, results of follow-up family studies were not available to confirm the configuration (cis or trans) of the two variants. The other 15 patients (Figure 2, column 1c) had a single pathogenic variant identified in an autosomal recessive gene. In the future, the diagnostic yield rate of NGS panels may increase significantly as new variants and/or genes are discovered. However, it is possible that a portion of the non-diagnostic results in our cohort reflect non-genome-encoded epigenetic processes, such as post-transcriptional modification. For example, Donato et al. recently demonstrated significant post-transcriptional RNA editing activity in a model of cultured human-derived retinal pigment epithelial cells subjected to oxidative stress with N-retinylidene-N-retinylethanolamine (A2E) [34]. In the future, perhaps assays of RNA sequences or other assays of epigenetic influences may supplement NGS and improve the overall diagnostic yield of testing [35]. Whole exome sequencing (WES) or whole genome sequencing (WGS) may also prove valuable in investigating patients with negative NGS panel results [36]. Evaluating the diagnostic efficacy of this approach for patients with RP may be a direction for future work [37].

In our cohort, 73.6% of patients had advanced photoreceptor loss, indicated by an EZ band width on the foveal scan of <1500 µm in at least one eye. Likewise, 99% of patients presented with FAF findings of either a ring of macular hypo/hyper-AF or peripheral hypo-AF in at least one eye. Such an advanced-stage disease presentation is concerning, as it may exclude patients from ongoing clinical trials. Although 127 patients first reported eye symptoms before the age of 40, 102 patients did not have their initial genetic test until after the age of 40. Age could therefore be a potential factor in advanced photoreceptor loss, as demonstrated by the OCT EZ band width interpretation. When patients present with a phenotype that is strongly suggestive of RP, obtaining genetic testing at the initial visit may enable them to identify clinical trials and potential future treatments at the earliest possible disease stage.

Limitations
One limitation of this study is that genetic testing was performed using NGS panels without utilizing WGS. Additionally, patients in our cohort who had testing early in the study window typically had fewer genes tested than those who were tested more recently.
As more genes are discovered and additional genetic testing becomes available, patients should be educated about the potential for future updates of gene panels and additional testing. The same applies to family members of affected individuals who were unavailable or chose not to be tested; their testing could support further interpretation of a VUS via segregation analysis. Information on current therapeutic options should be provided to all patients who are diagnosed with an inherited retinal disease. In addition, laboratories occasionally provide updated variant classifications for previously completed genetic testing as they gather more knowledge and evidence, particularly for VUS. This dynamic process makes providing definite conclusions regarding gene prevalence in a population more challenging.

Conclusions
In our cohort, a diagnostic pathogenic/likely pathogenic variant was identified in 52.8% of patients who underwent genetic testing. Of these, 33.3% corresponded to syndromic RP and 66.6% to non-syndromic RP patients. The numerous patients with non-diagnostic results (i.e., identified as an asymptomatic carrier of an autosomal recessive IRD or with only a VUS) suggest the need for more encompassing genetic analysis and for promoting testing of other affected family members to determine whether a gene variant is pathogenic. A comprehensive approach involving genetic counselling, clinical evaluation, and appropriate imaging is necessary to properly characterize patients with this heterogeneous disease. RP patients evaluated at UMN often present with advanced features of their disease. With potential future treatments for IRD, including gene therapy, stem cell therapy, neuroprotection, retinal implants, and optogenetics, prompt diagnosis may help identify subjects who could qualify for currently approved gene replacement therapies and ongoing or future clinical trials.

The study sponsors were not involved in the study design, collection, analysis, interpretation of data, writing the report, or the decision to submit the report for publication.
Institutional Review Board Statement: The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of the University of Minnesota (IRB STUDY00012478, approved 7 May 2021).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Not applicable.
Multiple Myeloma Incidence and Mortality Around the Globe; Interrelations Between Health Access and Quality, Economic Resources, and Patient Empowerment

Abstract
Background: The interrelation between the worldwide incidence, mortality, and survival of patients with multiple myeloma (MM) and relevant factors such as the Health Care Access and Quality (HAQ) Index, gross domestic product (GDP), health care expenditures, access to cancer drugs, and patient empowerment has not been addressed before.
Material and Methods: Epidemiologic data were obtained from the International Agency for Research on Cancer. The mortality-to-incidence ratio (expressed as 1-MIR) was used as a proxy for 5-year survival. Information on health expenditure was obtained from the Bloomberg Health-Care Efficacy ranking, and the HAQ Index was used as a measure of available health care. For patient empowerment, visits to the Web site of the International Myeloma Foundation were used as a proxy. Data on GDP and population per country were assessed from the International Monetary Fund and the United Nations Population Division, respectively. Possible associations were analyzed using Spearman's rank-order correlation.
Results: The worldwide incidence of MM is currently 160,000, and mortality is 106,000. Age-standardized myeloma incidence varies between 0.54 and 5.3 per 100,000 and correlates with 1-MIR, patient empowerment, HAQ Index, and access to cancer drugs. The 1-MIR varies between 9% and 64% and is closely related to myeloma incidence, HAQ Index, patient empowerment, access to cancer drugs, and health care expenditures.
Conclusion: The global incidence and outcome of MM show significant disparities, indicating under-recognition and suboptimal treatment in many parts of the globe. The results also highlight the importance of economic resources, access to and quality of health care, and patient education for improving diagnosis and survival of patients with MM.
Implications for Practice: Multiple myeloma accounts for 10% of all hematological malignancies and has moved to the forefront of clinical interest because of significant advances in medical treatment. Diagnosis depends on laboratory tests, imaging, and professional expertise, particularly in patients without a significant M-component. The present data show a substantial worldwide variation in incidence and mortality that is mainly due (apart from variations due to ethnicity and lifestyle) to disparities in access to and quality of health care, a parameter strongly related to the economic development of individual countries. Improvement in quality of care and, consequently, in outcome is associated with patient empowerment.

INTRODUCTION
The International Agency for Research on Cancer (IARC) estimated that the worldwide incidence of multiple myeloma (MM) amounted to 160,000 cases and global myeloma mortality to 106,000 patients for the year 2018 [1]. This translates into a global age-standardized incidence and mortality rate of 2.1 per 100,000 and 1.39 per 100,000, respectively. Incidence and mortality rates vary significantly between individual countries and depend on various factors. As detailed information on the interrelations between the epidemiologic data of MM and health care access and quality, economic resources such as gross domestic product (GDP), access to drugs, and patient empowerment in different countries is not available, we aimed to evaluate possible relationships.
We used the mortality/incidence ratio (1-MIR) as a proxy for overall survival [2-4], which is generally accepted as a high-level comparative measure to identify disparities in cancer outcomes [5], and the visits to the Web site of the International Myeloma Foundation (IMF) [6] as a proxy for patient empowerment. The results should inform about the complex interplay between these factors and provide the necessary basis for improving diagnosis, management, and outcome of patients with multiple myeloma in many areas of the world.

MATERIALS AND METHODS
Incidence and mortality data were obtained from the IARC [7]. Data on GDP per capita were accessed from the International Monetary Fund [8], and data on population per country were from United Nations Population Division estimates [9]. The health expenditure per capita ($) for 2015 was obtained from the Bloomberg Health-Care Efficacy Ranking [10]. In addition, the Health Care Access and Quality (HAQ) Index was used as a measure of access to and quality of health care (HC) [11]. Because recent survival data are not available for most countries, the mortality-to-incidence ratio (MIR) was used as a proxy for MM outcome [4]. The MIR is usually calculated by dividing crude mortality rates or numbers of deaths by crude incidence rates or numbers of incident cases [5]. Because crude rates or numbers were not publicly available for all selected countries, the MIR for multiple myeloma was calculated by dividing the standardized mortality rate by the standardized incidence rate in a similar calendar period. Results are shown as 1-MIR, expressed as a percentage between 0 and 100. Values approaching 0% represent a poor survival rate, and those approaching 100% represent an excellent survival rate. (Abbreviations: MIR, mortality-to-incidence ratio; HAQ, health care access and quality; IMF, International Myeloma Foundation.) As a measure of patients' information about MM and of patient empowerment, we used the number of visits per 100,000 inhabitants per country to the Web site of the IMF for 51 countries. Data from these countries, with the exception of Serbia and The Netherlands, were used for correlation studies. For Serbia, national incidence data do not exist, and for The Netherlands, the IARC advised against using projected data for 2018, as the method used for projection might not have been optimal (Dr. Jacques Ferlay, personal communication). Information on access to cancer drugs for 17 countries was obtained from the IQVIA Institute [12]. Correlation analyses were conducted with R using Spearman's ρ correlation coefficient.

RESULTS
Global MM Incidence
The estimated age-standardized incidence rates of multiple myeloma in 2018 are shown in Figure 1 (with permission of the World Health Organization, International Agency for Research on Cancer). The lowest incidence rates were noted in South Korea (0.54/100,000), Malaysia (0.75/100,000), the Philippines (0.86/100,000), and China (0.92/100,000), followed by Saudi Arabia (1.0/100,000) and India (1.0/100,000); the highest were in New Zealand (5.3/100,000), followed by Australia (5.0/100,000), the U.K. (4.3/100,000), and Israel and Norway (both 4.2/100,000). MM incidence correlated most strongly with 1-MIR, the number of visits to the IMF Web site (patient empowerment), access to cancer drugs, and the HAQ Index (Fig. 2), and with other parameters such as health care expenditures and GDP, as shown in Table 1.
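As a concrete illustration of the 1-MIR computation and the Spearman analysis just described (the authors used R; the sketch below is an equivalent Python rendering with made-up country labels and rates of our own):

```python
from scipy.stats import spearmanr

# Illustrative age-standardized (incidence, mortality) rates per 100,000.
rates = {"A": (5.3, 1.9), "B": (2.1, 1.4), "C": (0.9, 0.8)}

# 1-MIR as a percentage: values near 0% suggest poor survival,
# values near 100% suggest excellent survival.
one_minus_mir = {c: 100 * (1 - mort / inc) for c, (inc, mort) in rates.items()}
print(one_minus_mir)

rho, p = spearmanr([inc for inc, _ in rates.values()],
                   list(one_minus_mir.values()))
print(f"Spearman rho = {rho:.2f}")
```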
Myeloma Incidence and Mortality
Myeloma incidence correlated closely with mortality in countries with very low incidence rates (below 1/100,000; ρ = 0.95, p < .0001), indicating very short survival in those states (Fig. 3). The correlation between the two parameters diverged progressively with increasing incidence, with correlation coefficients declining to 0.58 in countries with incidence rates between 1 and 3 per 100,000 and to 0.36 in those with incidence rates greater than 3 per 100,000.

Myeloma Mortality/Incidence Ratio as Proxy for Survival
As up-to-date survival data are not available for most countries, the mortality/incidence ratio expressed as 1-MIR was used as a crude proxy for 5-year survival rates [3,4]. The analysis revealed a marked variation between individual countries, with values ranging from 9% to 64% in the 50 countries with data on patient access to myeloma information available (Fig. 4). Poor outcome, reflected by a low 1-MIR, was observed in Egypt (9%), followed by the Philippines (10%), Thailand (13%), and Indonesia, Mexico, South Korea, and the United Arab Emirates (UAE) (15% each). The best outcome, expressed by a high 1-MIR, was observed in New Zealand (64%), followed by Iceland (62%), the U.K. (60%), Belgium (59%), and Australia (57%). Access to cancer drugs, patient empowerment, HAQ Index, and health care expenditure were further parameters closely associated with 1-MIR (Table 1; Fig. 5).

Economic Resources and Access to Drugs
Health care expenditures (ρ = 0.87, p < .0001), HAQ Index (ρ = 0.83, p < .0001), and patient empowerment (ρ = 0.82, p < .0001) were significantly related to GDP (Table 1). Data on access to cancer drugs were available for 17 countries only. Correlation analysis revealed a strong interdependency between economic resources, expressed by GDP, and access to cancer drugs (ρ = 0.84, p < .0001).

Patient Empowerment
The frequency with which patients visited a Web site providing expert information on the disease and its complex management was used as a surrogate for patient empowerment. The number of visits was strongly correlated with health care expenditure (ρ = 0.89, p < .0001), HAQ Index (ρ = 0.86, p < .0001), access to cancer drugs (ρ = 0.84, p < .0001), and GDP (ρ = 0.82, p < .0001; Fig. 6; Table 1).
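The incidence-band analysis reported above can be sketched as follows; the data points and helper names are hypothetical inventions of ours (the reported coefficients were 0.95, 0.58, and 0.36).

```python
from scipy.stats import spearmanr

def band(incidence: float) -> str:
    # Incidence bands per 100,000, as used in the text.
    if incidence < 1:
        return "<1"
    if incidence <= 3:
        return "1-3"
    return ">3"

# Countries as (incidence, mortality) pairs -- illustrative values only.
data = [(0.5, 0.45), (0.7, 0.60), (0.9, 0.80),
        (1.5, 0.90), (2.0, 1.20), (2.5, 1.10),
        (3.5, 1.80), (4.2, 1.70), (5.3, 1.90)]

groups = {}
for inc, mort in data:
    groups.setdefault(band(inc), []).append((inc, mort))

for b, pairs in groups.items():
    rho, _ = spearmanr([i for i, _ in pairs], [m for _, m in pairs])
    print(f"incidence {b} per 100,000: rho = {rho:.2f}")
```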
DISCUSSION
One of the notable findings of this study is the significant variation in the global incidence of multiple myeloma, with age-standardized incidence rates varying from 0.1 per 100,000 to 5.3 per 100,000 (Fig. 1). Age [13], male gender [14], familial and ethnic background [15], other genetic variants [16], obesity [17], lifestyle [18], and environmental factors [18] are established risk factors for multiple myeloma. Although these variables likely account for some of the reported disparities, missed diagnoses of multiple myeloma in several regions likely account for most of the substantial differences. This also suggests that the presently available incidence and mortality figures for multiple myeloma grossly underestimate the global burden of the disease. Interestingly, incidence correlated closely with mortality in countries with incidence rates lower than 1 per 100,000, but this relationship decreased progressively with increasing incidence (Fig. 3), a phenomenon that reflects the impact of quality of and access to health care, substantiated by a close correlation between the HAQ Index and incidence (ρ = 0.70, p < .0001).

As recent survival data for multiple myeloma are not available for the vast majority of countries, we used the 1-MIR as a proxy for 5-year survival [2,3]. This measure has been shown to provide a good approximation of survival in most cancers, but according to one study [4], it underestimates survival of patients with multiple myeloma in two of seven countries by as much as 10%-17%. The calculated data, expressed as 1-MIR, show a marked disparity in outcome, with very short survival in some countries and remarkable survival in countries like New Zealand and Iceland (Fig. 4). Recent research showed a close correlation between the 1-MIR and the 5-year survival rate in patients with cancer in The Netherlands [4], as well as in Peru [3]. Applying these findings to the myeloma population would result in an estimated 5-year survival rate of about 9% in Egypt and 62% and 64% in Iceland and New Zealand, respectively. The 1-MIR was strongly related to health care access and quality and to access to cancer drugs, which is plausible and supports international efforts to improve health care [19,20]. It has to be acknowledged that improving health care and access to high-quality care is to a large part related to the economic success and level of education of individual societies [21,22], processes which take time to achieve the desired transformation. Although the HAQ Index was closely associated with GDP and HC expenditures, there are exceptions indicating that access to high-quality health care is not strictly related to the economic performance and HC expenditures of individual societies, as several countries with lower GDP still seem to offer high diagnostic and treatment standards. As an example, the highest incidence rate (5.3/100,000) was noted in New Zealand, with a GDP of about US$40,000, whereas in the UAE, with a GDP almost twice as high (around US$70,000), an incidence rate of only 1.2 per 100,000 was observed. The 1-MIR as a proxy for survival showed a similar pattern, with an estimated 5-year survival of 64% and 15% in New Zealand and the UAE, respectively. The survival estimates, although calculated according to several previous reports [3,4,23], need to be interpreted with caution, as they represent rough estimations and depend heavily on the quality of data available to the cancer registries. Interestingly, increased health financing alone does not guarantee optimal outcome; how well health spending translates into heightened access to quality health care is more relevant. Present developments in multiple myeloma indicate an increasing dependence on the economic performance of individual societies to maintain unrestricted access to novel cancer drugs, which come to the market at high cost [24,25]. This idea is strongly supported by the close correlation between access to cancer drugs and GDP (ρ = 0.84, p < .001). For the majority of patients with multiple myeloma, most or all of the recently introduced novel drugs are not available, as documented by surveys from several parts of the globe [25,26]. This also applies to several European countries [27] with limited or no access to novel drugs such as daratumumab, isatuximab, pomalidomide, carfilzomib, or selinexor, reflecting a marked gap between new treatment options and real-world practice and resulting in unnecessary suffering and shorter survival. Patient education and empowerment enable patients to cooperate with their caregivers and to make informed decisions about diagnostic procedures and treatment selection.
Beyond this, patients may acquire significant expertise in the recognition and management of critical situations, thereby reducing the risk of adverse outcomes [28,29]. One option for self-empowerment is viewing the Web sites of international organizations devoted to improving patient education and empowerment, such as that of the IMF [6]. Our data show greater patient interest in myeloma-specific information in countries with high incidence rates, which are usually the more affluent countries with a better HAQ Index, higher health care expenditures, and better access to cancer drugs. Expectedly, patients from those countries experience better treatment outcomes.

The study has several limitations. As survival data from national health registries are not available for the vast majority of countries, we used the 1-MIR as a proxy for patient outcome. This value depends on the accuracy of the cancer statistics of individual countries, which has been shown to be a measure of the quality of the organization of the health system of the respective country [30]. Furthermore, the mortality rate does not refer to the same patients as the incidence rate, and health care systems are likely to change and evolve over time [31]. Survival also changes over time, which likely impacts the assumed correlation between 1-MIR and 5-year survival rates. Presently, an exact definition of patient empowerment is not available; hence, we used the number of visits to the Web site of the IMF, the most frequently accessed information tool for patients with myeloma. The frequency of visits is not only a reflection of patient interest but may also be influenced by the public campaigns of the IMF in individual countries; interestingly, however, a very high number of hits was noted in countries such as Switzerland, Denmark, and Israel, where the IMF has little presence other than the Web site.

Taken together, these findings show a significant disparity in myeloma incidence and outcome, indicating that myeloma often remains undiagnosed and that patients are suboptimally treated in many parts of the globe. Our findings highlight the importance of economic resources, health care spending, access to and quality of health care, access to novel drugs, and patient education for improving diagnosis, management, and survival of patients with MM.

CONCLUSION
The global age-standardized incidence of multiple myeloma varies between 0.54 and 5.3 per 100,000. Incidence correlates with the 1-MIR (a surrogate for 5-year survival, which varies between 9% and 64%), GDP, health care expenditures, HAQ Index, access to cancer drugs, and patient empowerment. These findings highlight the importance of access to and quality of health care, economic resources, and patient education for improving diagnosis, management, and survival of patients with multiple myeloma.
Global Health Initiatives and aid effectiveness: insights from a Ugandan case study

Background: The emergence of Global Health Initiatives (GHIs) has been a major feature of the aid environment of the last decade. This paper seeks to examine in depth the behaviour of two prominent GHIs in the early stages of their operation in Uganda, as well as the responses of the government.
Methods: The study adopted a qualitative and case study approach to investigate the governance of aid transactions in Uganda. Data sources included documentary review, in-depth and semi-structured interviews, and observation of meetings. Agency theory guided the conceptual framework of the study.
Results: The Ugandan government had a stated preference for donor funding to be channelled through the general or sectoral budgets. Despite this preference, two large GHIs opted to allocate resources and deliver activities through projects with a disease-specific approach. The mixed motives of contributor country governments, recipient country governments, and GHI executives produced incentive regimes that conflicted across the different aid mechanisms.
Conclusion: Notwithstanding attempts to align and harmonize donor activities, the interests and motives of the various actors (GHIs and different parts of the government) undermine such efforts.

Background
Over the past decade, the international aid community has shown greater concern with improving aid effectiveness. In spite of historical gains in health status, challenges still abounded: in 1998, the infant mortality rate (IMR) in Africa was still 91 per thousand, more than four times the rate for Europe [1]; in 2006, over 3.3 billion people worldwide were at risk of malaria transmission, contributing to approximately 1 million deaths each year [2]; and the estimated number of individuals living with HIV/AIDS in Sub-Saharan Africa by 2001 was 28.5 million. The failure to effectively deliver available interventions largely accounts for the excess mortality among the poor [3]. The international aid community thus sought new "ways of doing business" that could tackle the high burden of disease in the low-income world by expanding access to interventions such as vaccines, insecticide-treated bed nets, and antiretroviral drugs.

Over this period the term Global Health Initiatives (GHIs) started to be used. Other terms that appear to label an overlapping set of phenomena are Global Public Private Partnerships [4,5] and Global Health Partnerships [6]. A general definition of GHIs is still subject to discussion [7]. A useful one for the purposes of this paper describes GHIs as a standard model for financing and implementing disease control programs in various countries and in different regions of the world; they can be part of a multilateral or a bilateral program, as in the case of PEPFAR (the United States President's Emergency Plan for AIDS Relief), or they can be established as a public-private partnership, like the Global Fund [8]. It is estimated that more than 100 such entities exist [9]. GHIs have tended to support the involvement of non-state actors, initially the private or commercial sector and later also civil society organizations, thus bringing diversity to the range of stakeholders involved in the health sector. While the majority of these initiatives aim at galvanizing support (financial, technical, and political) for low-income countries, their remits differ: some focus on advocacy and others operate as funding bodies.
An example of an advocacy initiative is the 'Countdown to 2015', which works at the global level to track progress towards the achievement of MDGs 1 (Eradicate Extreme Poverty and Hunger), 4 (Reduce Child Mortality), and 5 (Improve Maternal Health); to promote the use of evidence in policy making; and to increase health investments at the country level [10]. An example of a GHI operating as a funding body is the Global Alliance for Vaccines and Immunization (GAVI). It provides financial and in-kind support to developing countries in order to increase access to vaccines and to support the sustainability of national efforts to control childhood diseases responsible for high mortality [11].

The amount of financial resources provided by GHIs for scaling up specific health interventions in low- and middle-income countries has been unprecedented. Combined, the Global Fund and PEPFAR have disbursed over US$26 billion since their creation for HIV/AIDS prevention and treatment [12,13]. The United States President's Malaria Initiative (PMI) committed over US$1.25 billion between 2006 and 2010 to 15 countries in Sub-Saharan Africa [14]. These additional resources raised expectations. For example, plans for the eradication of diseases such as malaria and measles are now discussed by the international health community, but were not considered options a decade ago, when funding for research, development, and expanded access to effective interventions was scarce. However, when resources are earmarked to fund specific health interventions, as is the case with the way many GHIs operate, they may create problems at the country level, mirroring those faced by the project approach: narrow targets [15]; fragmentation and duplication of efforts; and pressure on governments to respond to the separate requirements of different programs and donors [16].

Overall, evidence about the operation of GHIs is still scarce. Some studies have focused on quantitative analyses of disease outcomes [17-22], while others have started to shed light on the immediate effects of GHIs on health systems [7,9,15,23]. However, the robustness of the latter studies has been constrained by the lack of theoretical frameworks, which would have helped in providing more rigorous accounts of the behaviours of the actors involved in the delivery of aid. This paper presents the results of a study of two GHIs, PEPFAR and the Global Fund, in the early stages of their operation (2003/2004) in Uganda. These two GHIs became very prominent, significantly funding actors in the country. The paper seeks to provide an in-depth examination of the behaviour of actors associated with the two GHIs and the responses of the government of Uganda (and its various parts). It adopts agency theory as a conceptual framework to understand these behaviours and to explain the underlying incentive regime of the relationships between these actors.

Methods and analytical framework
The results reported in this paper form part of a larger study that set out to better understand the relationship between donors (bilateral and multilateral agencies, including GHIs) and the government of Uganda (and its various parts). The importance of real-life context is captured by qualitative research in general [24] and in particular by a case-study approach [25]. An in-depth qualitative and case study approach was thus required to investigate the complex subject of the governance of aid transactions in Uganda, given the small number of organizations (sample size) involved.
The findings presented in this article represent the subset of the data collected that concerns GHIs. Data collection took place in Uganda from September 2003 to June 2004 and included the following:
a) A total of 36 in-depth and semi-structured interviews conducted with policy makers and officials from donor agencies based in Uganda at the national level. The selection of interviewees was purposive [26], combined with a snowball technique [27]. Of the 36 interviewees, five were key informants. Key informants provided expert knowledge about the relationship between the parties; they were accessed over the course of the project and were more reflective than other respondents [27]. Interviews were conducted using both formal/semi-structured guides and informal, unstructured conversations.
b) Observation of 30 government/donor (including GHI) meetings at the national level. These included joint review missions, public expenditure reviews, and project evaluations. They covered not only facts but also observations of interactions and behaviours. Although the work focused on national-level interactions between the government and donors, district and civil society views were partially captured through discussions observed during meetings.
c) Collection and analysis of various policy documents (e.g., memoranda of understanding between the parties and annual performance reports). Both published and unpublished documents relevant to the research topic were collected.
The analytical process involved familiarization with the data (including data cleaning and checking for consistency), development and application of a coding scheme (or indexing) based on the identification of a thematic framework, charting, and interpretation [26-28]. Agency theory (see below) guided the conceptual framework of this study and was used to generate the first set of themes to code the data. Amendments were made according to themes revealed by the data.

Agency theory can be used to understand economic relationships. The basic model comprises two individuals: a principal and an agent. In this relationship there is an explicit or implicit contract between the parties, and as in any contract, the principal uses incentives to guide or motivate the agent's actions towards agreed desired outcomes. The principal will contract and compensate an agent for the costs or disutilities associated with the agent's implementation of an agreed activity, thereby advancing the principal's objective function. In the context of international development assistance, the value of such a framework lies in its ability to illuminate the incentive structure embedded in the aid delivery process [29]. But incentives can only be understood by reference to the motivations of actors. For example, a reassignment of responsibilities can only be understood as punishment or reward (or neither) in light of an understanding of the motivation of the person reassigned. This study sought to understand the motives of the relevant actors in order to identify and analyze the incentives present in implicit and explicit contracts. The interpretation of organizational objective functions relied on the observation of the behaviours of relevant actors throughout the fieldwork and interviews.
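To make the basic setup concrete, the toy model below is a textbook principal-agent illustration of our own (the numbers and names are invented, not drawn from the study): the agent chooses whichever effort level maximizes expected payment minus effort cost, so an outcome-contingent payment can induce high effort where a flat payment cannot.

```python
# Textbook principal-agent toy: effort levels with success probabilities
# and private effort costs (all values illustrative).
efforts = {
    "low":  {"p_success": 0.3, "cost": 0.0},
    "high": {"p_success": 0.8, "cost": 1.0},
}

def agent_choice(pay_success: float, pay_failure: float) -> str:
    # The agent picks the effort maximizing expected payment minus cost.
    def expected_utility(effort: str) -> float:
        p = efforts[effort]["p_success"]
        return p * pay_success + (1 - p) * pay_failure - efforts[effort]["cost"]
    return max(efforts, key=expected_utility)

print(agent_choice(pay_success=2.0, pay_failure=2.0))  # flat pay -> 'low'
print(agent_choice(pay_success=4.0, pay_failure=0.0))  # contingent -> 'high'
```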
In this paper we opted for the latter, using agency theory as a theoretical framework to seek explanations for the outcomes observed and the incentive regime that had been put in place, rather than testing the hypothesis that a principal-agent relationship does or does not exist. In order to ensure reliability [30,31], objective and comprehensive records of the data generation and analytical processes were maintained. Respondent validation [32] was sought by presenting the preliminary research findings during a dissemination workshop in Uganda in October 2005. Deviant case analysis [30,33] was incorporated into the analytical process of this research. Triangulation of different data sources (interviews, observation and documentary analysis) was carried out to allow one source to balance the scope for errors and bias in the others [34]. Ethical clearance was obtained from the London School of Hygiene and Tropical Medicine, the Institute of Public Health/Makerere University and the National Council for Science and Technology in Uganda. Consent for interviews was obtained verbally. An information sheet was given to every interviewee. Confidentiality of data was maintained throughout the research process, and no names of individuals interviewed were disclosed.

Health development aid in Uganda
The government of Uganda stated its preference for donor funding to be channelled through the general or sectoral budgets (instead of project support) on the basis that these should be more efficient and equitable and should allow it greater ownership [35]. Introduced in Uganda in 1998, budget support occurred in two different forms: general contributions to the budget of the government and earmarked contributions to the Poverty Action Fund (PAF), equivalent to a Poverty Reduction Strategy Paper (PRSP). The number of donors contributing to budget support increased from five in 2000/01 [36] to 12 in 2002/2003 [37]. Sectoral budget support to the health sector in Uganda was launched in 2000 in the form of a SWAp (Sector Wide Approach), under which donors and government pooled resources, jointly agreed the National Health Policy (NHP) and the Health Sector Strategic Plan (HSSP), and exercised oversight over their implementation. The proportion of the overall health sector resource envelope financed through projects decreased from 45% in 1999/00 to 34% in 2002/03 [38]. However, project funding started to increase once again from 2003 onwards as Uganda became a recipient of large volumes of funds from GHIs, mainly focused on HIV/AIDS. Over the period covered by this research (from 2003 to 2004), the budget approved by the Global Fund for Uganda totalled US$160.6 million (see Endnote 1) [39]. PEPFAR's budget for Uganda in 2004 was US$94 million [40]. By February of that year, 40% of the PEPFAR budget had been disbursed [41], indicating fast disbursement. In comparison, the government budget for the entire health sector in financial year 2004/2005 was US$136.5 million [42]. This amount included budget support contributions.

Structural features of the two GHIs
PEPFAR funds could not be provided directly to the government, only to non-governmental and private sector organizations (a legal requirement established by the US Congress). In contrast, the Global Fund operated as a financial instrument based on proposals led by the government of Uganda.
The Global Fund's mechanisms for fund disbursement are somewhat flexible; in countries like Mozambique it used a common basket of pooled funds contributed by various donors and managed by the government [43]. In Uganda, both PEPFAR and the Global Fund opted to create parallel systems of management. Neither was seen to have contributed to the health Sector Wide Approach (SWAp), a mechanism which would have earmarked their funds for the health sector but otherwise left financial control in the hands of the government, overseen by a collective of bilateral and multilateral agencies. The Global Fund in Uganda used a separate project management unit within the Ministry of Health (MoH), its own monitoring tools (rather than the common mechanisms adopted through the Joint Review Missions, a performance review mechanism under the SWAp) and a parallel system for procurement (although the Global Fund guidelines made provision for the use of a common working arrangement). Therefore, neither the Global Fund nor PEPFAR participated in the common technical mechanism of aid coordination among health sector stakeholders. Their proposals were not scrutinized by the Sector Working Group, set up by the Ministry of Finance, Planning and Economic Development (MoFPED) and the MoH to assess projects for value for money and alignment with government policies and plans. PEPFAR also followed its own funding and audit timetable instead of the national schedules for planning and budgeting. Another special requirement set up by the Global Fund was the conditionality of additionality: the funds it provided had to be additional to those budgeted nationally and should not be treated as fungible. However, this condition came into conflict with macroeconomic budget ceilings set by the MoFPED. If these ceilings had been reached, the offer of resources from the Global Fund should in principle have been rejected. While there were discussions about applying the ceiling to Global Fund round four, these did not materialise in the end [44,45]. During interviews, some respondents explained that the rationale that drove GHIs like PEPFAR and the Global Fund to set up these parallel mechanisms was related to the weak capacity of government, particularly in relation to timely disbursement of funds, procurement, and monitoring and evaluation. If they had decided to work through the existing government structures, this would have delayed the implementation schedule of their activities. Interviewees also said that separate management structures were used as a mechanism to reduce fiduciary risks. The latter was substantiated to some extent when, in 2005, the Global Fund identified serious mismanagement problems in five of its grants to Uganda, leading to their suspension [46]. However, the suspension was lifted later in the year, highlighting some of the complexities related to this issue, which are further explored elsewhere [47].

Behaviour of GHIs and incentives
A number of key informants and government officials argued that PEPFAR was detrimentally affecting the health system. Competition for human resources was a particular concern. Often mentioned was the loss of highly qualified staff to PEPFAR-funded projects in the face of higher salaries and benefits. This problem was said not to be restricted to government units but also to affect the private not-for-profit sector (which receives financial subsidy and seconded health workers from the government).
Staff were said to be moving primarily to two specific organizations receiving support from PEPFAR. One of them received 300 applications for clinical positions advertised in early 2004. Salaries paid by this organization were reported to be three times those paid by the private not-for-profit sector. The view of an interviewee from one of these organizations was that: "We are not poaching staff; applicants are not from government units. But on the other hand, it's a free world". (Private sector representative)

It was reported that the targets set by the US government for PEPFAR were not chosen in consultation with local government partners. Furthermore, in contrast to the disclaimer in the Memorandum of Understanding between the government of Uganda and health donors that "as provided in the Constitution of Uganda, [both parties should] ensure that other marginalized groups of society such as the poor, the displaced and the disabled are specifically addressed" [48], PEPFAR did not outline a clear strategy on how it would reach these particular groups. It did not explicitly mention a focus on the poor, only on orphans. A common critique made in various meetings of health sector stakeholders was that the agencies implementing PEPFAR projects were reaching their targets by focusing on 'easy to reach' population groups, such as health workers, teachers and police officers in large urban areas, as opposed to the poor and vulnerable in rural parts of the country.

Conflicting motives of parties
The Global Fund system requires the recipient country to apply for funding. Uganda applied for, and was successful in securing, funding in all four rounds within the period of fieldwork. The justification used by those leading the application process in the MoH was the underfunding of the sector. It was argued that the volume of funding made available by the Global Fund, and its perceived accessibility, made certain members of the government more flexible about its rules and mechanisms, leading them to interfere with the existing integrated budgeting processes. "People rushed off around the Global Fund but collectively the rest of us [budget support donors] have more money that we are providing to the budget [49] (see Endnote 2). But that is not seen to be accessible in the same way ... somehow the idea of the Global Fund money, even if it's relatively small, excited much more political interest. I don't know ... it's seen as an opportunity that anybody can get something out of it and somehow with budget support money that isn't [the case]". (Donor representative)

Various donors reported that they had been willing to increase their contributions via general budget support to the government (and consequently to the health sector) but had been prevented from doing so by the MoFPED on the basis of the country's macroeconomic budget ceilings. Hence it would appear that some members of the government went to considerable lengths to attract additional funding from a source not operating through the mechanism which the government had stated it preferred, while other members rejected funding from sources operating through that mechanism. An explanation of this apparently perverse behaviour was offered: "The Ministry of Finance encourages the use of SWAp, but currently the approach of the Ministry of Finance [with] ... sectoral budget ceilings results in threats to the sector, not only because there is insufficient funding to the sector at the moment, but also because it encourages the sector to seek funds elsewhere, off budget.
If the sector was getting sufficient or a lot more funds through the budget it would be easier to argue against the GF and other GHIs". (Technical assistant)

When disagreements occurred between the government and GHIs, for example because of the lack of alignment with sector plans, the government did not always operate as a single entity. The PEPFAR program was agreed directly with the President's office, without much scope for input from health sector stakeholders. "If the president has said yes, then [a senior health sector official] saying hang on, not like that, isn't going to get us anywhere; he can't even be confident that his ministers are saying the same thing as him". (Donor representative)

The application to Global Fund round four was another indication of non-alignment of motives within government, but also between government and different donors. The budget support and SWAp donors criticized the government for applying to the Global Fund; yet when donors were consulted during a monthly coordination meeting at the time of round four, there had been general agreement in favour of the government applying (though no discussion took place at the meeting as to how the funds should be channelled: via the SWAp or through a project management unit).

Weak institutional environment
A number of institutional issues represented an added layer of complexity in the relationship between the GHIs and the government, most prominently with regard to changes in authority and leadership, as well as rules and regulations which came into conflict. The problem of lack of alignment with the stated goals of government was said to be related to a lack of authority within government, combined with a perceived low level of commitment by senior management [which was seen to be detached from the routine management of the technical programmes, lacking knowledge of their activities and not showing ownership (key informants)] and a lack of strong institutions, which instead made the system rely on key individuals. The waning commitment to SWAp and general budget processes and objectives over the period of the fieldwork seems to exemplify these arguments. Some members of government argued that the rules and regulations of the public bureaucracy would be sufficient to align incentives in this environment. The perception was that because government was a bureaucracy, the policies and rules it had effected would be adhered to. "We have clearly said that our preferred mode of financing is budget support, and over the years, budget support has been on an increasing trend and even to provide more incentives for ministries, the issue of integrating projects into the budget is meant to be a trade off. If you have more projects then you have less budget support; as government we are not very much in control of projects so need to think twice if they are worth it ... the way government works, [is that] it's a bureaucracy, so there are no power struggles in that sense". (Government official)

However, it seemed that the rules and regulations (e.g. the SWAp-related structures like the Sector Working Group) put in place by government had not been able to curb project expansion and ensure that technical programs adhered to the budget system. The SWAp appeared to be a major objective of the majority of development agencies during its introduction and first years of implementation in Uganda (between 2000 and 2002/03).
However, following a change of Minister, there was perceived to be a change in which directorate had more authority in the conflict, explaining the changing levels of support for the SWAp vis-à-vis GHIs. Some key informants observed that at the outset the technical programs were the ones seeking or accepting project funding (including from the Global Fund), but after late 2003 it was said that senior management of the Ministry were more actively involved as well. One government official said: "some of the new people seem not to value the [SWAp] partnership to the same extent". Those who remained supportive of the SWAp principles and structures were said to be few by the time of fieldwork. To a certain degree, the observed shift away from the SWAp and towards projects was related to new leaders taking charge in the MoH. However, these changes took place not only at the higher political level, but also at the technical level, on the government side. On the donors' side, there were also changes of staff at different levels (leadership and technical) due to rotation policies. These changes highlight the volatility of the institutional aid environment, in which changes in persons (with their different personalities and motives) may impinge on the goals and strategies adopted.

Discussion
Interpreting motives through the lens of agency theory
In trying to understand relationships in the aid environment, agency theory seems to provide some insights into the behaviour and motives of the actors involved. The international development assistance context contains various sets of principal-agent relationships between and within organizations [50], through multiple layers of delegation [29]. Figure 1 shows the main sets of principal-agent relationships in this environment. These are described as follows: "In a standard official bilateral aid setting, the chain of principal agent relationships starts with taxpayers as principals, who wish to transfer part of their income to recipients in other countries. They delegate the implementation of this transfer programme to their representatives (parliamentarians, politicians) who become their agents. These agents, in turn, become the political principals to an aid agency in charge of implementation of aid programmes. Within the aid agency, a hierarchical command chain creates a series of principal-agent relationships. When actual implementation is subcontracted to a private consultant or aid services supplier company, the task manager in the aid agency becomes a principal to the contractor; the latter becomes an agent to the task manager. Depending on the contract, the contractor may also be an agent to the recipient agency or counterpart administrator in the beneficiary country. The contractor may end up being an agent to two principals - a typical joint delegation situation. The recipient agent, in turn, is an agent to political principals and the beneficiary population in the recipient country." [29, p. 18]

There are a number of variations to the model described above. A bilateral aid agency (like the UK Department for International Development) providing aid directly to a recipient government acts, or tries to act, as an agent on behalf of its government, and may in some respects act, or try to act, as the principal towards the recipient government. Another option is for governments (principals) to provide aid via multilateral development agencies (such as the World Bank) acting as agents.
In this context, the system of accountability (or the principal's ability to ensure the advance of its objectives) is weakened, as principals have to rely on the various layers of international bureaucracy and chains of principal-agent relationships to monitor and adjust penalties and rewards to performance. Incentives serve to align conflicting objective functions. The GHIs appear to have created incentive mechanisms that have realigned Ugandan government objectives. This suggests that GHIs operate in the role of principal vis-à-vis the Ugandan government, even while acting as 'agents' of their own 'principals' (for example, their donor constituency: the US government, or the President's office, in the case of PEPFAR; a mix of bilateral and multilateral funding agencies in the case of the Global Fund). The (vertical) project approach, as preferred by the GHIs examined in this study, has been identified with short-term time horizons, the attribution of results directly to investments, and greater control over financial management [51]. While fiduciary requirements are usually part of a SWAp or general budget support (GBS) agreement, these may be less effective than the detailed scrutiny that can be exercised over projects. Precisely because a project is accountable for a single or limited number of outcomes and can ignore broader development objectives, it can operate in an 'insulated' environment and 'buy out' local constraints and uncertainties regarding disbursement bottlenecks and onerous bureaucratic controls [52]. All these arguably accord with the demands of political processes in donor countries: democratic government mandates usually of between 4 and 5 years [52], and the need to present success (and avoid scandal) to sustain popular consent to aid. SWAp, by contrast, shifts control from donors to recipient governments.

[Figure 1. Sets of principal-agent relationships in international development assistance. The figure reflects these relationships at the macro level (the area of international development assistance as a whole), at the meso level (between organisations) and at the micro level (within organisations). Source: [47].]

As agents, GHIs' incentives require them to invest and report on programs in a manner consistent with donor government objective functions; as principals, they pass on those incentives. The pursuit of short-term and visible achievements may thus lead them to prioritize "visible and uncontroversial forms of assistance with short-run payoffs ... rather than those with longer-run returns, like institutional reform" [53, p. 20], which would be more compatible with the alignment agenda expressed by the Paris Declaration. In themselves, these insights fail to explain the conflict among different aid mechanisms and with the Paris Declaration. The mixes of bilateral and multilateral agencies contributing to the SWAp, general budgetary support and the Global Fund overlap significantly, and these agencies, together with the US government (which operates PEPFAR), are all signatories to the Paris Declaration.
Insight into the motivations of the supra-national-level actors of those agencies is outside the scope of this research, but other studies have suggested that processes similar to those suggested at the national level apply there too: individuals and groups within these agencies, with differentiated objective functions, have variable degrees of authority over different operations of the agencies: "as a whole, these principals may have for a collective objective the maximization of the same social welfare function as that of a single benevolent regulator. However, each single principal has only a limited mandate to fulfil" [54]. Agency theory would also suggest that differences in the degree to which ultimate principals (taxpayers in donor countries and different constituencies among them) exert oversight over different operations will also be reflected in the incentives created. The influence achieved by some HIV/AIDS advocacy groups over a number of donor governments' operations can explain both the creation of the Global Fund and the particular political exigencies that govern its contract with its funders. The specific, rather than general, political origins of PEPFAR are similarly more likely to explain the incentive mechanisms it has created. In addition to advocacy groups, both PEPFAR and the Global Fund have seen private sector and other civil society organisations permeate their values and structures, which also helps to explain the incentive regimes created by these agencies. While the findings of this paper argue that the incentives in place led to parallel structures and fragmentation of actions, there is scope to interpret these incentives as catalysts of diversity, for example in the form of public-private partnerships. Agency theory also suggests that incentives have lower power where multiple principals compete for the effort of an agent [55]. The simplified institutional external aid scenario depicted in Figure 1 highlights the complexity of the aid system. There are not only multiple actors, such as the UK and US governments, but multiple mechanisms operated by the same governments, all competing for the weak institutional capacities of the Ugandan health system. These two governments alone directly operate aid programs through DFID and USAID and contribute to the Global Fund, to other GHIs such as GAVI, and to multilateral agencies such as the World Bank and the World Health Organization, while the US government operates PEPFAR through separate mechanisms. The multiplicity of mechanisms adds links to the chain of accountability from the ultimate principals in donor and recipient countries (taxpayers and intended beneficiaries of aid) and obfuscates accountability through shared, and therefore diluted, responsibility for the effects of any given mechanism.

Conclusions
This study seeks to contribute to the international debate on GHIs. While others have found similar results in relation to the behaviour of GHIs during the earlier years of their operation [15,23], this paper provides greater explanation and new insights in relation to the motives of GHIs, other actors in aid transactions, and constituencies within the government of Uganda, and shows how those motives are reflected in the incentives created by aid mechanisms and in the reactions to those incentives. Explanation and insight are the strengths of an in-depth case study, here supplemented by elements of a historical perspective and the application of agency theory.
Agency theory helped in understanding the impact of GHIs on the overall health aid scenario in Uganda, rather than on specific elements of it, by highlighting how they changed the way incentives realigned the objective functions of principals and agents throughout the agency chain of development aid. But GHIs were not the only explanatory factor in the realignment that took place in Uganda. The acceptance of less integrated and nationally led aid also required a realignment of power between constituencies within the Ugandan government. The two sets of forces acted in tandem to change the aid contract. This insight does not in itself provide solutions. A key problem in this environment is how to provide incentives that are sufficient to minimize conflicts between the parties and that lead to behaviours which maximize aid effectiveness. Underlying this problem are critical environmental complexities:
- numerous layers of principal-agent relationships, with actors and mechanisms simultaneously acting as principals and agents, weakening accountability links;
- multiple, shifting and conflicting objectives that make it difficult to predict how incentives will align and how they will be reacted to;
- weak institutional and governance environments on both sides of the aid contract that result in poor alignment of activity within relevant organizations as well as between them.
Further research into the above areas could provide some insights, at least at country level, into possible ways of mitigating key agency problems encountered in aid relationships. These could include, for instance, an analysis of how different incentives could work to align or realign motives. Additional research focusing on institutional and governance environments (e.g. leadership and ownership) could contribute to understanding reforms and eventually bring in new ideas on how to strengthen these. Since the fieldwork, further shifts have occurred within GHIs, and at least some of them are supportive of better co-ordination and recipient government leadership, learning from the kinds of problems that have been documented here [7,9]. For instance, the Global Fund increased its use of country-level procurement systems from 33% in 2005 to 56% in 2007, and of country-level monitoring and evaluation systems from 73% in 2005 to 82% in 2007 [56]. If the new government in the US proves to be more sympathetic to the harmonization and alignment agenda, PEPFAR may also evolve further in these directions. Yet much needs to be done to render GHIs more accountable to their ultimate principals. To this end, Riddell [57] proposes a new international aid office to oversee all aid disbursements. Such a body would enforce sanctions in case of donor misbehaviour. However, it is not clear from where the authority to sanction could emanate. Current global health governance is characterized by a large number of powerful actors with different interests, values and mandates [58], suggesting that the creation of a regulatory body for aid is likely to be a politically elusive goal. In its absence, current proposals such as the International Health Partnership+ and the peer review system established by the OECD/DAC could serve to provide more independent feedback to ultimate principals than is currently available, and thus increase pressure on donors to honour commitments, if the political impetus to establish and empower such mechanisms can be mobilized.
Endnotes
1. This amount includes four projects with a duration of three years each.
2. Total donor support to the general budget was US$275.1 million in 2003/4 [49]. In comparison, donor support to the health sector was US$136.5 million in 2004/5, and the Global Fund's average annual contribution was US$53.6 million, calculated on the basis of US$160.6 million allocated to four projects signed between 2003 and 2004 with a duration of three years each, as reported earlier.
Long COVID coping and recovery (LCCR): Developing a novel recovery-oriented treatment for veterans with long COVID

Background: Long COVID affected 13.5% of Veterans Affairs (VA) Healthcare System users during the first pandemic year. With 700,000+ United States Veterans diagnosed with COVID-19, addressing the impact of Long COVID on this population is crucial. Since empirically based mental health interventions for Long COVID are lacking, a vital need exists for a tailored recovery-oriented intervention for this population. This study intends to assess the feasibility and acceptability of a novel recovery-oriented intervention, Long COVID Coping and Recovery (LCCR), for Veterans with Long COVID, aiming to support symptom management and quality of life. LCCR is an adaptation of Continuous Identity Cognitive Therapy (CI-CT), a suicide recovery-oriented treatment for Veterans.

Methods: In a two-year open-label pilot, three single-arm treatment trials will be conducted with 18 Veterans suffering from Long COVID. Each trial includes 16 weekly 60-min sessions delivered via VA Video Connect (VVC) and/or VA WebEx. Primary objectives include optimizing LCCR for Veterans with Long COVID and assessing the acceptability and feasibility of the intervention, using attendance and retention rates, drop-out statistics, and client satisfaction levels. Additionally, potential benefits of LCCR will be explored by evaluating changes in quality of life, resilience, mental health status (anxiety, depression, suicide risk/behavior), and personal identity. The protocol has been tailored based on Veterans' needs assessment interviews and stakeholder feedback.

Conclusion: If the LCCR intervention proves feasible and acceptable, a manualized version will be created and a randomized controlled trial planned to examine its efficacy in the broader Veteran population.

Introduction
Recent epidemiological estimates indicate that 10% of the over 651 million documented COVID-19 cases globally have experienced Post-Acute Sequelae of COVID-19 (PASC) [1]. PASC, colloquially known as Long COVID, is a condition characterized by "signs and symptoms following initial SARS-CoV-2 infection, that persist for more than one month (in mild cases), and more than three months (in cases severe enough to warrant oxygen support), which have a disproportionately severe effect on a patient's quality of life, far beyond what is expected from their initial infection" [2]. Recent data suggest that one in five adults in the United States (U.S.) experience Long COVID symptoms post-infection [39]. Although estimates and impacts of Long COVID vary, 13.5% of Veterans Health Administration (VHA) users were affected by Long COVID during the first pandemic year [43]. Current outcomes have implications for the over 700,000 Veterans diagnosed with COVID-19 across the U.S. [3].
A study of 236,379 COVID-19 patients found that approximately 34% developed psychiatric or neurological diagnoses six months post-infection, in addition to negative physiological outcomes [4]. Similarly, a study of VHA users (N = 73,435) found an increase in mental health challenges, including sleep disorders, anxiety disorders, and trauma, six months after COVID-19 diagnosis [5]. Further, a report on one-year longitudinal follow-up data on hospitalized COVID-19 patients [6] noted statistically significant increases in anxiety or depression from the six-month to 12-month time points. These findings are crucial in estimating the impact of Long COVID on Veterans, who have unique vulnerabilities due to an increased incidence of trauma and of physical and psychiatric risk factors [7-9].

Although Long COVID symptoms can result in significant mental health and functional impairment [10,11,41], there are few empirically supported treatment approaches incorporating both dimensions. Additionally, many in the field contend that, given the complexity and variability of Long COVID manifestations, successful treatment cannot be approached from a single-organ point of view, instead requiring a multidisciplinary approach [12,13]. Consequently, although addressing symptom relief is crucial [1], there is a gap in Long COVID interventions that center on recovery-oriented treatments for the mental health impacts, even as symptoms persist.

In recent years, mental health recovery has shifted its focus from merely achieving symptom remission to a recovery orientation, which involves attaining a fulfilling life despite the presence of mental health symptoms [14]. Multiple frameworks have been developed to identify the core factors of personal recovery, including the widespread CHIME model [15], which identifies five essential recovery processes: Connectedness, Hope and optimism about the future, Identity, Meaning in life, and Empowerment [15]. Studies indicate that recovery-oriented mental health care improves the lives of those with persistent symptoms [16].

The CHIME model's recovery-oriented approach is particularly relevant to Long COVID treatment, as it emphasizes personal growth, resilience, and well-being, even amidst ongoing symptoms. Long COVID Coping and Recovery (LCCR) is a novel, manualized mental health intervention for Veterans with Long COVID, focusing on improving functional status, identity growth, psychological adjustment, coping, resilience, and life satisfaction. This approach targets the five CHIME recovery processes, utilizing psychotherapeutic techniques such as skills training, acceptance, mindfulness, narrative identity, and future self-continuity principles. By applying the CHIME framework to the LCCR intervention, Veterans with Long COVID may be able to concentrate on developing a personally meaningful approach to addressing, and if need be living with, the symptoms of Long COVID, in order to improve function and move towards recovery. This approach addresses the complex nature of Long COVID and its impact on mental health.

LCCR was developed by adapting existing recovery-based group interventions piloted at the VA, including Continuous Identity Cognitive Therapy (CI-CT). CI-CT, specifically designed for Veterans experiencing post-acute suicidal episodes (PASE), aims to nurture a positive sense of identity, life narrative, and personal agency. With demonstrated acceptability, feasibility, and preliminary efficacy, CI-CT is a valuable treatment option [17].
CI-CT emerged as a response to the limitations of prevalent suicide prevention treatments. In contrast to conventional treatments that target acute aspects of suicidal cognitions and risk factors [18], CI-CT emphasizes reconstructing a positive identity, enhancing clients' perception of their present-to-future self-narrative (known as Future Self Continuity (FSC)) [42], and fostering hope and optimism for their future selves [17]. CI-CT's distinctive psychotherapeutic intervention, which prioritizes identity growth, purpose, meaning, and recovery even amidst ongoing suicidal ideation, was well suited to be adapted into a recovery-oriented treatment designed to address Long COVID recovery.

To effectively adapt CI-CT to the recovery needs of the Long COVID population, members of the study team reviewed the CI-CT treatment manual and identified key therapy components along with recovery-oriented concepts that relate to Long COVID needs, such as finding meaning and purpose, viewing the present as one part of a life story, and understanding how the present affects the kind of future that can be created. These extracted concepts were then developed to pivot away from the original suicide focus toward Long COVID. Additionally, sections of the treatment materials more focused on suicide-specific recovery were replaced with sections focusing on recovery in the context of Long COVID's physical and psychological symptoms. Finally, we organized the session content into two modules: Module 1 focuses on addressing immediate coping issues resulting from Long COVID, including 'pacing,' and Module 2 focuses on longer-term recovery, with greater emphasis on identity and creating a sense of meaning and purpose in life.

LCCR's initial intervention framework was designed to be flexible and personalized to meet the complex needs of Long COVID patients. The initial proposal included a core curriculum of twelve 90-min group sessions supplemented by optional sessions to address specific Long COVID issues. Weekly topics were to include presentations from specialist providers, such as pulmonologists and nutritionists, to address the physiological needs of Long COVID patients.

Between November 2021 and July 2022, our team conducted semi-structured interviews with 22 Long COVID Veteran participants from the James J. Peters Department of Veterans Affairs Medical Center (JJP VAMC) Long COVID Clinic to understand their experiences and needs. Many Veterans reported feeling isolated and misunderstood and struggling with their identity as a result of Long COVID. They also reported physical impairments, including brain fog and fatigue. In discussing these findings, our team and stakeholder advisory board raised concerns that the initially proposed treatment length (12 sessions) and duration (90 min per session) might mentally and physically overwhelm the participants, hindering skill acquisition, the sharing of experiences, the grasping of abstract concepts, and personal growth.

To address these concerns, the session duration was shortened to 60 min and the number of sessions was increased to 16, organized into two modules (Module 1: coping; Module 2: recovery/identity-based). This allowed additional time for developing connections and processing treatment content. The revised treatment manual was reviewed by a panel of clinical experts familiar with the care of Veterans with Long COVID. Through three meetings over three months, their feedback on feasibility and the modules' content was incorporated into iterative revisions of the manual's structure and session sequence.
Module 1, comprising the first eight sessions, concentrates on building community support, fostering hope, and improving coping with Long COVID symptoms and the resulting functional limitations, to enable the Veteran to move forward in their recovery journey. Module 2, the next eight sessions, emphasizes achieving long-term recovery through positive self-development, a valued life story, and a meaningful future. Veterans progress from skill-building in Module 1 to an identity-focused approach in Module 2, aiming for agency, personal growth, and a meaningful life narrative despite dealing with a chronic condition like Long COVID. Table 1 provides brief descriptions of each session within the modules.

Utilizing this revised treatment structure, three open-label trials (n = 4-6 Long COVID Veterans per trial) will be conducted, following an iterative approach to psychotherapy development and optimization. Veteran feedback and acceptability and feasibility data will be gathered before, midway through (between Modules 1 and 2), and after treatment. Data will be analyzed, and the treatment will be further refined with input from a stakeholder advisory board and Veteran feedback. The results will inform the development and implementation of a large-scale randomized controlled trial (RCT) of LCCR, and a final version of the facilitator manual and Veteran workbook will be produced based on the findings.

Study design
This study will involve three separate cycles of one-arm treatment development trials, with iterative adjustments made to the treatment manual between cycles based on Veteran and stakeholder feedback. The 16-session treatment will be delivered through the VA-approved telehealth platforms VA Video Connect (VVC) and VA WebEx, to accommodate the difficulty individuals with Long COVID may have with in-person treatment attendance. VVC is the VA's HIPAA-compliant video-conferencing application that allows Veterans to securely interface with providers remotely. VA WebEx is another VA-approved telehealth software that also allows secure telehealth visits. Although VVC is considered best practice for VA telehealth care, VA WebEx will be used as an approved backup option in the event of technical difficulties.

Three assessment time points (TPs), pre- (TP-1), mid- (TP-2), and post-treatment (TP-3), will be used to collect preliminary evidence of LCCR's ability to improve psychological adjustment to Long COVID symptoms, promote resilience, facilitate coping, and improve functioning in Veterans with Long COVID.

Participants and recruitment
To participate in the LCCR telehealth treatment, Veterans will be referred through clinicians at the JJP VAMC Long COVID Clinic. This clinic caters to Veterans who have been diagnosed with an initial COVID-19 infection through diagnostic assessments (such as polymerase chain reaction (PCR) or antibody tests) or physician evaluations and are currently experiencing prolonged symptoms. Prior to enrollment, individuals' medical records will be reviewed to determine preliminary eligibility and suitability for this group treatment. To be pre-screened for participation, individuals will be required to provide information about their COVID-19 diagnosis (e.g., method of diagnosis and the duration of persisting symptoms), as well as basic information (e.g., access to a device with internet and webcam). Following the pre-screen, participants will be assessed against the full inclusion criteria (see section 2.3).
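As a rough illustration of the pre-screen logic just described, the sketch below checks the basic conditions: a confirmed COVID-19 diagnosis, symptoms persisting for at least one month, and access to a suitable device. The function and parameter names are hypothetical assumptions for exposition only; the full inclusion and exclusion criteria in Table 2 would still be assessed separately.

```python
# Hypothetical pre-screen check reflecting the requirements described above;
# names are illustrative and not taken from the study's actual materials.
def passes_prescreen(diagnosis_confirmed: bool,
                     symptom_duration_months: float,
                     has_device_with_internet_and_webcam: bool) -> bool:
    """Return True if the basic pre-screen conditions are met.

    diagnosis_confirmed: positive COVID-19 diagnosis (PCR, antibody test,
        or physician evaluation).
    symptom_duration_months: time symptoms have persisted since infection.
    """
    return (diagnosis_confirmed
            and symptom_duration_months >= 1
            and has_device_with_internet_and_webcam)

print(passes_prescreen(True, 3.0, True))   # True: proceed to full screening
print(passes_prescreen(True, 0.5, True))   # False: symptoms under one month
```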
All participants will be required to provide informed consent before enrolling, which will be recorded in their VA medical records. At TP-1, baseline assessments will be conducted to collect the clinical characteristics of patients, using the Mini-Mental State Examination (MMSE) [19] to assess cognitive function and the Modified COVID-19 Yorkshire Rehabilitation Scale (C19-YRSm) [20] to assess Long COVID symptom severity and subsequent functional status. The World Health Organization Disability Assessment Schedule 2nd Version (WHODAS 2.0) [21] will be used to evaluate health-related functional status. Eligible participants (see section 2.3 for eligibility criteria) will be asked to complete a variety of baseline measures. All consent and baseline measures will be collected in person or via telephone.

The initial recruitment plan aimed for a total of approximately 36 participants (10-12 Veterans per group/development trial). However, Veteran feedback indicated interest in, and potential benefits of, group discussion, which a large group size could inhibit within the time limit of each session. To address this, the study now intends to enroll a total of approximately 18 participants, with 4-6 Veterans per group.

Inclusion and exclusion criteria
The inclusion criteria include a positive screen for Long COVID (i.e., a positive COVID-19 diagnosis via PCR and/or an antibodies blood test, and symptoms lasting at least one month after the initial infection). All individuals will meet the mild Long COVID designation (signs and symptoms persisting for more than one month following initial SARS-CoV-2 infection), and some individuals may also meet the severe designation (symptoms lasting for more than three months, in cases severe enough to warrant oxygen support) [2]. A specific mental health diagnosis is not required for participation. For the complete inclusion and exclusion criteria, please refer to Table 2.

Outcome measures
While the primary objectives of this study are treatment development, feasibility, and acceptability, preliminary data will also be collected to identify potential benefits of LCCR. The assessed constructs were divided into primary and secondary outcome measures. Given the focus of LCCR on promoting personal recovery while coping with a chronic health condition, the primary outcome is self-assessed functional status. Secondary outcomes include constructs related to quality of life and well-being, resilience, coping, mental health status (such as anxiety, depression, and suicide risk/behavior), and identity concepts. (As LCCR primarily targets personal recovery constructs and psychological manifestations of Long COVID, future iterations of this protocol will be updated to emphasize more recovery-centered outcomes rather than physiological outcomes.)

[Table 1: LCCR module and session descriptions with brief objectives; session topics include, for example, Finding Joy and Self-Care. Each bullet point in the table represents the overall objectives and topics discussed in each session within the modules.]

Primary outcome measures
World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0) [21]. The WHODAS 2.0 will be used to assess functional status. It is a 36-item self-report questionnaire that assesses six domains of functioning across physical and mental health disorders in clinical and non-clinical populations: cognition, mobility, self-care, getting along, life activities, and participation. Items are scored on a Likert scale ranging from 1 (None) to 5 (Extreme or cannot do) and are summed to create total and domain scores.
Modified COVID-19 Yorkshire Rehabilitation Scale (C19-YRSm) [20]. The C19-YRSm is a 17-item self-report scale adapted from the original 22-item COVID-19 Yorkshire Rehabilitation Scale [22] that will be used to assess Long COVID symptom severity and subsequent functional status. Items are rated on a scale from 0 (none of this symptom) to 3 (extremely severe level or impact). The C19-YRSm is divided into four subscales: symptom severity, functional disability, other symptoms, and overall health. Subscales are scored by summing the highest scores for each item, with higher scores indicating greater severity.

Secondary outcome measures
Measure of Current Status (MOCS) [23]. The MOCS is a two-part measure assessing the impact of intervention-related factors on various aspects of an individual's current status or well-being. Part A, which assesses participants' perceived level of skill in responding to the challenges of everyday life (e.g., ability to relax), will be used to measure resilience. It consists of 13 items, each rated on a Likert scale from 0 (I cannot do this at all) to 4 (I can do this extremely well). Item scores are averaged for the final score.

The Quality of Life Scale (QOLS) [24]. The QOLS measures quality of life relevant to diverse patient groups with chronic illness across six domains: material and physical well-being; relationships with other people; social, community, and civic activities; personal development and recreation; and independence. There are 16 items with a response scale ranging from 1 (terrible) to 7 (delighted) to indicate levels of satisfaction across the domains. Items are summed for a total score, with higher scores indicating greater quality of life.

Future Self Continuity Questionnaire (FSCQ) [25]. The FSCQ will be used to assess individuals' temporal sense of personal identity from the present to the future in three areas: vividness, similarity, and positivity. There are 10 items with a 6-point response metric. The total FSCQ score is the average of all items, and subscale scores are the means of their associated items. Higher scores indicate increased FSC. The total FSCQ and its components have demonstrated high levels of reliability and validity and have been used in clinical and non-clinical populations [25,26].

Suicidal Behaviors Questionnaire-Revised (SBQ-R) [27]. The SBQ-R is a 4-item measure that will be used to measure suicide risk. Each item taps into a different dimension of suicidality: (1) lifetime suicidal ideation and/or attempt; (2) frequency of suicidal ideation over the past 12 months; (3) threat of a suicide attempt; and (4) self-reported likelihood of future suicidal behavior. Items are summed for a total score (ranging from 3 to 18), with higher scores indicating increased severity and risk.

Patient Health Questionnaire-9 (PHQ-9) [28]. The PHQ-9 is the 9-item depression module from the full PHQ, with each item representing a depressive symptom. Items are scored on a scale ranging from 0 (not at all) to 3 (nearly every day) to assess the frequency of each symptom over a two-week period. Items are summed for a total score (ranging from 0 to 27), with higher scores indicating increased depression severity.
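As a concrete illustration of the summed-score logic described for several of these measures, here is a minimal sketch of scoring the PHQ-9. The function is hypothetical and for exposition only; it is not part of the study's data pipeline.

```python
# Minimal PHQ-9 scorer: nine items, each rated 0 ("not at all") to
# 3 ("nearly every day"), summed to a total ranging from 0 to 27.
def score_phq9(item_responses: list[int]) -> int:
    if len(item_responses) != 9:
        raise ValueError("PHQ-9 requires exactly nine item responses")
    if any(r not in (0, 1, 2, 3) for r in item_responses):
        raise ValueError("each PHQ-9 item must be scored 0-3")
    return sum(item_responses)

# Example: a participant endorsing mostly mild, intermittent symptoms.
print(score_phq9([1, 1, 0, 2, 1, 0, 1, 0, 0]))  # -> 6
```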
Generalized Anxiety Disorder-7 (GAD-7) [29]. The GAD-7 is a brief self-report measure assessing Generalized Anxiety Disorder symptoms and their severity over the last two weeks. The scale has 7 items rated on a Likert scale from 0 (never) to 3 (almost every day). Items are summed for a total score (ranging from 0 to 21). The GAD-7 has excellent reliability and validity [29].

Protocol and procedure
Participants will begin treatment within approximately one month of completing informed consent and baseline procedures. All self-report follow-up data will be collected on Qualtrics immediately following treatment completion (<1 week between finishing Module 1 and starting Module 2, and <1 month after finishing Module 2). Compensation of up to $300 will be provided for assessment completion (TP-1 = $75, TP-2 = $75, TP-3 = $75). Participants will receive an additional $75 for participating in qualitative interviews conducted after Module 2 (TP-3) and a $50 bonus for completing 75% of the sessions in each module (Module 1 = $50, Module 2 = $50), for a total of up to $400.

LCCR intervention
LCCR groups will meet through either the VVC or VA WebEx telehealth platform for 60-min weekly sessions over a 16-week course, with 4-6 participants per group (approximately 18 participants in total; refer to section 2.2 for additional information). Before the first session, participants will receive the LCCR workbook containing all LCCR therapy content and optional individual work to be completed between sessions (please refer to Table 1 for additional information about treatment content and structure). The study coordinator will review the Group Telehealth Agreement with each participant prior to the first session, following the VA Office of Connected Care (OCC) guidelines [40]. At the initial meeting, the facilitator will introduce themselves to the participants and review the group rules and guidelines (e.g., respect for peers and the facilitator) and the treatment purpose and format. Participants will be provided with session meeting information (date/time and link) and facilitator contact information. LCCR groups will be led by a minimum of two facilitators, including licensed psychologists, doctoral-level psychology fellows, and/or bachelor's-level psychology technicians supervised by a licensed clinician. The facilitators will receive guidance on handling issues that may arise during the sessions, including off-topic discussion and inflammatory comments, and will be encouraged to use supportive language to foster engagement among the participants.

Feasibility and acceptability
Feasibility data will be evaluated on 1) ease of implementation, 2)
recruitment, and 3) attendance/retention rates. Implementation ease will be measured by the number of hours clinicians spend in preparation, delivery, and supervision, with a target of 60 hours per cycle. Recruitment will be measured by the rate of successful referrals to LCCR from Long COVID Clinic providers. Attendance and retention rates will be monitored for the total number of sessions, and the specific sessions, each Veteran attends. Adequate retention will be defined as a minimum of 70% of participants attending at least five of the first eight sessions (Module 1), five of the second eight sessions (Module 2), and 10 of the 16 total sessions (Modules 1 and 2). Recruitment rates will be considered adequate if at least 65% of Veterans approached for participation agree to participate, and follow-up response rates of 70% will be considered feasible. These benchmarks align with VA study protocol standards from previous studies [17,30].

To assess acceptability, we will use attendance, satisfaction, and participant feedback. At the end of each session, participants will be asked, "In what ways did you find this session helpful?"; "In what ways could we improve this session?"; and "In what ways could we improve future sessions?" Participants will also complete the Acceptability of Intervention Measure (AIM), Intervention Appropriateness Measure (IAM), and Feasibility of Intervention Measure (FIM) [31] at each assessment point. These four-item measures are considered "leading indicators" of implementation success (acceptability and feasibility of the intervention and intervention appropriateness) [32]. At the end of treatment (during assessment TP-3), additional qualitative questions will be asked to obtain participants' feedback, which will be analyzed for future development purposes.

Facilitators and adherence scale development
Group facilitators will receive weekly supervision from the principal investigator (PI) and participate in audio-taping and session review to refine the LCCR manual. An adherence scale will be created to evaluate the fidelity of the LCCR treatment based on key aspects such as the treatment's structure, content, and principles, and the facilitator's clinical competence (e.g., building rapport with participants). This scale will comprise two elements, covering adherence to (1) general LCCR requirements and (2) session-specific requirements. Scale items will be scored on a Likert scale ranging from 1 (not at all adherent) to 6 (completely adherent).

Data analytic plan
The de-identified data from the three one-arm trials will be securely stored in a password-protected folder accessible only over the VA intranet. The data will then be entered into a Statistical Package for the Social Sciences (SPSS) database at the JJP VAMC. Access to the data-containing computers will be restricted to authorized study personnel only. Preliminary analyses will use descriptive statistics, and outcome measures will be compared across time points using repeated-measures mixed models. As the study aims to develop LCCR and generate hypotheses (rather than test them), conclusions about LCCR's effectiveness will not be drawn from these pilot studies. In line with this, sample size determination followed best practices for clinical pilot studies rather than statistical power [33].
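To illustrate the planned repeated-measures comparison, the sketch below fits a mixed model with a random intercept per participant across the three time points. The long data format, column names, toy values, and the use of Python's statsmodels (rather than the SPSS workflow named above) are all illustrative assumptions, not the study's actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data: one row per participant per assessment time point.
df = pd.DataFrame({
    "participant_id": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "time_point":     ["TP1", "TP2", "TP3"] * 4,
    "phq9_total":     [14, 10, 8, 9, 9, 6, 17, 12, 11, 7, 5, 5],
})

# A random intercept per participant captures within-person correlation
# across repeated assessments; time point enters as a categorical fixed effect.
model = smf.mixedlm("phq9_total ~ C(time_point)",
                    data=df, groups=df["participant_id"])
result = model.fit()
print(result.summary())
```

The same pattern would apply to any of the outcome measures described earlier, substituting the relevant total score for the hypothetical phq9_total column.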
Stakeholder feedback
The results of the AIM, IAM, and FIM measures and the qualitative interviews will be reviewed and shared with stakeholders between each treatment cycle, allowing iterative changes to be made to the treatment approach before starting the next cycle. A written summary of the feasibility and acceptability data will be provided to stakeholders after all three groups have completed the study, with the aim of identifying any changes needed to the LCCR treatment materials or design before submitting a Merit Grant for a large-scale RCT.

Discussion
The development of LCCR as a novel mental health intervention for Veterans with Long COVID is an important step forward in addressing the complex multisystemic nature of this condition. Its unique two-pronged treatment approach, which combines building coping skills with recovery- and identity-based work toward creating a purposeful life, may benefit other chronic illnesses and health conditions beyond Long COVID. Feasibility and acceptability trials are critical in determining its potential benefits, including improved psychological functioning, quality of life, and resilience. Successful implementation of LCCR could have significant implications for improving the quality of life of Veterans with Long COVID and other populations with chronic health conditions.

This pilot study has limitations to consider, such as being conducted at an urban northeastern VAMC, which may limit generalizability to other areas of the U.S. Additionally, the recruitment pool consists of a VA population with a high prevalence of military-connected and pre-existing health conditions, which may overlap with Long COVID symptoms. Furthermore, due to the skewed gender ratio of clients seeking care in the VA, the majority of participants may be male, which could limit the generalizability of the findings to female Veterans [34]. However, this could also contribute to the literature, given the higher risk of Long COVID in females [35,36]. The study's inclusion criteria, allowing for participants who have experienced COVID-19 symptoms for one month or longer, are another source of limitation. While this is in line with a universal definition of Long COVID [2], it leads to limitations: individuals' symptoms could resolve within the first few months after infection, which could inflate the apparent efficacy of the treatment. This is a challenge inherent to Long COVID, as we are still learning more about this condition. Along these lines, given the growing understanding of Long COVID [44], future revisions of our criteria may be necessary for the next steps of this research, such as the planned RCT follow-up. For instance, we may consider restricting enrollment to those whose symptoms have persisted for a longer duration, should more research and/or the one-arm trials indicate that this is appropriate. Finally, the sample sizes selected are within the range considered sufficient for early-stage studies, but they do not allow for hypothesis testing or drawing firm conclusions [33,37,38]. By establishing a manualized version of LCCR and conducting further research, we expect to refine this intervention, with the ultimate goal of giving Veterans suffering from Long COVID hope, connection, and a path to recovery. We hope this recovery- and identity-focused psychotherapy will provide a model that can be adapted to other chronic health conditions.
Trial status

Currently, the second of three pilot development trials is being administered, and data on feasibility and acceptability are being collected as outlined above.

Module 1: Laying the Foundation for Recovery

Session 1: Introduction
• Introducing the therapy and the concept of recovery
• Discussing current Long COVID symptoms and experiences
• Reviewing emergency coping skills

Session 2: Energy Conservation
• An introduction to Pacing: "reserving the oil in your tank"
• Addressing barriers to Pacing and Long COVID coping skills

Session 3: […]

This work is supported by an RR&D Small Projects in Rehabilitation Research (SPiRE) Grant for the Veteran Affairs Research Department (1I21RX004092-01A1).

Contribution statement: Yosef Sokol: Conceptualization; Funding acquisition; Writing - original draft; Writing - review & editing. Chana Silver: Writing - original draft; Writing - review & editing. Sofie Glatt: Writing - original draft; Writing - review & editing. Lakshmi Chennapragada: Writing - review & editing. Sarah Andrusier: Writing - review & editing. Cameron Padgett: Writing - review & editing. Ariana Dichiara: Conceptualization; Funding acquisition; Writing - review & editing. Marianne Goodman: Conceptualization; Funding acquisition.

Table 2. Inclusion and exclusion criteria for the development LCCR trials.

Inclusion criteria
3 Positive screen for Long COVID (e.g., COVID-19 positive, diagnosed with a PCR test, an antibodies blood test, and/or a diagnosis by a physician at the JJP VAMC Long COVID Clinic, with symptoms lasting 1 month or longer after infection)
4 Participation in medical/mental health services at the JJP VAMC
5 Sufficient clinical stability and readiness to participate in group therapy as deemed by their VA service provider

Exclusion criteria
1 Active alcohol or opiate dependence requiring medically supervised withdrawal
2 Active psychosis
3 MINI Mental Status < 23 or inability to function in a group setting
4 Unable to operate telehealth platforms or other electronic devices
5 Non-English speaking
6 Lack of capacity to consent
7 Unable or unwilling to provide at least one contact for emergency purposes
Legal Policy of Anti-Corruption Supervisor Design: A New Anti-Corruption Model in Indonesia

Introduction

Conceptually, corruption in developing countries is a component, even a constitutive part, of the system itself, and it degrades that system. As a result, there are those who are skeptical and assert that an integrated response is to enhance the system that does exist 1. The majority of people believe that good governance is an absolute necessity for the establishment of a political system of government that is more favorable to the people's interests and grounded in universal democratic principles. Governance plays an important role in the achievement of sustainable development; a common consensus on its definition must be achieved to realize development. After a closer examination of the definitions presented in the literature, anti-corruption enforcement has become tougher due to increased legal support and public trust; conduct that interferes with such enforcement is referred to as obstruction of justice 7.

Indonesia is a country divided into 34 provinces, 416 regencies, and 98 cities. Corruption cases are spread throughout Indonesia, in both the central and regional governments, according to the available data. The region with the most corruption cases was Java, with 341 cases, and the region with the fewest was the Nusa Tenggara Islands, with 17 cases. The possibility of large-scale corruption cases throughout Indonesia necessitates a collaborative effort among stakeholders. Money laundering is a common form of corruption in Indonesia. The focus of a well-established strand of literature on anti-money laundering (AML) regulation is the rule of law, an effective regulatory framework and judicial system, structures to prevent corruption, government effectiveness, and a deeply ingrained culture of compliance. While existing research convincingly argues that any deficiencies in these areas can jeopardize the development of an effective AML framework, evidence on the impact of AML action on financial crimes remains limited. Furthermore, there are no in-depth analyses of the best design for an AML institutional framework in general or for financial intelligence units (FIUs) in particular 8.

Four factors influence law enforcement efforts as one of the pillars of democracy: first, the law itself, both in terms of the substance of statutory regulations and the formal law used to enforce material law; second, law enforcement officers' professionalism; third, adequate facilities and infrastructure; and fourth, the public's perception of the law itself 9. In order to address the various challenges and problems of the Indonesian bureaucracy, we must consider ways or solutions to improve the bureaucratic administration system based on State Administrative Law. The effort to eradicate corruption through the implementation of laws prohibiting obstruction of justice should begin with a reform of corruption laws. There is a need to amend the laws dealing with criminal acts, particularly corruption, due to flaws in the formulation of obstruction of justice in Article 21, as well as other special criminal laws such as the law combating terrorism and the law combating human trafficking. This lack of adaptation not only leads to confusion and contradiction between laws, as previously stated, but also to conflicts of authority among law enforcement institutions. The laws mentioned above must be clearly formulated in order to specify their subjects and jurisdiction in plain legal language.
In addition to the clear formulation of laws, the issue of investigative authority must be considered, because many institutions in the Indonesian criminal justice system today are authorized to investigate and prosecute corruption, including obstruction of justice in corruption cases. Although these are domestic issues, seeking international advice could be very beneficial. The derived meaning of corruption in Indonesia represents the manifestation of the moral values, customs, and culture that live in society 10. The source of religious morality has served as the fundamental spirit above customary and cultural values in society. The derived meaning of corruption should not lean back-to-back with religion as the source of moral values in Indonesia within the frame of the principles of wisdom and unanimity. The derived meaning of corruption should be seen from more than just the perspective of conflicting political interests among powers. This derived meaning may set boundaries that ethically erode cultural and customary values. Malice and good intention will then no longer serve as authentic ethical boundaries in society. The derived meaning of corruption represents the manifestation of political power, where political interests are derived collectively and partially. The partial derivation of political interests determines the scope of the malicious intention of corruption in the legislation 11. The meaning of this intention, as intended in the legislation, represents a pragmatic intention of the politics that rule. The norm of corruption, which should serve as a clear boundary between primary morality and political interests, has partially shifted to a state of being permissive toward the pathological behavior of elites with power 12.

The derivation of corruption has found its lexical identity according to its time 13. This identity undergoes a process of internalization of latent elements in political power. In certain situations and periods, the lexical identity of the meaning of corruption manifests the brand and slogan of power of its time 14. It is no longer authentic in reflecting the ethical nature and morality of the people; it represents a limitless reflection of dominant politics over other powers. The lexical identity of the meaning of corruption becomes the contest stage between political power and the modus of pseudo-justification of development, social and justice interests, and the interests of economic and social rights and prosperity 15. The jargon of malicious intention is attached to the lexical identity of political powers, one over another, creating the master status of position and opposition in state governance within a legal framework. This study is focused on the derived meaning of corruption and the identity of the lexical meaning of corruption in the legislation of the old order, new order, and reform era in Indonesia. It cannot be denied that law administration has become a necessity not only for the government, but also for a pluralistic society that is undergoing necessary development and is served by the government. This has happened to public officials who have been "caught" in corruption cases 16. In terms of administrative functions, efforts to realize a democratic, clean, and authoritative government system are a top priority for the people and government of Indonesia in the current reform era.
Bureaucratic reform in the form of public service has marked the beginning of awareness of the mechanism of public service delivery and has become a pillar of the government's commitment to organize its system of government. In order to strengthen the Indonesian bureaucracy and governance, good governance principles must be implemented and used as a guideline for state administrators. In order to achieve good governance, regulations governing criminal acts of corruption must be strengthened. The derivation of the meaning of corruption in the philosophy and legislation of all three orders in Indonesia serves as the guide to the lexical identity of the meaning of corruption in its time. The source of morality becomes the standard of meaning in the derivation of meaning in the three orders. Various efforts must be realized within the bureaucracy by strengthening state administrative law, namely: the application of the principles of good governance and a closed system of bureaucracy in the practice of administering the state; upholding the principles of good governance in carrying out duties and responsibilities; legal strengthening; improvement of state institutions; improvement of the integrity and ethics of state administration; and formation of community awareness.

Research Method

This study employs normative legal research with a conceptual and case study approach. A conceptual approach is used to comprehend concepts related to bureaucracy and the corruption monitoring system. The case approach is used to investigate and solve problems in real-world situations. The research is descriptive in nature. The primary data source is secondary data derived from literature reviews. The collected data were analyzed using qualitative descriptive analysis 17.

A New Anti-Corruption Model in Indonesia

The derived meaning of corruption in the old order positioned the return of state finance over other interests of legal certainty and justice. The return of state finance was a determining factor in deriving the meaning of the malicious intention of corruption. Normative elaboration indicates that the derived meaning of corruption was reduced to the meaning of the return of state finance. Corruption was no longer seen from its malicious intention but was rather reduced to transactions 18 of civil matters that set the boundaries between the good and bad faith of corruptors individually. Corruptors incapable of returning the money, or unwilling to do so, were reduced in status to debtors with bad faith or intention (te kwader trouw), while corruptors willing or able to return the money were regarded as debtors with good faith (te goeder trouw). The state was positioned as a creditor and the corruptors as debtors. Law Number 24 of 1960 represents a justification for derivatively erasing the malicious intention of corruption. Law Number 24 of 1960 derived the meaning of the malicious intention of corruption as a pure debt-related matter within the scope of civil law. Law Number 24 of 1960 proves that a privatization of public law took place, meaning that public values were reduced to the interests of private and civil matters. The role and responsibility of the state in maintaining and realizing the public interest in corruption cases was delegated to debt collectors organized as a committee.
Economic sustainability is one of the indicators of sustainable development; it emphasizes that the work of government institutions must have long-term objectives to protect the lives of future generations from the negative effects of overspending of public funds by some of these institutions, and to achieve the best use of economic resources to meet societal needs 19. The derived meaning of corruption as in Law Number 24 of 1960 can be elaborated as follows: corruption is defined as the failure to pay off debts to the state; debts are collected by force by the committee formed by the state; corruption is defined as inappropriately using debts; corruption is defined as failing to pay or pay off debts to the state; the malicious intention of corruption remains unless an agreement of full payment is made; an agreement of full payment of debts is comparable to a decision made by a judge; and this agreement serves as a basis for confiscation, auction, and the detention of a debtor, who is not allowed to hire a lawyer. The derived meaning of corruption as in Law Number 24 of 1960 destroyed the malicious intention of corruption and gave rise to a lexical-semantic identity highlighting the state as a creditor and the corruptors as debtors. Law Number 24 of 1960 serves as the basis for the disruption of the literal meaning of the state of law. The lexical-semantic identity of the state as creditor and corruptors as debtors disrupts not only the malicious intention but also the constitutional meaning of the state.

KKN practices (corruption, collusion, and nepotism) were common in the New Order government 20. Law Number 3 of 1971 gave corruption the meaning of being committed by state apparatuses inappropriately using the authority, opportunities, or infrastructure embedded in an official and public position. A state apparatus that committed corruption gained the lexical identity of hampering development and harming the finance or economy of the state. The derived meaning of corruption committed by a state apparatus was regarded as an act contravening the spirit of the Corps (KORPRI). The lexical identity of a state apparatus as "abdi negara" (servant of the state) was reduced to the meaning of serving and supporting the economic development determined by the authorities. State apparatuses committing corruption were linked to disruptors of development as a single meaning lexically established by state authorities. The derived meaning of corruption as in Law Number 3 of 1971 was referred to as the basis for maintaining the stability of state security (the militarization doctrine, as often preferred by the government). Thus, state apparatuses committing corruption gained the lexical identity of disruptors of the stability of state security. This security stability served as an ideology of development that had to be obeyed and performed by all state authorities at all levels. Law Number 3 of 1971 also serves as a basis to justify all state apparatuses, whether of malicious or good intention, under a single meaning established by executive powers (the politicization and governance of state apparatuses). The single interpretation of "abuse of existing authority, opportunities, or all infrastructure embedded in a person with public position" by state apparatuses is set forth in the norm of disciplinary rules of KORPRI.
The single authority of state officials at all levels became the doctrine of obedience and loyalty among civil servants, observed simply to avoid being lexically labeled as corruptors. The justification of the derived meaning of corruption served the authority of state officials, leaving no public space to contest the meaning of corruption committed by civil servants and/or state officials. The lexical identity of a corruptor marked the demarcation between the "insider" (in-group) in the Corps and the "outsider" (out-group). The state officials had enough space to draw this dividing line between the "in-group" and the "out-group". The authority of the derived meaning of corruption as in Law Number 3 of 1971 gave rise to the phenomena of "latent corruption" and "manifest corruption". The former is understood as the conduct of "abuse of power, opportunities, and infrastructure that came with the official position a person was entitled to", done by the state apparatuses (the boss and/or the subordinates), which harmed the state finance or economy and national development, but where those conducting this wrongdoing were "in-group" people; thus, it carries the lexical identity of "pseudo servants of the state". The latter, however, carries the meaning that "abuse of authority, opportunities, and infrastructure due to official position" did or did not take place, but the persons involved were categorized as "out-group", certainly representing the lexical identity of "corruptor". The derived meaning of corruption as in Law Number 3 of 1971 can be summarized as follows: the malicious intention of corruption is embedded in the conduct of abusing authority, opportunities, and infrastructure, and in receiving gifts and promises made possible by a person's official or public position; the standard of obedience and loyalty as pseudo "servants of the state"; the hampering of economic development; the disruption of stability; and the pseudo doctrine of policies of promotion and demotion.

The Indonesian transformation process, known as the Era Reformasi, which began in 1999, was a significant institutional change in the social, political, and economic spheres that had the potential to impact income inequalities 21. The spirit of freedom and the will to retaliate were in the public interest following the enactment of Law Number 31 of 1999. The derived meaning of the malicious intention of corruption leaned more toward the narration of the spirit of freedom and the will for revenge on the new order. The details of the conduct of corruption derivatively describe the will and the elements of the reform agenda. The numbness felt in the new order was regarded as the product of an authoritarian and corruptive system that was seemingly correctible by the articles in Law Number 31 of 1999. The enforcement of Law Number 31 of 1999 tended to correct, not improve, the corruptive system of government into a clean and corruption-free government. The corrective spirit of the narration in the formulation of the meaning of corruption as in Law Number 31 of 1999 derivatively explains the will and a vital element of the freedom from the corruptive new order era. Corruption eradication was the intended target, which negated the meaning of prevention in realizing a clean and corruption-free government. The derived meaning of corruption in the formulation of the norm, and the open interpretation of law enforcers in giving meaning to the elements of corruption, factually became determining factors.
Law enforcers of all levels and structures shared the awareness that corruption took place in particular portions. Each law enforcer could neatly file "in a drawer" documents recording corruption according to that enforcer's own interpretation and understanding. The dedication and achievement of law enforcers were measured by their capabilities and the number of derivative interpretation results. The derived meaning of corruption in texts 22 was not always in line with that of context. The gap between the textual and contextual spheres was exploited to fulfill the targets and submissions mentioned above, not to eradicate and prevent the malicious intention of corruption. The lexical identity of corruption as in Law Number 31 of 1999 tended to justify the spirit of freedom inappropriately. The lexical identity of the meaning of corruption was regarded as an identity that impeded development, was deceiving, was the enemy of society, and was, in a pseudo way, against Pancasila and the 1945 Constitution of Indonesia 23; it was attached to public officials who, once in power, did not do what they had promised in their political campaigns. The derived meaning of corruption in Law Number 31 of 1999 can be explained as follows: corruption impedes development and harms state finance and the national economy; corruption breaks the oath and promise of state administrators; and corruption falls within the scope of obstruction of justice. The derived meaning of corruption as in Law Number 20 of 2001 widens the scope of the malicious intention existing in Law Number 31 of 1999 to include social and economic rights. The malicious intention of corruption, defined as conduct harming the state finance/national economy, is further extended in meaning to conduct that also violates the constitutional rights of the people in the economic and social spheres. According to the non-governmental organization (NGO) Transparency International, corruption is the abuse of entrusted power for private gain. Such abuse may happen at the level of day-to-day administration and public service (petty corruption) or at the high level of political office (grand corruption) 24. On the one hand, the meaning is restricted to state finance/the national economy. On the other hand, no restriction of the economic and social rights of the people applies, and there is no conceptual formulation of articles regarding the restrictions and scopes of economic and social rights.

Corruption in the public sphere was thus reduced to the civil meaning of debt-related matters. The malicious intention of corruption was no longer measured according to common values in the context of the core values of a people that strongly adhere to the principle of value judgment. The malicious intention of corruption was regarded as no more than the bad faith of corruptors individually, overlooking the main principle that the malicious intention of corruption can lead to the instability of state finance and development. The derived meaning of corruption during the era of the old order referred to the open-tolerant principle, accommodating the will of corruptors after they were caught having committed corruption. The distinction between the malicious and non-malicious intention of corruption lay in the capacity of corruptors to willingly return the embezzled money to the state through the committee of debt collectors.
The open-tolerant principle regarding the malicious intention of corruption seems to have lost its power of certainty and justice, since it is impossible to ensure whether corruptors have the good faith to return the money. There is no time limit regarding when corruptors should return the money, nor any specification of the legal consequences that may arise if corruptors fail to act in good faith. The derived meaning of the malicious intention of corruption in the era of the new order stood on the principles of the development of the universe. The meaning of corruption was seen as hampering development, harming state finance, and spoiling stability. Corruptors were restricted to state administrators of all levels and scopes of power. The doctrine of servants of the state labeled all state apparatuses, and it became the symbol of the integrity of the state apparatuses. The label of corruptor and the malicious intention of corruption were only addressed to weak state apparatuses. Political, cultural, economic, and social factors determined the justification of the malicious intention of corruption in the era of the new order. The culture of submission of the state apparatuses was no longer measured by the standard of primary tasks or functions, but rather by the standard of the symbolic benefits of materials given to the superior. The procedures for appointing and dismissing state apparatuses were not measured by achievement standards and the prestige of dedication and loyalty to the state, but rather by the standard of the "inscribed tribute" submitted to the superior. The derived meaning of the malicious intention of corruption referred to the principle of the size of the derivation of the "inscribed tribute". The principle of the integrity of state apparatuses in the doctrine of "achievement, dedication, loyalty, blameless act (PDLT)" became a murky limit for assessing the integrity of state apparatuses symbolically. Another reason for emphasizing a broad and complex integrity framework (rather than a narrower spectrum of corruption) is the diversity and complexity of corruption. A broader framework is also useful for considering what contributes to the protection of integrity and the prevention of integrity violations, such as corruption 26.

The derived meaning of the malicious intention of corruption in the era of reform follows that of the era of the new order. In the reform era, the subjects regarded as corruptors are no longer limited to state apparatuses but also include non-state actors where a loss of state finance/the national economy is involved. Malicious acts in office as regulated in the Penal Code and violations of the socio-economic rights of the people were not regulated in the era of the old order, while they are in the reform era. The expansion to individual subjects and private corporations also serves as a distinguishing feature of the meaning of the malicious intention of corruption during the reform era. The spirit of freedom transcends its limits, becoming an open space for whoever wishes to give an interpretation of the malicious intention of corruption. The derived meaning of the malicious intention of corruption in written laws is often overshadowed by an era of freedom that transcends its limits, and it sometimes goes "wild" in justifying which action is considered to carry malicious intention and which is not.
The derived meaning of the malicious intention of corruption is often formulated "in an open space" with the "embedded message" of the power of certain parties. The trial process, with its authority to give meaning to malicious nature, is constitutionally "snatched" by extrajudicial partial power, so that the formulation and consideration used to justify the meaning of malicious nature no longer reflect the crystallization of the values and ideology of the state, but tend to represent a murky reflection of the will and power of certain parties. The "embedded message" of the parties that hold power and authority replaces the "embedded message" of the values and ideology of the state in formulating and determining the meaning of the malicious nature of corruption in the reform era. The connection between the derived meanings of the malicious intention of corruption in the old order, new order, and reform era lies in the interpretation exercised by authority and particular power (state apparatuses and/or persons and/or private corporations). The written norm (das Sollen) that should serve as the basis for determining the derived meaning of the malicious intention of corruption is controlled by the power and authority of the state and/or a person or private corporations (das Sein). The written norms that serve as the foundation of the derived meaning of the malicious intention of corruption have lost their substantive spirit. Just legal certainty regarding the derived meaning of the malicious intention of corruption in the eras of the old order, new order, and reform seems to work as a banal narrative illusion that only creates the sensation of corruption eradication.

The lexical identity of the meaning of corruption in the era of the old order erased the meaning of the malicious intention of corruption. Corruption is supposed to carry the lexical identity of a crime contravening positive values and public morality, but it was reduced to bad faith within the scope of individual civil matters. Corruptors were regarded as debtors owing money to the state and were required to pay off their debts to the state. The lexical identity implying that corruptors represent and are aware of the malicious intention of conduct that causes financial loss to the state was erased by the will and capacity to return the money to the state in good faith. The intention of corruptors (mens rea) was treated as outside the context of bad faith, even though corruptors clearly carry a malicious intention that is against the moral values of the public. The sanction may involve the detention (gijzeling) of a corruptor who fails to return the stolen money within a certain period, and this sanction is purely a civil matter. The lexical identity of the meaning of corruption in the new order held that the malicious nature of corruption contravened the ideology of the development of the universe. Corruptors were seen as interrupters of economic development. State apparatuses committing corruption were no longer to be regarded as "servants of the state". The integrity of state apparatuses became embedded in how skillfully they "serve" their boss. The state apparatuses previously defined as "public servants" were reduced in meaning to "servants of the boss". State apparatuses not capable of, or having no will to, serve their boss could be lexically labeled as "standing against the boss". The lexical identity of the meaning of corruption is often linked to standing against and disobeying the superior.
The malicious nature of corruption as conduct against the law is equated with the conduct of standing against the boss. The lexical identity of corruption has been enacted within the will of the doctrine of the development of the universe that fits the taste of the boss. The principle of "we know it" in the jargon of "as long as the boss is happy" has been referred to as the standard in determining the lexical identity of the meaning of corruption in the era of the new order. The lexical identity of corruption in the reform era is that of an enemy of society, of hampering development, and of a tendency toward being anti-Pancasila and anti-Constitution. The moral values of the people, Pancasila, and the 1945 Constitution have been reduced and inappropriately invoked by elites of the state and/or private powers outside the state to attach the lexical identity of corruption to whomever they wish. The misuse of this identity, and of the pure identity of the state, by authorities and certain powers to label any person and/or corporation as a corruptor has become the new standard in the era of reform. The reformed lexical identity of the meaning of corruption has transformed into a new shape: a structured, systematic, and massive distribution of corrupt practices. The lexical identity of the meaning of corruption is not restricted but structured through justification in the systematic formulation of norms, and these norms can be performed by all people, including persons or corporations. The reform is like a toll highway helping to blur the lexical identity of the meaning of corruption within the establishment of a permissive culture. The reform that expects to see a clean and corruption-free government has been impeded by the erosion wrought by a permissive culture that has crystallized into a shield protecting the malicious intention of corruption from socio-economic, legal, and social perspectives. The lexical identities of the meaning of corruption in the eras of the old order, new order, and reform meet at the point where power, elite authorities of the state, and/or private entities negatively label individuals and/or corporations deemed to "interrupt" their will and interest. The lexical identity of the meaning of corruption shifts to the prerogative rights of the power and authority of the state intertwined with private power. The lexical semantics of the meaning of corruption in the eras of the old order, new order, and reform have been misused to attach the label of malice to the persons concerned. The lexical identity of the meaning of corruption is distributed at a particular "cost" in a free market where authorized elites and certain powers act as "brokers" that determine the value of the "stocks as tribute" that must be included in the transaction of the "exchange" of freedom.

Legal Policy of Anti-Corruption Supervisor Design

The establishment of administrative regulations for the performance of state functions and the provision of public services is another important anti-corruption administrative and legal form. These legal acts are used within executive authorities to detail the procedure for carrying out the actions and decisions made by executive authorities and their officials in the course of carrying out state functions.
Certain administrative regulations are directly related to the exercise of executive authorities' anti-corruption powers. The most important category of administrative law is the form of public administration. When considering the activities of executive authorities, the category of the administrative and legal form of public administration, which has its own distinguishing features and can also be classified for a variety of reasons, must be applied. Understanding the content of management forms in the practical activities of executive authorities enables public administration to be more efficient and cases of corruption on the part of officials to be eliminated 27. This includes the decision to commit acts of corruption or to refrain from them; by cultivating the desire to refrain, anti-corruption values such as honesty, discipline, responsibility, hard work, simplicity, independence, fairness, courage, and concern for others must be applied. A person's intention not to commit corrupt acts by implementing anti-corruption values can be influenced by his or her perception of happiness and satisfaction with life or welfare 28. As a result, one method of preventing and eliminating corruption in Indonesia is to implement the principles of accountability and transparency in governance administration. The lack of post-reform government commitment to bureaucratic reform is comparable to the lack of government commitment to eradicating corruption, which has thus far become an acute disease in the Indonesian government bureaucracy 29.

Poorly implemented governance leads to ineffective decisions, whether they are related to costs, resources, or budget allocations. Of course, this has an impact on the economy and development. Governance is therefore a critical issue that influences economic development and various anti-corruption measures 30. Consequently, oversight of governance implementation is required. Legal supervision is the supervision exercised by the judiciary; it is intended to determine whether or not the government's legal actions are in accordance with the applicable legal norms (rechtmatigheid or onrechtmatigheid). Furthermore, there is political oversight, which is carried out by the people's representative bodies against the government in the exercise of government power. In this case, supervision is meant to determine whether the use of government authority is consistent with the will of the people. Finally, citizens can exercise oversight over the administration of government 31.

The failure or weakening of State Administrative Law allows the bureaucracy and state officials to engage in corrupt practices. As can be seen from the Corruption Perceptions Index (CPI, known in Indonesia as IPK) released by Transparency International (TI), Indonesia's score was 37 in 2020, a drop of three points since 2019. A higher CPI score indicates that a country is freer of corruption; conversely, the lower a country's CPI score, the worse its handling of corruption. Indonesia is currently ranked 102 out of 180 nations. We can see that throughout 2020, Indonesia was experiencing a corruption crisis within the bureaucracy 32. For comparison, China's corruption index fell from 41 to 39 in 2018, but then began to rise again from 2018 to 2021.
In 2018 it was 39, in 2019 it was 41, in 2020 it was 42, and in 2021 it reached 45, a significant increase. Corruption is also an important cause of poverty 33 and one of the gravest challenges to Chinese society 34. New anti-corruption requirements were issued by the Communist Party of China (CPC), leading to significant anti-corruption achievements. One of the key measures is the system of central discipline inspections that aims to crack down on "tigers and flies," as corrupt officials are typically called 35. Corruption among Chinese officials is also thought to be a major issue for the country's long-term prosperity, social stability, and the legitimacy of the ruling Party 36. The Central Commission for Discipline Inspection (CCDI) began dispatching central discipline inspection teams to government entities across the country in 2013. China's large-scale anti-corruption actions since 2013 can be treated as a natural experiment, with a binary variable (Policy) defined to represent the policy shock. Since then, the central discipline inspection system has become increasingly important in the fight against corruption. The CPC's focus on the capital market is indicated by the inspection of the China Securities Regulatory Commission (CSRC) by the seventh central inspection team in October 2015. China's anti-corruption campaign has increased profitability and investment in innovation, particularly among firms that are vulnerable to expropriation 37. It has also been pointed out that, since anti-corruption strengthens the enforcement of formal rules and higher-level directives, "clean" officials become afraid of doing their daily jobs through the informal practices that would otherwise help overcome the pathology of formal procedures. Particularly vulnerable to anti-corruption investigations is the work of public good provision, which frequently involves state-business collaboration. The anti-corruption campaign should therefore be closely linked to environmental performance. China's promotion system (a tournament-style promotion system) appears to facilitate corruption 38. One study uses China, the world's largest developing country, to demonstrate the critical role of anti-corruption in corporate environmental performance, emphasizing the practical importance of political construction in ecological environmental governance, and vice versa 39.

Corruption persists in the majority of the world's countries today. Corruption exists in all countries, regardless of how advanced their social and economic systems are, and it is also a significant impediment to democratisation and good governance 40. Corruption is becoming more prevalent in Indonesia. Following the Soeharto era, the anti-corruption program aimed to improve transparency and governance 41. According to Law No. 30 of 2002, the Corruption Eradication Commission (KPK) is an independent state institution that is not influenced by other powers. The KPK has been given the authority to prevent and prosecute criminal acts of corruption, and it has long been involved in the prevention and eradication of corruption cases. According to the 2012-2018 KPK annual reports, there is a new pattern of behavior: the most corrupt institutions tended to be ministries and national institutions in 2009-2016, while from 2017 to 2020 the most corrupt institutions became district and city governments.
One of the drivers of this change in pattern is the central government's widespread implementation of fiscal decentralization in order to maximize the role of regional autonomy and the independence of regional development. Meanwhile, by type of case, bribery cases were the most frequently handled by the KPK from 2010 to 2020. This is closely related to fiscal decentralization, which causes regions to manage the procurement of government goods and services independently. The process of winning government project tenders is extremely vulnerable to bribery. Corruption thrives in places where institutions are of poor quality. This is because poorly designed or poorly implemented institutions influence a society's incentive system and, as a result, shape individuals' behaviors, leading them to engage in corrupt activities. Corruption increases firms' costs both directly, through the payment of bribes, and indirectly, through the induced uncertainty, which is a major source of transaction costs 42. Corruption imposes enormous costs on businesses and the societies in which they operate; it is also a complex problem that defies simple solutions. Because most corruption crimes are committed by bureaucrats, the role of the local government bureaucracy is very strategic. As the quality of government bureaucrats improves, so will public awareness. Corruption and public mismanagement have been identified as relevant factors that may reduce growth, foreign investment, government legitimacy, and even political stability 43.

The adoption of anti-corruption business models, in particular, is tightly linked to each national legislation and integrated into the corporate governance system via a specific compliance program involving both internal actors, such as the board of directors (BoD) and internal auditors, and external actors, such as stakeholders. In order to avoid bribery and establish a good corporate governance model, companies and their key internal actors are tasked with proactively managing anti-corruption principles and disclosure, ensuring corporate sustainability 44. Thus, the results should serve as a useful integrated sustainable corporate governance model to prevent corruption on the basis of a corporate sustainability system and a specific compliance program 45. Good governance principles such as accountability, transparency, and law enforcement can limit opportunities for corruption, making anti-corruption efforts more effective. The rule of law is required for good governance, and good governance is required for the rule of law 46. Corporate governance involves some major issues, such as paying attention to national legislation systems, adopting best-practice codes, developing corporate governance models, identifying the role of the board and management, and paying attention to corporate sustainability 47. Corruption must be combated holistically, involving all relevant parties, including government officials, the private sector, and the community, and employing both preventive and punitive measures.
According to the OECD Principles of Corporate Governance (2015), corporate governance is aligned with the company's strategic direction, with a focus on "effective monitoring of management by the board, and the board's accountability to the company and the shareholders." The OECD defines corporate governance as "promoting transparent and fair markets and the efficient allocation of resources," while also emphasizing the role of stakeholders in corporate governance 48. Corporate governance and national anti-corruption legislation are the two main pillars of modern businesses operating in a complex environment. Sustainable corporate governance aims to achieve a well-functioning Board of Directors in order to facilitate decision-making in the company regarding sustainable external changes and corporate sustainability. Of course, the paradigm of legal positivism still dominates law enforcement today; in order to implement the values of justice, it is necessary to correct this paradigm so that law enforcement does not rest on positive law alone 49.

Corporate anti-corruption efforts can be viewed as a subset of corporate social responsibility (CSR). Some aspects of corporate governance, such as a CSR board committee and government ownership, have a positive impact on CSR reporting. The presence of such a committee demonstrates a company's concern for its social and environmental actions, as well as its reputation 50. Corporate governance has a moderating effect on some cultural influences, for example by limiting the negative effects of power distance (which is otherwise associated with significantly lower-quality reporting). These findings have implications for developing and implementing global reporting standards (for example, the GRI standards), as they highlight the significance of cultural differences that may impact cross-country comparability. Culture may influence how these guidelines and standards are implemented in practice, while good corporate governance, particularly from CSR committees, may mitigate some of the negative cultural effects 51. To achieve good governance, all stakeholders must work together to eliminate corruption as effectively as possible. Coordination and collaboration in the fight against corruption in Indonesia can take the form of a Penta Helix collaboration: a model of collaboration that includes elements of government, the private sector, academia, the community, and the media. In order to eradicate corruption, each stakeholder's role and collaboration must be expanded once more.

Conclusion

The derived meaning of the malicious intention of corruption in the old order, new order, and reform era gradually reduces the meaning of malicious nature from the perspective of public morality and the ideology of the state. This shift of meaning began with the "privatization" of the public-morality meaning of corruption into an event of civil and private matters in the era of the old order. Furthermore, the derived meaning of the malicious intention of corruption involving the doctrine of "nationalization" during the new order served to justify the integration of the "obedience" of state apparatuses. In the reform era, the meaning of corruption has been reduced through the spirit of "liberalization" of the public will to cover the promises made.
The lexical identity of the meaning of corruption in the eras of the old order, new order, and reform is made to "tie" power to the authority of the opposition. In the era of the old order, the lexical identity of the meaning of corruption served to extend tolerance to an opposition committing corruption (as debtors). In the era of the new order, it was invoked as an effort to apply repression to stop the opposition, while in the reform era the lexical identity of the meaning of corruption is used more as a propaganda of illusion that embraces the opposition.
Effective bounds for Faltings's delta function

We obtain bounds for the Faltings's delta function for any Riemann surface of genus greater than one. The bounds are in terms of the genus of the surface and two basic quantities coming from hyperbolic geometry: the length of the shortest closed geodesic, and the smallest non-zero eigenvalue of the Laplacian which acts on smooth functions. In the case when the surface in question is a finite degree cover of a fixed base surface, bounds are given in terms of the degree of the cover and data associated to the base surface.

Introduction

In [1] and [2], S. J. Arakelov introduced Green's functions on compact Riemann surfaces in order to define an intersection theory on arithmetic surfaces, thus initiating a far-reaching mathematical program which bears his name. G. Faltings extended the pioneering work of Arakelov in [8] by defining metrics on determinant line bundles arising from the cohomology of algebraic curves, from which he derived arithmetic versions of the Riemann-Roch theorem, Noether's formula, and the Hodge index theorem. Although Faltings employed the classical Riemann theta function to define metrics on these determinant line bundles, he does refer to the emerging idea of D. Quillen to use Ray-Singer analytic torsion to define the metrics on these determinant line bundles as being "more direct". One of the many aspects of the mathematical legacy of Christophe Soulé is the central role he played in developing higher dimensional Arakelov theory, where Quillen metrics are fully utilized; see [22] and the references therein.

From Faltings's theory [8], there appears a naturally defined analytic quantity associated to any compact Riemann surface X. The new invariant in [8] became known as Faltings's delta function, which we denote by δ_Fal(X). Many of the fundamental arithmetic theorems and formulas in [8], such as those listed above, amount to statements which involve δ_Fal(X). By comparing the Riemann-Roch theorem from [8] and the arithmetic Riemann-Roch theorem, Soulé expressed in [21] the Faltings's delta function in terms of the analytic torsion of the trivial line bundle on X equipped with the Arakelov metric; see equation (2) below, as well as [21] and, more recently, [24]. The Polyakov formula allows one to relate values of the analytic torsion for conformally equivalent metrics. As a result, one can use Soulé's formula for the Faltings's delta function and obtain an identity which expresses δ_Fal(X) in terms of the hyperbolic geometry of X; see Theorem 3.2 below, which comes from [14]. In [14], we used the relation between the Faltings's delta function and the hyperbolic geometry in order to study δ_Fal(X) through covers. As an arithmetic application of the analytic bounds obtained for δ_Fal(X), we derived in [14] an improved estimate for the Faltings height of the Jacobian of the modular curve X_0(N) for square-free N which is not divisible by 6. After the completion of [14], A. N.
Parshin posed the following question to the second named author: Can one derive an effective bound for the Faltings's delta function δ_Fal(X) in terms of basic information associated to the hyperbolic geometry of X? The purpose of the present article is to provide an affirmative answer to Parshin's question. More specifically, our main results, given in Theorem 6.1 and Corollaries 6.3 and 6.4, explicitly bound δ_Fal(X) in the case when X is a finite degree covering of a compact Riemann surface X_0 of genus bigger than 1, where the bound for δ_Fal(X) is effectively computable once one knows the genera of X_0 and X, the smallest non-zero eigenvalues of the hyperbolic Laplacian acting on X_0 and X, and the length of the shortest closed geodesic on X_0 (as well as some ramification data in case the covering is ramified). An important ingredient in the analysis of the present paper is the algorithm from [9], which provides effective means by which one can bound the Huber constant on X, a quantity associated to the error term in the prime geodesic theorem; see [9] and the references therein. As with the main result in [9], it is possible that the effective bound we obtain here may not be optimal, perhaps even far from it. However, the existence of an effective bound for the Faltings's delta function δ_Fal(X), albeit a sub-optimal bound, may be a tool by which one can further investigate the application of Arakelov theory to diophantine problems, as originally intended.

The paper is organized as follows: After recalling basic notations in section 2, we express Faltings's delta function δ_Fal(X) in hyperbolic terms of X in section 3. Section 4 is devoted to deriving effective bounds for the ratio µ_can(z)/µ_hyp(z) of the canonical to the hyperbolic metric on X, and section 5 gives effective bounds for the Huber constant C_Hub,X on X. In section 6, we combine the results of sections 3, 4, and 5 to derive effective bounds for δ_Fal(X). In section 7, we discuss an application of our results to an idea of A. N. Parshin in an attempt to give effective bounds for the height of rational points on smooth projective curves defined over number fields.

Acknowledgements: We would like to use this opportunity to thank Christophe Soulé for having introduced us to the theory of arithmetic intersections by generously sharing his broad knowledge and deep insights on the subject with us. Furthermore, we would like to thank Alexei Parshin for his interest in our results and for having pointed out to us an application to his work. Finally, we would like to thank the referee for some of his/her comments.
2 Basic notations

2.1. Hyperbolic and canonical metrics. In this note X will denote a compact Riemann surface of genus g_X > 1. By the uniformization theorem, X is isomorphic to the quotient space Γ\H, where Γ is a cocompact and torsion-free Fuchsian subgroup of the first kind of PSL_2(R) acting by fractional linear transformations on the upper half-plane H = {z ∈ C | z = x + iy, y > 0}. In the sequel, we will identify X locally with its universal cover H. We denote by µ_hyp the (1,1)-form corresponding to the hyperbolic metric on X, which is compatible with the complex structure of X and has constant negative curvature equal to −1. Locally, we have
$$ \mu_{\mathrm{hyp}}(z) = \frac{i}{2}\cdot\frac{\mathrm{d}z \wedge \mathrm{d}\bar{z}}{\mathrm{Im}(z)^{2}}. $$
We write vol_hyp(X) for the hyperbolic volume of X; recall that vol_hyp(X) is given by 4π(g_X − 1). By µ_shyp, we denote the (1,1)-form µ_hyp / vol_hyp(X) corresponding to the rescaled hyperbolic metric, which measures the volume of X to be 1. We write dist_hyp(z, w) for the hyperbolic distance between two points z, w ∈ H. We recall the formula
$$ \cosh\bigl(\mathrm{dist}_{\mathrm{hyp}}(z,w)\bigr) = 1 + \frac{|z-w|^{2}}{2\,\mathrm{Im}(z)\,\mathrm{Im}(w)}. $$
We denote the hyperbolic Laplacian on X by Δ_hyp; locally, we have
$$ \Delta_{\mathrm{hyp}} = -y^{2}\left(\frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}}\right). $$
The discrete spectrum of Δ_hyp is given by the increasing sequence of eigenvalues
$$ 0 = \lambda_{X,0} < \lambda_{X,1} \le \lambda_{X,2} \le \ldots $$
The (1,1)-form µ_can associated to the canonical metric is defined as follows. Let {ω_1, ..., ω_{g_X}} denote an orthonormal basis of the space Γ(X, Ω¹_X) of holomorphic 1-forms on X. Then, µ_can is locally given by
$$ \mu_{\mathrm{can}}(z) = \frac{i}{2 g_X} \sum_{j=1}^{g_X} \omega_{j}(z) \wedge \overline{\omega_{j}(z)}. $$
We recall that the Arakelov metric on X is induced by means of the residual canonical metric ‖·‖_Ar on Ω¹_X, which turns the residue map into an isometry.

2.2. Hyperbolic heat kernel for functions. The hyperbolic heat kernel K_H(t; z, w) (t ∈ R_{>0}; z, w ∈ H) for functions on H is given by the formula
$$ K_{\mathbb{H}}(t; z, w) = \frac{\sqrt{2}\, e^{-t/4}}{(4\pi t)^{3/2}} \int_{\rho}^{\infty} \frac{r\, e^{-r^{2}/(4t)}}{\sqrt{\cosh(r) - \cosh(\rho)}}\, \mathrm{d}r, $$
where ρ = dist_hyp(z, w). The hyperbolic heat kernel K_X(t; z, w) (t ∈ R_{>0}; z, w ∈ X) for functions on X is obtained by averaging over the elements of Γ, namely
$$ K_X(t; z, w) = \sum_{\gamma \in \Gamma} K_{\mathbb{H}}(t; z, \gamma w). $$
The heat kernel K_X(t; z, w) satisfies the equations
$$ \left(\Delta_{\mathrm{hyp},z} + \frac{\partial}{\partial t}\right) K_X(t; z, w) = 0, \qquad \lim_{t \to 0} \int_X K_X(t; z, w)\, f(w)\, \mu_{\mathrm{hyp}}(w) = f(z) $$
for all C^∞-functions f on X. As a shorthand, we use in the sequel the notation K_X(t; z) := K_X(t; z, z).

2.3. Selberg zeta function. Let H(Γ) denote the set of conjugacy classes of primitive, hyperbolic elements in Γ. We denote by ℓ_γ the hyperbolic length of the closed geodesic determined by γ ∈ H(Γ) on X; it is well-known that the equality
$$ 2\cosh(\ell_{\gamma}/2) = |\mathrm{tr}(\gamma)| $$
holds. For s ∈ C, Re(s) > 1, the Selberg zeta function Z_X(s) associated to X is defined via the Euler product expansion
$$ Z_X(s) = \prod_{\gamma \in \mathcal{H}(\Gamma)} Z_{\gamma}(s), $$
where the local factors Z_γ(s) are given by
$$ Z_{\gamma}(s) = \prod_{n=0}^{\infty} \bigl(1 - e^{-(s+n)\ell_{\gamma}}\bigr). $$
The Selberg zeta function Z_X(s) is known to have a meromorphic continuation to all of C with zeros and poles characterized by the spectral theory of the hyperbolic Laplacian; furthermore, Z_X(s) satisfies a functional equation. For our purposes, it suffices to know that the Selberg zeta function Z_X(s) has a simple zero at s = 1, so that the quantity Z'_X(1) is well-defined and non-zero.

2.4. Prime geodesic theorem. For any small eigenvalue λ_{X,j} ∈ [0, 1/4), we define
$$ s_{X,j} := \frac{1}{2} + \sqrt{\frac{1}{4} - \lambda_{X,j}} $$
and note that 1/2 < s_{X,j} ≤ 1. For u ∈ R_{>1}, we recall the prime geodesic counting function
$$ \pi_X(u) := \#\bigl\{\gamma \in \mathcal{H}(\Gamma) \;\big|\; e^{\ell_{\gamma}} \le u\bigr\}. $$
Introducing the logarithmic integral
$$ \mathrm{li}(u) := \int_{2}^{u} \frac{\mathrm{d}\xi}{\log(\xi)}, $$
the prime geodesic theorem states
$$ \pi_X(u) = \sum_{0 \le \lambda_{X,j} < 1/4} \mathrm{li}\bigl(u^{s_{X,j}}\bigr) + O\!\left(\frac{u^{3/4}}{\log(u)}\right) $$
for u > 1, where the implied constant for all u > 1, not just asymptotically, depends solely on X. We call the infimum of all possible implied constants the Huber constant and denote it by C_Hub,X.
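As a numerical plausibility check (ours, not part of the paper) of the heat kernel formula recalled in subsection 2.2, one can verify that K_H has total mass one: in geodesic polar coordinates around z, the hyperbolic area element is 2π sinh(ρ) dρ, so ∫_0^∞ K_H(t; ρ) 2π sinh(ρ) dρ should equal 1 for every t > 0. The Python sketch below uses scipy; the choice t = 0.5 is arbitrary, and the inverse-square-root singularity of the inner integrand at r = ρ is integrable.

```python
# Numerical check that the hyperbolic heat kernel K_H(t; rho) recalled in
# subsection 2.2 has total mass 1 over H (the value of t is arbitrary).
import numpy as np
from scipy.integrate import quad

def K_H(t: float, rho: float) -> float:
    """McKean's formula for the heat kernel on the hyperbolic plane H."""
    integrand = lambda r: r * np.exp(-r**2 / (4.0*t)) / np.sqrt(np.cosh(r) - np.cosh(rho))
    # The integrand has an integrable 1/sqrt singularity at r = rho.
    integral, _ = quad(integrand, rho, np.inf, limit=200)
    return np.sqrt(2.0) * np.exp(-t/4.0) / (4.0*np.pi*t)**1.5 * integral

t = 0.5
# Geodesic polar coordinates: the area element on H is 2*pi*sinh(rho) d(rho).
mass, _ = quad(lambda rho: K_H(t, rho) * 2.0*np.pi*np.sinh(rho), 0.0, np.inf, limit=200)
print(mass)  # should be close to 1.0
```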
3 Faltings's delta function in hyperbolic terms 3.1.Faltings's delta function.Faltings's delta function δ Fal (X) was introduced in [8], where also some of its basic properties were given.In [10], Faltings's delta function is expressed in terms of Riemann theta functions, and its asymptotic behavior is investigated; see also [23].As a by-product of the analytic part of the arithmetic Riemann-Roch theorem for arithmetic surfaces, C. Soulé has shown in [21] that where with det * (∆ Ar ) the regularized determinant of the Laplacian, vol Ar (X) the volume with respect to the Arakelov metric • Ar , and It has been shown in [14] how Faltings's delta function can be expressed solely in hyperbolic terms.Theorem 3.8 therein states: 3.2.Theorem.For X with genus g X > 1, let dt. Then, we have where with a(g X ) as above and b(g X ) given by b(g Proof.The proof is given in [14].Here we present only a short outline of the proof, which consists of the following three main ingredients: (i) One starts by using the Polyakov formula to relate the regularized determinants with respect to the Arakelov and the hyperbolic metric, namely where φ Ar (z) is the conformal factor describing the change from the Arakelov to the hyperbolic metric. (ii) In a second step, one uses the result [20] by P. Sarnak describing the hyperbolic regularized determinant in terms of the Selberg zeta function, namely (iii) In order to express the conformal factor φ Ar (z) and the canonical metric form µ can (z) in hyperbolic terms, we make use of the fundamental relation which has been proven in Appendix 1 of [14]. 3.3.Remark.We note that formula (4) has meanwhile been generalized to cofinite Fuchsian subgroups of the first kind of PSL 2 (R) without torsion elements in [16], and, as a relation of (1, 1)currents, to cofinite Fuchsian subgroups of the first kind of PSL 2 (R) allowing torsion elements in [3]. Based on formula (3), the following bound can be derived for δ Fal (X) in terms of basic hyperbolic invariants of X.For this we introduce the following notations where λ X,1 is the smallest non-zero eigenvalue of ∆ hyp , and we recall that ℓ X denotes the length of the shortest closed geodesic on X and C Hub,X is the Huber constant introduced in subsection 2.4. Corollary. With the above notations, we have the bound with an absolute constant D 1 > 0, which can be taken to be 10 3 . Proof.The proof is straightforward using Theorem 3.2 in combination with the estimates given in Propositions 4.1, 4.2, 4.3, and Lemma 4.4 in [14].For the convenience of the reader, we give now a more detailed derivation of the proof. 
Using Proposition 4.1 of [14] in combination with the inequalities λ X,1 ≥ λ X and vol hyp (X) ≤ 4πg X , the integral in (3) can be bounded as In order to bound the absolute value of the second summand in (3), we first observe that we have to take the second bound in Proposition 4.3 of [14], since the first one being logarithmic in g X is too small; choosing ε = λ X , we obtain Using Lemma 4.4 (i) of [14], we derive from this the bound Finally, in order to bound the absolute value of the third summand in (3), we again observe that we have to take the second bound in Proposition 4.2 of [14], since the first one being logarithmic in g X is too small; choosing again ε = λ X , we obtain Using Lemma 4.4 (ii) of [14], we derive from this the bound The quantity c(g X ) in ( 3) is easily bounded as Adding up the bounds ( 5)-( 8), using that g X > 1, and by crudely estimating the arising integral constants by D 1 = 10 3 , yields the claimed bound.Note that, estimating more rigorously, D 1 can in fact be taken to be 876. 4 Effective bounds for the sup-norm 4.1.Hyperbolic heat kernel for forms.In addition to the hyperbolic heat kernel on H, resp.X, introduced in subsection 2.2, we also need the hyperbolic heat kernel for forms of weight 1 on H, resp.X.The hyperbolic heat kernel for forms of weight 1 on H is defined as in [13], namely we have where T 2 is the Chebyshev polynomial given by T 2 (r) := 2r 2 − 1.The hyperbolic heat kernel for forms of weight 1 on X on the diagonal is then given as where c(γ, z) for γ = a b c d is defined as We note that |c(γ, z)| = 1.From [13], we recall the crucial relation 4.2.Lemma.With the above notations, we have the bound for any t > 0 and ρ > 0. Proof.Starting with the defining formula we decompose the integral under consideration as . . . In order to complete the proof, we will further estimate the integral in (13).Keeping in mind that we finally have to multiply (13) by the factor e −t/4 , we estimate the quantity Multiplying by the remaining factor Adding up the bounds ( 12) and ( 14) yields the claimed upper bound for K (1) 4.3.Lemma.Let X −→ X 0 be an unramified covering of finite degree with X 0 := Γ 0 \H a compact Riemann surface of genus g 0 > 1, and let ℓ X0 denote the length of the shortest closed geodesic on X 0 .Then, the quantity S X can be bounded as for any t 0 > 0. Proof.From the spectral expansion, one immediately sees that the function K X (t; z) is monotone decreasing in t.Using relation (9) together with the triangle inequality, we then obtain for any t 0 > 0, the bound Using the counting function we can express the latter bound in terms of the Stieltjes integral With the notation of Lemma 4.6 of [11], we put u := ρ, a := ℓ X0 /4, and further where r := ℓ X0 /4 and ρ 0 := 3ℓ X0 /4.By the latter choices for r and ρ 0 , the inequalities 2r < ℓ X0 , 2r < ρ 0 < ℓ X0 hold, which enables us to apply Lemma 2.3 (a) of [17] to derive the inequality This in turn allows us to apply Lemma 4.6 of [11], in particular taking into account that K (1) Using the above notation, we get Furthermore, we compute ) . Inserting all of the above into (16), we arrive at the bound Observing the inequality proves the claimed bound. 4.4.Proposition.Let X −→ X 0 be an unramified covering of finite degree with X 0 := Γ 0 \H a compact Riemann surface of genus g 0 > 1, and let ℓ X0 denote the length of the shortest closed geodesic on X 0 .Then, the quantity S X can be bounded as with an absolute constant D 2 > 0, which can be taken to be 1.2 • 10 3 . 
Proof.We work from the estimate (15) for S X given in Lemma 4.3 and insert therein the bound (10) for K (1) H (t 0 ; ρ) obtained in Lemma 4.2, which we rewrite as With this notation and keeping in mind that our bounds are valid for all t 0 > 0, we can rewrite (15) in the form where In order to obtain a precise, effective upper bound for S X , we will evaluate the expression under consideration at t 0 = 10; there is no particular reason for this choice of t 0 except to derive an explicit bound for S X . For the first summand of B 1 (t 0 ; ℓ X0 ) involving the integral, since sinh(ρ + ℓ X0 /2) ≤ e ρ+ℓX 0 /2 and 1 for ρ ≥ ℓ X0 /4, we have the bound hence we obtain for t 0 = 10 We thus get the bound For the first summand of B 2 (t 0 ; ℓ X0 ) involving the integral, we have the bound For the first summand of B 3 (t 0 ; ℓ X0 ) involving the integral, we have the bound hence we obtain for t 0 = 10 Adding up the bounds ( 17) -( 19), we obtain 352 π e ℓX 0 /2 (1 − e −ℓX 0 /4 ) 5/2 , which proves the claim.4.5.Remark.In addition to the cartesian coordinates x, y, we introduce the euclidean polar coordinates ρ = ρ(z), θ = θ(z) of the point z centered at the origin.These are related to x, y by the formulae x := e ρ cos(θ) , y := e ρ sin(θ). ( Given γ ∈ H(Γ), then there exists σ γ ∈ PSL 2 (R) such that For s ∈ C, Re(s) > 1, the hyperbolic Eisenstein series E hyp,γ (z, s) associated to γ is defined by the series using the polar coordinates (20).The hyperbolic Eisenstein series ( 21) is absolutely and locally uniformly convergent for z ∈ H and s ∈ C with Re(s) > 1; it is invariant under the action of Γ and satisfies the differential equation For proofs of theses facts and further details, we refer to [18].By means of the hyperbolic Eisenstein series the following alternative bound for the quantity S X , namely has been obtained in [15].This upper bound for S X involves special values of hyperbolic Eisenstein series in the half-plane of convergence of the series.As such, it is possible to use various counting function arguments, as above, to complete this approach to obtaining an upper bound for the quantity S X analogous to the one given in Proposition 4.4. 5 Effective bounds for the Huber constant 5.1.Remark.In Table 2 of the recent joint work [9] with J. S. Friedman, an algorithm was given to bound the Huber constant C Hub,X for X effectively in terms of our basic quantities g X , d X , ℓ X , λ X,1 , and N [0,1/4) ev,X ; here the newly introduced quantity d X denotes the diameter of X.In the subsequent proposition, we will summarize the result of this algorithm by utilizing convenient yet possibly crude estimates. Proposition. The Huber constant C Hub,X for X can be bounded as here ℓ X denotes the length of the shortest closed geodesic on X, with λ X,1 denoting the smallest non-zero eigenvalue of ∆ hyp , and D 3 > 0 is an absolute constant, which can be taken to be 10 11 . Proof.As mentioned in 5.1, we follow the algorithm given in Table 2 of [9].In the sequel we also use the definitions of the quantities A, B, C, C j (j = 1, 2, 3, . ..) therein. Recalling from [6] the bound for N [0,1/4) ev,X , we obtain for the quantity A estimate Using the inequality (2) from the main theorem of [7], namely we obtain the following bound for the diameter d Hence, the quantity B can be estimated by For the quantity C, we have Next, we have and From this we derive and ≤ 114 349 g X + 42 614 061 e 8πgX/ℓX . 
For notational convenience, let us keep the constant C 16 without replacing it with the above bound for the next few computations.We further have The constant c must satisfy 1 < c < e ℓX , so we may take c := e ℓX /2 , and hence µ := ℓ X /2.With this choice, we find Observing that Thus, we obtain For the quantity C 21 , we find the estimate At this point, we have to correct the statement about the constant C 22 , which comes from Lemma 4.14 in [9].The correct assertion is that (2) . In fact, C 22 has to be such that for any r ≥ 2, we have the inequality li(r) ≤ C 22 r log(r) . For a proof we consider the function for some positive constant d, which we determine such that f (r) is negative for r ≥ 2. Obviously, f (2) < 0, so we have to determine d such that f (r) becomes a decreasing function.We have , hence, we need to have giving the claimed value of C 22 .(Note that the error in the proof of Lemma 4.14 of [9] arose by dividing by a constant which is negative, so then the inequality has to change directions.)Continuing with this value of C 22 , we have Finally, we are in a position to compute C u ; we have Employing finally the bound for C 16 yields the estimate This completes the proof of the proposition. Effective bounds for Faltings's delta function The main result proven in this paper consists in simplifying the bound obtained in Corollary 3.4 and making it effective. 6.1.Theorem.Let X −→ X 0 be an unramified covering of finite degree with X 0 := Γ 0 \H a compact Riemann surface of genus g X0 > 1.Let ℓ X0 denote the length of the shortest closed geodesic on X 0 and λ X,1 , λ X0,1 the smallest non-zero eigenvalues of ∆ hyp on X, X 0 , respectively, and Then, we have the effective bound with an absolute constant D 4 > 0, which can be taken to be 10 15 . Proof.We work from the bound obtained in the proof of Corollary 3.4 (using the notation therein).We will next bound the quantities in terms of the underlying compact Riemann surface X 0 . (i) We start by observing that the trivial inequality holds true for the lengths of the shortest closed geodesics on X, X 0 , respectively. (ii) In order to estimate ev,X , we recall as in the proof of Proposition 5.2 from [6] the bound (iii) From Proposition 4.4, we recall the bound with ℓ X0 as in the statement of the theorem. (iv) Next, we have to estimate C Hub,X .We start by citing Theorem 3.4 of [12] and use the Artin formalism for the covering X −→ X 0 , to derive the bound From the Riemann-Hurwitz formula we now easily derive the bound from which we get where the proof of Proposition 5.2 shows C Hub,X0 ≤ 39 512 073 856 g X0 e 8πgX 0 /ℓX 0 +ℓX 0 /2 (1 with ℓ X0 and s X0,1 as in the statement of the theorem. (v) Finally, we need to bound N (0,5) geo,X .With the above notation, using arguments from the proof of Theorem 4.11 in [11] (as well as the notation r Γ0,Γ therein), we find (as above) Applying the prime geodesic theorem (1) to X 0 and recalling the monotonicity of the logarithmic integral for u > 0, we find log log(5) where C Hub,X0 can be effectively bounded using Proposition 5.2. (ii) On the other hand, if X 0 can be covered by a modular curve Γ(N )\H for the full congruence subgroup Γ(N ) for some N ∈ N, a result of R. Brooks in [5] shows that λ X0,1 ≥ 5/36, which gives the estimate In addition, assuming that X 0 has a model defined over some number field, case (i) above also applies and the bound (30) simplifies to ≤ 10 18 g X0 e 10 gX 0 +ℓX 0 . 
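The corrected claim above characterizes C_22 by li(r) ≤ C_22 · r/log(r) for all r ≥ 2, with li the offset logarithmic integral of subsection 2.4. Since the concrete value of C_22 is not legible in this copy of the text, it can be recovered numerically as sup over r ≥ 2 of li(r) log(r)/r; the sketch below does this on a grid. The grid and its endpoint are arbitrary choices: the ratio tends to 1 from above as r grows, so the supremum is attained at moderate r.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# li(u) = int_2^u dxi/log(xi), the offset logarithmic integral of subsection 2.4,
# tabulated once by cumulative trapezoidal integration on a fine grid.
r = np.linspace(2.0, 1.0e6, 2_000_001)
li_vals = cumulative_trapezoid(1.0 / np.log(r), r, initial=0.0)

# Smallest admissible constant with li(r) <= C_22 * r / log(r) for all r >= 2:
ratio = li_vals * np.log(r) / r
i = int(np.argmax(ratio))
print(round(float(ratio[i]), 4), "attained near r =", round(float(r[i]), 1))
```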
6.3.Corollary.Let X be a compact Riemann surface of genus g X > 1.Let ℓ X denote the length of the shortest closed geodesic on X, λ X,1 the smallest non-zero eigenvalue of ∆ hyp on X, and Then, we have the effective bound with an absolute constant D 4 > 0, which can be taken to be 10 15 . Proof.The proof follows immediately from an analysis of the proof of Theorem 6.1 for the trivial covering X 0 = X. Using Corollary 6.3, we can now also give a variant of the bound (22) in the case that X is a ramified covering of finite degree of a compact Riemann surface X 0 of genus g X0 > 1.For this, we let Ram(X/X 0 ) ⊂ X 0 denote the ramification locus of the given covering. 6.4.Corollary.Let X −→ X 0 be a ramified covering of finite degree of compact Riemann surfaces of genera g X , g X0 > 1, respectively.With ℓ X0 denoting the length of the shortest closed geodesic on X 0 , put Furthermore, with λ X,1 denoting the smallest non-zero eigenvalue of ∆ hyp on X, put Then, we have the effective bound with an absolute constant D 4 > 0, which can be taken to be 10 15 . Proof.We work from the effective bound obtained in Corollary 6.3 and estimate the length of the shortest closed geodesic ℓ X from below and above by quantities depending on the base X 0 .In order to estimate ℓ X from below, we observe that the length of closed geodesics on X, which do not pass through ramification points, can be bounded from below by ℓ X0 ; the same estimate holds true, if the closed geodesic passes through a single ramification point.However, if the closed geodesic happens to pass through at least two ramification points lying above two distinct points of Ram(X/X 0 ), we additionally have to take into account the distances between mutually distinct points of Ram(X/X 0 ) in our estimate.All in all this leads to the lower bound Similarly, we find that the length of closed geodesics on X, which do not pass through ramification points, can be bounded from above by deg(X/X 0 ) ℓ X0 , and the same estimate holds true, if the closed geodesic passes through a single ramification point.Again, if the closed geodesic happens to pass through at least two ramification points lying above two distinct points of Ram(X/X 0 ), we additionally have to take into account the distances between mutually distinct points of Ram(X/X 0 ) in our estimate.This leads to the upper bound Inserting the bounds (32) and (33) into the estimate (31) completes the proof of the corollary. 7 Application to Parshin's covering construction 7.1.The set-up.Let K be a number field with ring of integers O K and S := Spec (O K ).In contrast to the previous sections, let X denote a smooth projective curve defined over K of genus g X > 1, and let X /S be a minimal regular model of X/K, which is semistable.Denote by ω X /S the relative dualizing sheaf of X /S equipped with the Arakelov metric.For p ∈ S, we let δ p be the number of singular points in the fiber above p.For an archimedean place v, we put whose complex points X v (C) constitute a compact Riemann surface of genus equal to g X .In order to simplify our notation, we allow ourselves subsequently to write X v instead of X v (C). In his quest for an arithmetic version of the van de Ven-Bogomolov-Miyaoka-Yau inequality, A. N. Parshin proposed the following inequality (see [19]) here c j are positive constants depending solely on K (j = 1, 2, 3), N K/Q (p) denotes the absolute norm of p, and disc(K/Q) is the discriminant of the field extension K/Q.As is well known by subsequent work of J.-B.Bost, J.-F.Mestre, and L. 
Moret-Bailly (see [4]), the inequality (34) does not hold true in general. 7.2.The covering construction.Assuming the validity of the inequality (34), A. N. Parshin proposed in [19], how to bound the height of K-rational points P ∈ X(K) as effective as possible using the following ramified covering construction.Given the smooth projective curve X/K of genus g X > 1, and P ∈ X(K) a K-rational point, there exists a finite covering X P /K P over X with the following properties: (i) The field extension K P /K is a finite extension of degree effectively bounded as O(g X ) with prescribed ramification. (ii) The covering X P /X is finite of degree effectively bounded as O(g X ) and ramified only at P of ramification index effectively bounded as O(g X ); by the Riemann-Hurwitz formula, the genus g XP of X P is then also effectively bounded as O(g X ). (iii) For each archimedean place v of K and each archimedean place v ′ of K P lying above v, there exists a smooth projective complex surface Y v together with a smooth morphism ϕ Denoting by O KP the ring of integers of K P , setting S P := Spec (O KP ), letting X P /S P be a minimal regular model of X P /K P , which is semistable, and denoting by ω XP /SP the relative dualizing sheaf of X P /S P equipped with the Arakelov metric, the height h(P ) of P can be bounded by the arithmetic self-intersection number which, in turn, can then be bounded using (34), after replacing ω X /S by ω XP /SP .In [19], the quantities δ P (P ∈ S P ), disc(K P /Q), and [K P : Q] are then effectively bounded in terms of the genus g X of X.The contribution from Faltings's delta function δ Fal X P,v ′ (v ′ |v) is bounded in terms of X by arguing that, as P is moving through the set of K-rational points X(K), the function δ Fal X P,v ′ can be viewed as the restriction of a real-analytic function on X v , which takes its maximum on the compact Riemann surface X v .with ℓ Xv denoting the length of the shortest closed geodesic on X v and λ X P,v ′ ,1 denoting the smallest non-zero eigenvalue of ∆ hyp on X P,v ′ .As P is moving through the set of K-rational points X(K) or, more generally, through the compact Riemann surface X v , the Riemann surfaces X P,v ′ (or, rather their isomorphism classes) cover a compact region D in the moduli space M gX P of curves of genus g XP .While P is ranging over X v , the function takes its minimum on D, which we denote by λ v,min .Keeping in mind that X v is defined over a number field, Remark 6.2 (i) allows us to simplify the bound (36) to δ Fal X P,v ′ ≤ 10 17 g XP e 10gX P +gX P ℓX v λ v,min ; here we recall that the genus g XP can be effectively bounded in terms of the genus g X .We conclude by emphasizing that our results do not lead to an effective bound for the height h(P ) of K-rational points P ∈ X(K), since the bound (35) as well as the determination of the minimum λ v,min are not effective. In the notation of Lemma A.1, we then have, using integration by parts, K We now apply the Leibniz rule of differentiation to write ∂ ∂ρ K Using integration by parts on the first term once again, yields the identity ∂ ∂ρ K From Lemma A.1, we conclude that ∂/∂ ρ K H (t; ρ) < 0 for ρ > 0, which proves the claim. 
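The monotonicity just established, together with the mass normalization of the heat kernel, can also be verified numerically from McKean's classical integral formula for K_H(t; ρ) (assuming, as is standard, that this is the normalization intended in subsection 2.2). The snippet below implements the formula, using the substitution r = ρ + u² to remove the integrable inverse-square-root singularity of the integrand at r = ρ, and checks that the kernel integrates to 1 over H and decreases strictly in ρ.

```python
import numpy as np
from scipy.integrate import quad

def heat_kernel_H(t, rho):
    # McKean's formula for the heat kernel on the hyperbolic plane (curvature -1):
    #   K(t; rho) = sqrt(2) e^{-t/4} / (4 pi t)^{3/2}
    #               * int_rho^inf r e^{-r^2/(4t)} / sqrt(cosh r - cosh rho) dr.
    # The substitution r = rho + u^2 makes the integrand regular at u = 0.
    pref = np.sqrt(2.0) * np.exp(-t / 4.0) / (4.0 * np.pi * t) ** 1.5
    def integrand(u):
        r = rho + u * u
        return 2.0 * u * r * np.exp(-r * r / (4.0 * t)) / np.sqrt(np.cosh(r) - np.cosh(rho))
    val, _ = quad(integrand, 0.0, np.inf)
    return pref * val

t = 1.0
# Stochastic completeness: the kernel has total mass 1 against 2*pi*sinh(rho) d(rho).
mass, _ = quad(lambda rho: heat_kernel_H(t, rho) * 2.0 * np.pi * np.sinh(rho), 0.0, 40.0)
print(round(mass, 4))   # -> 1.0 up to quadrature error
# Monotone decay in rho, i.e. d/d(rho) K_H(t; rho) < 0, as proven above.
print(heat_kernel_H(t, 0.5) > heat_kernel_H(t, 1.0) > heat_kernel_H(t, 2.0))  # True
```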
7.3. Parshin's question. After having presented our estimate (23) for Faltings's delta function obtained in Corollary 3.4, Parshin proposed to apply our bound to δ_Fal(X_{P,v'}) in order to obtain a more explicit bound than his. Indeed, applying the bound obtained in Corollary 6.4 to the ramified covering X_{P,v'} → X_v of finite degree, and observing that the ramification locus Ram(X_{P,v'}/X_v) consists of only one point, we are led to the bound
\[
\delta_{\mathrm{Fal}}\bigl(X_{P,v'}\bigr)\le D_{4}\,g_{X_P}\,
\frac{e^{8\pi g_{X_P}/\ell_{X_v}+g_{X_P}\ell_{X_v}}}{\bigl(1-e^{-\ell_{X_v}/4}\bigr)^{5}}
\cdot\frac{1}{\lambda_{X_{P,v'},1}\bigl(1-s_{X_{P,v'},1}\bigr)}.
\]
Impulse Response Characterization of a Commercial Multimode Fiber using Superconducting Nanowire Single-Photon Detectors Time-of-flight measurements are key to studying distributed mode coupling and differential mode group delay (DMGD) in multimode fibers (MMFs). However, current approaches using regular photodetectors with limited sensitivity preclude the detection of weak modal interactions in such fibers, masking interesting physical effects. Here we demonstrate the use of high-sensitivity superconducting nanowire single-photon detectors (SNSPDs) to measure the mode transfer matrix of a commercial graded-index multimode fiber. Two high-performance 45-mode multi-plane light conversion (MPLC) devices served as the mode multiplexer/demultiplexer. Distributed mode coupling and DMGD among different mode groups are accurately quantified from the impulse response measurement. We also observed cladding modes of the MMF as a pedestal of the pulse in the measurement. This work paves the way for applications such as quantum communications using many spatial modes of the fiber. Introduction Mode-division multiplexing (MDM) in multimode fibers (MMFs), as a candidate for next-generation high-capacity optical transmission systems, has attracted great research interest in the last decade [1,2]. The mode dynamics in MMFs are complex: spatial modes in these fibers couple with each other due to perturbations from the environment, imperfect geometry or refractive index fluctuations, and optical pulses propagating in MMFs suffer from modal dispersion, with each mode traveling at a different group velocity. The arrival time difference of two modes in a fiber is defined as the differential mode group delay (DMGD), and it has been extensively studied in optical communications as it directly determines the number of equalizer taps necessary to compensate for the mode coupling [3]. Measuring the intensity impulse response is an important method to study DMGD and mode coupling. Current methods to measure the impulse response of MMFs - including time-of-flight techniques [4,5], swept-wavelength interferometry (SWI) [6], optical time-domain reflectometry (OTDR) [7], and digital signal processing (DSP) [8] - fail to measure long fiber spans and very weak inter-modal interactions due to the limited sensitivity of regular photodetectors. The capability of detecting light at the single-photon level has enabled breakthroughs in many fields, such as reconfigurable photonics [9], light detection and ranging (LiDAR) [10], and optical quantum information applications [11] including quantum computing [12,13] and quantum key distribution [14]. Recently, using ultrasensitive single-photon detectors to study fiber modes has attracted some attention. In 2019, Johnson et al. measured the temporal evolution of orbital angular momentum (OAM) modes at telecommunication wavelengths using a single-pixel camera with a single-photon avalanche diode (SPAD) detector and a digital micro-mirror device (DMD) [15]. In 2020, Chandrasekharan et al. measured the group velocity dispersion, DMGD, and the effective refractive index difference of different spatial modes in a 6-mode fiber using a two-dimensional (2D) SPAD array, over a broad bandwidth (500 nm - 610 nm) in the visible regime [16]. However, 2D SPAD arrays are not available at telecommunications wavelengths because of the high cost of fabricating pixelated arrays. Furthermore, superconducting nanowire single-photon detectors (SNSPDs) show significantly better performance at these wavelengths.
At 1550 nm, commercially available SPADs have a timing jitter of ∼150 ps, quantum efficiencies of ∼20-30%, and dark count rates of ∼50 Hz [17]. In contrast, the commercial SNSPD (Single Quantum Eos) in our group has a timing jitter of ∼10 ps, a quantum efficiency of ∼85%, and dark count rates of <10 Hz. Notably, SNSPDs with sub-3 ps temporal resolution have been demonstrated [18]. In 2021, Mazur et al. performed a proof-of-principle impulse response measurement of a few-mode fiber using an SNSPD and a 3-mode photonic lantern as the mode demultiplexer [19]. In this paper, we measure the impulse response of the dual-polarization 45×45 transfer matrix of a commercial OM3 multimode fiber. With the high sensitivity of SNSPDs, we build the histogram of the impulse response at the single-photon level, enabling us to study very weak modal interactions. The very low dark count rate used (∼200 Hz) determines a noise floor of -136 dBm, giving us the capability to observe distributed mode crosstalk and cladding modes directly at the foot of the pulse. Using the high timing accuracy of ∼10 ps, the DMGD can be accurately measured. Two high-performance 45-mode multiplexers/demultiplexers based on MPLC technology and two optical MEMS (micro-electromechanical system) switches are used to excite and receive one specific spatial mode of the MMF at every single step. This paper is structured into four sections. Section II describes the experimental setup and some critical components used in the measurement. Section III analyzes the results of two cases: the back-to-back (BTB) transmission and the transmission with the MMF inserted. The DMGD and the distributed crosstalk plateau among different mode groups are accurately quantified. Finally, section IV concludes the paper. Experimental Setup The experimental setup is outlined in Fig. 1. A swept-signal generator (Agilent 83650B) provided a sinusoidal clock signal (8000 MHz, 125 ps) for the pulse pattern generator (PPG, Anritsu MP1763B). The PPG divided the clock signal by 8000 (1 MHz, 1 µs) and generated square pulses to drive a directly modulated laser (DML). The DML is a distributed feedback (DFB) laser with a measured wavelength of 1545 nm. The swept-signal generator also provided an external clock signal for the quTAG (qutools company) for synchronization. The DML has a pulse repetition rate of 1 MHz, and each pulse has a measured full width at half maximum (FWHM) of 70 ps. We chose a repetition rate of 1 MHz as the detectors have peak efficiency (∼85%) at this rate; higher rates are measurable at the expense of lower detection efficiency. The average output power of the DML was measured to be -36.0 dBm. To fully characterize the dual-polarization 45×45 transfer matrix, a manually controlled polarization switch, shown in Fig. 1(c), was used in front of the mode multiplexer, and a fiber-based polarization beam splitter (PBS) was used after the mode demultiplexer to realize a polarization-diverse measurement. The polarization switch, as shown in Fig. 1(c), consists of a polarization controller (PC), a 50:50 fiber coupler, a fiber-based PBS (Thorlabs, PBC1550SM-APC) and a power meter (HP 8153A). A pair of optical switches (DiCon MEMS 64-channel model) and the MPLCs constitute the mode-diverse measurement stage (green background in Fig. 1(a)). Outputs from the second PBS were connected to two independent channels of the SNSPD system. Since the meandering nanowire inside the SNSPD is polarization sensitive, two PCs were added between the second PBS and the SNSPD.
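The time-of-flight histograms in this work are standard TCSPC: each quTAG time tag is folded modulo the 1 µs pulse period given by the sync signal, binned at roughly the ~10 ps jitter scale, and divided by the acquisition time to obtain counts/s. The sketch below illustrates this folding on synthetic tags (a Gaussian 70 ps-FWHM pulse over a flat ~200 counts/s dark background); the random data and variable names are placeholders, not the actual acquisition code, with the bin width, period, and rates taken from the text where available.

```python
import numpy as np

PERIOD_PS = 1_000_000   # 1 MHz repetition rate -> one 1 us period, in picoseconds
BIN_PS = 10             # bin width on the order of the ~10 ps system jitter
ACQ_S = 30              # 30 s of raw time tags per matrix element, as in the text

rng = np.random.default_rng(0)
# Synthetic stand-in for quTAG time tags: a 70 ps-FWHM pulse arriving 700 ns into
# the period, plus a flat ~200 counts/s dark background (FWHM = 2.355 * sigma).
tags = np.concatenate([
    rng.normal(700_000.0, 70.0 / 2.355, size=200_000),
    rng.uniform(0.0, PERIOD_PS, size=200 * ACQ_S),
])

folded = np.mod(tags, PERIOD_PS)             # fold every tag onto one period
counts, edges = np.histogram(folded, bins=PERIOD_PS // BIN_PS, range=(0, PERIOD_PS))
rate = counts / ACQ_S                        # counts/s per 10 ps bin
print(rate.max(), rate[edges[:-1] < 600_000].mean())   # pulse peak vs. background
```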
Any path length mismatch (either in fiber or RF cable) induced a relative time shift for pulses in different channels, which was corrected during post-processing. To launch the x-polarization state into the system with the polarization switch shown in Fig. 1(c), we first adjusted the PC manually until reading the maximum power (-39.6 dBm) on the power meter. Then we scanned all 45 channels of switch 1 and switch 2 sequentially, with the received x and y polarization states recorded simultaneously under x-polarization launch. Next, we adjusted the PC until reading the minimum power (-70.0 dBm) to launch the y-polarization state and repeated the above mode-diverse measurement. In this way, we made sure that the x- and y-polarization launch states were orthogonal. The two optical switches and the quTAG were controlled by a single computer to record data. For every cell of the 45×45 matrix, 30 s of raw data was saved to accumulate enough data points to plot the histogram. Measuring a complete dual-polarization transfer matrix took roughly 34 hours. We measured the 45×45 transfer matrix in a back-to-back (BTB) experiment first (splicing the pigtail fibers of the two MPLCs directly [20]). After taking data for the entire transfer matrix of the BTB configuration, we inserted a spool of commercial MMF and repeated the measurement. The variable optical attenuator (VOA) helped to attenuate the light to the single-photon level. It had an attenuation of 33 dB for the BTB measurement and was adjusted to 30 dB after inserting the MMF. In this way, the SNSPD reads a similar photon count rate for the BTB and MMF measurements. In the BTB configuration, when both switches were switched to channel 1 (HG00 mode to HG00 mode coupling), the total power reaching the SNSPD was estimated to be -95 dBm. Since such a low power cannot be measured directly with a normal photodetector, we set the VOA to 0 dB attenuation first and measured the power before the second PBS to be -62 dBm. The total insertion loss of the two switches and the MPLCs was measured to be 18 dB. Other losses in the link are from the 50:50 fiber coupler, connectors, and some single mode fiber (SMF) connecting components between two labs. The SNSPD was cooled down by helium to 2.6 K, and the bias current of each individual detector was set just below the critical current of the nanowires, leading to a dark count rate of about 200 counts/s, corresponding to a sensitivity of -136 dBm. The detector transitions from a superconducting state to a resistive state upon absorption of a single photon. In our experiment, all lights in the lab were turned off when calibrating the dark count rate, and most of the setup, including the MMF fiber spool and the MPLCs, was covered with aluminum foil to reduce the influence of surrounding light. Covering the setup is important to reach a low dark count rate because background light may couple into the fiber due to blackbody radiation at room temperature [21]. This SNSPD system has 4 independent channels, of which channel 3 and channel 4 were used in the experiment to register the received light with x and y polarizations separately. At 1550 nm, channels 3 and 4 have system detection efficiencies of 81% and 85%, respectively. The calibrated timing jitters of channels 3 and 4 are 19.0 ps and 13.2 ps FWHM, respectively. The measured relaxation time (full recovery time after each detection event) is about 100 ns, limiting the maximum count rate to roughly 10 MHz.
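The quoted sensitivity figure can be sanity-checked by converting average optical power to photon flux at the 1545 nm signal wavelength: -136 dBm corresponds to roughly 2 × 10² photons per second, exactly the scale of the ~200 counts/s dark count rate, and the -95 dBm reaching the detector in the BTB case corresponds to a few Mcps, safely below the ~10 MHz maximum count rate. A minimal conversion helper:

```python
import scipy.constants as const

def photon_rate(power_dbm, wavelength_m=1545e-9):
    # Convert an average optical power in dBm to a photon flux (photons/s)
    # at the given wavelength: rate = P / (h * c / lambda).
    power_watt = 1e-3 * 10.0 ** (power_dbm / 10.0)
    return power_watt * wavelength_m / (const.h * const.c)

print(photon_rate(-136.0))  # ~2e2 photons/s, matching the ~200 counts/s dark rate
print(photon_rate(-95.0))   # ~2.5e6 photons/s for the BTB channel-1 case, < 10 MHz
```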
The detection speed, therefore, is much lower than that of the regular balanced photodetectors (BPDs) commonly used in coherent optical communications, which can have bandwidths up to 100 GHz. The quTAG is a fast time-to-digital converter and time tagging device for time-correlated single-photon counting (TCSPC). The quTAG has a measured single-channel timing jitter of 5.94 ps RMS and 14.00 ps FWHM. A spool of commercially available OM3 graded-index multimode fiber (GI-MMF, OFS LaserWave FLEX 300) was measured in the experiment. The fiber has a total length of 8851 m, and it was believed to support 9 mode groups (45 spatial modes) at 1550 nm. In this experiment, however, we found it supports 10 mode groups (55 spatial modes), and the last two mode groups behave differently from the first 8 mode groups, as shown in Fig. 7(b). The same fiber was used in a mode-division multiplexed transmission experiment in 2014 by Ryf et al. [22]. According to [22], this OM3 MMF has a measured attenuation loss of 0.34 dB/km for the LP01 and LP11 modes at 1550 nm, and a chromatic dispersion parameter of ∼20-24 ps/nm/km. However, accurate measurements of the distributed crosstalk, DMGD, and the cladding modes of this kind of fiber remained elusive, as they were beyond the sensitivity and temporal resolution of previously used detectors. We use two MPLCs as the mode multiplexer/demultiplexer. The 45-mode MPLC is a free-space device consisting of a collimated linear SMF array, a phase mask plane, a dielectric mirror, and a collimated GI-MMF at the output, as shown in Fig. 2(a). The device maps every input Gaussian spot from the SMF array into a specific Hermite-Gaussian (HG) mode through a unitary transformation [20,23]. As a reciprocal device, it can be used either as a mode multiplexer (input through the SMF end) or a mode demultiplexer (input through the GI-MMF end). The performance of the two MPLCs is qualitatively shown by imaging the free-space output modes using an InGaAs camera, as shown in Fig. 2(b) and (c). The HG modes in Fig. 2(c) are not as clear as those in Fig. 2(b), indicating that the second MPLC, used as the mode demultiplexer, has a compromised performance compared to the first one. This was due to imperfect packaging after assembly, leading to a relatively large insertion loss (IL) and mode dependent loss (MDL). The 45 HG modes are grouped into 9 mode groups, labeled in the order shown in TABLE I. Modes within the same group are degenerate (having nearly equal propagation constants, and thus similar phase and group velocities) and tend to couple strongly during propagation in a fiber. In fact, the 3-m pigtail fibers at the output end of the two MPLCs are already long enough to induce mode coupling within the same mode group [20]. Fig. 3 shows the histograms of the 6×6 matrix on the up-left corner of the entire 45×45 transfer matrix for the BTB measurement; benefiting from the single-photon sensitivity, the dynamic range of the BTB measurement is as large as 60 dB. These subplots are grouped into different blocks, with cells of pink background color representing mode coupling within the same mode group and yellow cells representing mode coupling among different mode groups. Similar to Fig. 3, Fig. 4 shows the histograms of the 6×6 matrix on the up-left corner of the entire 45×45 transfer matrix with the MMF inserted. Distributed mode coupling and DMGD can be observed in the zoom-in view in the time window between 691 and 705 ns. Modes within the same group (e.g., HG01 and HG10) arrive at the same time due to degeneracy and thus have overlapping peaks in time. The crosstalk plateau between individual mode peaks is a clear sign of mode mixing.
The plateau between the peaks is due to photons that experienced coupling from one mode to another somewhere along the fiber during propagation, with arrival times between those of the photons that did not experience mode coupling. As in Fig. 3, we use pink and yellow background colors to represent mode coupling within the same mode group and among different mode groups, respectively. Results and Discussion In Fig. 5, all polarizations and mode coupling curves within the same colored block in Fig. 4 are grouped in one subplot. In this way we group the original 45×45 matrix into a 9×9 matrix, considering the degeneracy of modes in the same group. Fig. 6 shows some selected cells with an enlarged view. In the measurements with the MMF, the dynamic range is about 35∼45 dB, limited by a pedestal at the foot of the pulse. As shown in Fig. 5 and Fig. 6, the value of the pedestal changes for different elements in the 45×45 matrix. The general trend is that diagonal elements, which have larger count rates, also possess a higher pedestal level, such as in Fig. 6(a) and (b). For some off-diagonal elements, e.g., Fig. 6(c) and (d), the pedestal is below the noise floor determined by the dark count rate (∼200 counts/s, -136 dBm). Histograms of the BTB data in Fig. 3 lack the pedestal, indicating that the pedestal is not due to the limited extinction ratio of the DML [19], but most likely to the cladding modes of the MMF. The silica cladding of the fiber can support hundreds of cladding modes, and bending or other environmental perturbations may violate the total internal reflection condition and cause mode energy to leave the core mode and couple to cladding modes [24]. The coupling back and forth between core modes and cladding modes during propagation, and the coupling among different coils of the fiber spool, are responsible for the presence of cladding modes. For some cells (e.g. 41-41 in Fig. 6(b)) at the bottom-right corner of the 45×45 MMF matrix (see supplementary figure 2), a "bump" appears at about 690 ns, 3 ns to the left of mode group 1. Further measurements confirm the "bump" is actually energy coupled to mode group 10, as discussed below in Fig. 7(b). Fig. 7(a) plots all curves of the 45×45 cells of the MMF measurement together, with all curves well aligned in time. However, only 8 mode group peaks, instead of 9, are distinguishable in the plot. Note that the 8 small peaks following the 8 main peaks are due to stray reflection in the system, as discussed in the supplementary part. To investigate the reason, we measured the same MMF with a swept-wavelength interferometer from 1450 nm to 1650 nm [25], shown as a spectrogram in Fig. 7(b). Ten bright lines represent the 10 mode groups supported by the MMF in this wavelength range. The blue dashed line at 1545 nm is the wavelength used in the SNSPD measurement. Clearly, mode group 9 (G9) lies in the crosstalk plateau between G6 and G7 at 1545 nm. This explains why only 8 mode group peaks appear in Fig. 7(a). Note also that mode group 10 (G10) arises as a "bump" about 3 ns to the left of mode group 1 (G1). The cut-off wavelength of G10 is about 1555 nm. Lines corresponding to higher order mode groups are broader than those of lower order mode groups due to the existence of more vector modes with slightly different propagation constants in those mode groups [25]. The slope of the lines shows chromatic dispersion, with the modes in G9 and G10 having a dispersion sign opposite to that of the modes in groups G1 to G8 because they are less confined in the fiber core.
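The mode-group arithmetic behind the 45- and 55-mode counts quoted above follows from the standard Hermite-Gaussian grouping (an assumption consistent with the 45-mode MPLC layout): group g collects the g degenerate modes HG_mn with m + n = g - 1, so g groups contain g(g+1)/2 modes in total. A two-line check:

```python
# Group g collects the HG_mn modes with m + n = g - 1 (g modes per group);
# the cumulative count over g groups is g * (g + 1) / 2.
for groups in (9, 10):
    modes = [(m, g - 1 - m) for g in range(1, groups + 1) for m in range(g)]
    print(groups, "mode groups ->", len(modes), "spatial modes")  # 45, then 55
```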
The DMGD values can be easily read out from the impulse response graph in Fig. 7(a) or the spectrogram in Fig. 7(b). The relative delays of G2, G3, G4, G5, G6, G7 and G8 with respect to G1 are 1.97 ns, 3.35 ns, 4.73 ns, 6.31 ns, 7.88 ns, 9.07 ns and 10.06 ns, respectively, close to the DMGD numbers reported in a previous publication using the same fiber [22]. Mode coupling can occur both in the transmission fiber and in the mode (de)multiplexers. The distributed mode coupling, however, only reflects the coupling properties of the MMF. To study the distributed coupling strength in the commercial MMF, we analyze the mode crosstalk plateau between peaks in Fig. 5. The plateau values in counts/s, averaged over all polarizations and degenerate modes within each cell for the first 8 mode groups, are shown in Fig. 8(b), and the normalized distributed crosstalk coefficients are shown in Fig. 8(c). The details of this normalization are described below. Fig. 8(a) schematically shows an impulse response histogram with multiple mode peaks and the distributed crosstalk plateau in between. The y-axis of the histogram has units of counts/s, corresponding to optical power. Thus the integral over time gives total photon counts, corresponding to the total energy in a specific time window. The red box in Fig. 8(a) indicates the integration area for the energy in the crosstalk plateau, E_plateau. The DMGD values measured from Fig. 7 are used to determine the start and end times of the plateau when calculating the integration area. If we denote the x-x polarization curve in cell (i, j) of the original 45×45 matrix as P_{ij,xx}(t), the total energy of the curve is given by
\[
E_{ij,xx} = \int_{t_1}^{t_2} P_{ij,xx}(t)\,dt, \qquad (1)
\]
with i = 1, 2, ..., 45 and j = 1, 2, ..., 45, where t_1 and t_2 are the start and end times of the selected time window, respectively. Then we average over all four polarization curves in each cell, and sum up all columns in each row of the matrix to get the pulse energy of each input channel with a specific received polarization state. Finally, we average the pulse energies of the 45 input channels and use that number to normalize the distributed crosstalk plateau. We use t_1 = 677 ns and t_2 = 717 ns, as shown in Fig. 6, which encompasses all relevant counts resulting from the pulse. The normalized distributed crosstalk coefficients are shown in Fig. 8(c). The diagonal structure of Fig. 8(b) and (c) indicates that adjacent mode groups tend to couple more strongly than groups that are far away. The complete 45×45 mode transfer matrices of the BTB and MMF measurements, averaged over all polarizations, are shown in Fig. 9(a) and (b), respectively. For Fig. 9(a), photon count rates between 16 ns and 30 ns are integrated using Eq. (1), while for Fig. 9(b), photon count rates between 691 ns and 705 ns are integrated. The result of this count summation in a 14 ns time window represents the energy transfer matrix of the entire system, giving mode coupling information for both the MPLC mode (de)multiplexers and the MMF. Fig. 9(b) is smoother than Fig. 9(a) due to the mode mixing and modal dispersion in the 8.85 km MMF. Note that the mode mixing in Fig. 9(a) is partly due to the short pigtail fiber at the output of each MPLC, and partly due to the imperfection of the packaged MPLC mode demultiplexer, as shown in Fig. 2(c).
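A sketch of the normalization just described may help; it follows Eq. (1) and the averaging steps in the text, with hypothetical array names and shapes (45 × 45 mode pairs, four polarization curves per cell, a common time axis). In the actual analysis the plateau window differs per mode-group pair according to the measured DMGDs; a single fixed window is used below only to keep the sketch short.

```python
import numpy as np

T1, T2 = 677e-9, 717e-9      # full pulse window used for normalization (Fig. 6)

def window_counts(curve, t, start, stop):
    # Eq. (1): integrating a counts/s curve over a time window yields total
    # photon counts, i.e. the pulse energy in that window.
    m = (t >= start) & (t <= stop)
    return np.trapz(curve[m], t[m])

def normalized_plateau(hists, t, plateau_start, plateau_stop):
    # hists: hypothetical array of shape (45, 45, 4, n_bins) holding the xx, xy,
    # yx, yy histogram curves of every input/output mode pair on time axis t.
    pol_avg = hists.mean(axis=2)                  # average the four polarizations
    total = np.array([[window_counts(pol_avg[i, j], t, T1, T2)
                       for j in range(45)] for i in range(45)])
    norm = total.sum(axis=1).mean()               # mean pulse energy per input
    plateau = np.array([[window_counts(pol_avg[i, j], t, plateau_start, plateau_stop)
                         for j in range(45)] for i in range(45)])
    return plateau / norm
```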
High sensitivity (-136 dBm) and high timing accuracy (∼ 10 ps) give us unprecedented capability to accurately measure very weak modal dynamics, including distributed mode coupling, differential mode group delay, and cladding modes of multimode fibers. This research is important for applications like quantum key distribution in multimode fibers.
Hierarchical Structure and Catalytic Activity of Flower-Like CeO2 Spheres Prepared Via a Hydrothermal Method Hierarchical CeO2 particles were synthesized by a hydrothermal method based on the reaction between CeCl3·7H2O and PVP at 270 °C. The flower-like CeO2 with an average diameter of about 1 μm is composed of compact nanosheets with thicknesses of about 15 nm and have a surface area of 36.8 m2/g, a large pore volume of 0.109 cm3/g, and a narrow pore size distribution (14.9 nm in diameter). The possible formation mechanism of the hierarchical CeO2 nanoparticles has been illustrated. The 3D hierarchical structured CeO2 exhibited a higher catalytic activity toward CO oxidation compared with commercial CeO2. Introduction CeO 2 is playing important roles in various fields such as promoters for three-way catalysts [1], fuel cells [2], hydrogen storage materials [3], and oxygen sensors [4]. Although the utilization of ceria is based on its intrinsic chemical properties, the structures and morphologies of CeO 2 also have a significant influence on its properties and applications [5,6]. So far, CeO 2 with different sizes and morphologies such as nanorods [7], nanospheres [8], nanotubes [9], and nanocubes [10] have been synthesized in the last decade. It was proved that CeO 2 nanoparticles with different sizes and morphologies have better properties than general CeO 2 does. CeO 2 nanoparticles afford more active sites because of their high specific surface areas and novel structures [11]. Preparation of CeO 2 with different structures and morphologies provides the basic groundwork for its advanced applications. Hierarchical structured CeO 2 with unique properties and novel functionalities has attracted the attention of many researchers in recent years. Zhong et al. synthesized a three-dimensional (3D) flower-like CeO 2 micro/nanocomposite structure using cerium chloride as a reactant by a simple and economical route based on an ethylene glycol-mediated process [12]. Li et al. synthesized mesoporous Ce(OH)CO 3 microspheres with flower-like 3D hierarchical structures via different hydrothermal systems, including glucose/acylic acid, fructose/acrylic acid, glucose/propanoic acid, and glucose/n-butylamine systems. Calcination of the Ce(OH)CO 3 microspheres yielded mesoporous CeO 2 microspheres with the same flower-like morphology as that of Ce(OH)CO 3 microspheres [13]. Ouyang et al. reported a facile electrochemical method to prepare hierarchical porous CeO 2 nanospheres and applied them as highly efficient absorbents to remove organic dyes [14]. However, 3D hierarchical structured CeO 2 is commonly synthesized with relatively miscellaneous process, which limited the extensive usage of the prepared ceria. In this paper, we report a facile one-pot hydrothermal route to synthesize 3D hierarchical structured CeO 2 . The present hydrothermal route is low cost and can be easily scaled-up. The fabricated 3D hierarchical structured CeO 2 could be used as a catalyst for CO oxidation and a support for noble metal catalysts. Preparation of Hierarchical Structured CeO 2 Cerium (III) chloride heptahydrate (CeCl 3 ·7H 2 O), polyvinyl pyrrolidone (PVP), and ethanol were purchased from Beijing Yili Chemical Reagent Co. Ltd. (Beijing, China). All materials were used without any further purification. In a typical synthetic procedure of the hierarchical structured CeO 2 , 0.5 mmol CeCl 3 ·7H 2 O was dissolved in 30 mL deionized water, and then 1 mmol PVP and 20 mL deionized water were added to the solution. 
After 15 min of magnetic stirring, the homogeneous solution was transferred into the Teflon vessel of a hydrothermal bomb, which was then placed in an oven and maintained at 270 °C for 24 h. Then, the solution was cooled to room temperature, and the products were separated by centrifugation and washed with absolute ethanol and distilled water. Characterization Techniques The crystal phases of the products were characterized by X-ray diffraction (XRD) using a Philips X'pert PRO analyzer (Philips, Amsterdam, The Netherlands) equipped with a Cu Kα radiation source (λ = 0.154187 nm) and operated at an X-ray tube voltage and current of 40 kV and 30 mA, respectively. The morphology of the products was examined by scanning electron microscopy (SEM) using a JEOL JSM-6700F system (JEOL, Tokyo, Japan) and transmission electron microscopy (TEM) using a JEM-2100 system (JEOL, Tokyo, Japan) operated at 200 kV. Surface composition was determined by X-ray photoelectron spectroscopy (XPS) using an ESCALab220i-XL electron spectrometer (VG Scientific, Waltham, MA, USA) with monochromatic Al Kα radiation. Nitrogen adsorption-desorption isotherms were analyzed using an automatic adsorption system (Autosorb-1, Quantachrome, Boynton Beach, FL, USA) at the temperature of liquid nitrogen. 3D Hierarchical Structured CeO2 Prepared via Hydrothermal Method The powder XRD pattern of the as-prepared sample is shown in Figure 1. As can be seen, the as-prepared sample can be indexed to the cubic phase of CeO2 (JCPDS No. 34-0394). The average crystallite size calculated by the Scherrer equation is 26.8 nm. The SEM images of the as-synthesized CeO2 particles are shown in Figure 2. It can be seen from Figure 2a that the product consists of uniform flower-like spheres with an average diameter of about 1 μm, built from compact nanosheets with thicknesses of about 15 nm. The nitrogen adsorption and desorption isotherms of the as-prepared samples and the corresponding pore size distribution curve calculated by the Barret-Joyner-Halenda (BJH) method are shown in Figure 3. The nitrogen adsorption and desorption isotherms exhibit a slim hysteresis loop at a relative pressure of >0.2, which is the type-II curve. The calculated Brunauer-Emmett-Teller (BET) surface area of the as-synthesized CeO2 is about 36.8 m2/g. The average pore size calculated by the BJH method is 14.9 nm.
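The crystallite size quoted above comes from the Scherrer equation, D = Kλ/(β cos θ). A minimal sketch, using the Cu Kα wavelength given in the text and hypothetical peak parameters (the paper does not list the FWHM it used; a width of about 0.31° on the strong (111) reflection near 2θ ≈ 28.5° reproduces a size close to the reported 26.8 nm):

```python
import numpy as np

WAVELENGTH_NM = 0.154187   # Cu K-alpha wavelength used for the XRD measurement

def scherrer_size_nm(fwhm_deg, two_theta_deg, K=0.9):
    # Scherrer equation D = K * lambda / (beta * cos(theta)), with beta the peak
    # FWHM in radians and theta half of the diffraction angle 2-theta.
    beta = np.deg2rad(fwhm_deg)
    theta = np.deg2rad(two_theta_deg / 2.0)
    return K * WAVELENGTH_NM / (beta * np.cos(theta))

# Hypothetical parameters for the CeO2 (111) reflection near 2-theta = 28.5 deg:
print(scherrer_size_nm(fwhm_deg=0.31, two_theta_deg=28.5))   # ~26.5 nm
```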
Effects of Synthesis Conditions on the Formation of 3D Hierarchical Structured CeO2 and the Possible Formation Mechanism To investigate the evolution of flower-like CeO2 particles, the samples obtained after different reaction times were characterized by SEM (Figure 4). The reaction temperature and the dosages of CeCl3·7H2O and PVP were kept constant (270 °C, 0.01 M, and 0.02 M, respectively). As we can see in Figure 4a, spherical particles were obtained in the early stage. After 12 h of hydrothermal treatment, the sample (Figure 4b) evolved into spheres on which many scrappy grains grew. We speculate that PVP at the surface of the spheres decomposed gradually at such a high temperature and pressure, and simultaneously, tiny nanoparticles on the surface of the spheres began to grow into nanosheets. As seen in Figure 4c, all spheres have transformed into flower-like CeO2 particles. Based on these observations, the possible formation mechanism of the 3D hierarchical structured CeO2 can be speculated. The schematic mechanism for the 3D hierarchical structured CeO2 obtained during different hydrothermal stages is illustrated in Figure 5. At an early stage, Ce3+ ions were oxidized gradually by O2 present in the aqueous solution to form small CeO2 nanocrystals. Then, the small CeO2 nanocrystals interacted with PVP and self-assembled as building blocks into spherical particles. As the temperature of the hydrothermal system increased, the PVP at the surface of the spherical particles began to decompose and small nanoparticles began to grow into nanosheets via Ostwald ripening. Due to Ostwald ripening, more nanosheets were formed, and after 24 h of hydrothermal treatment, the PVP completely decomposed and 3D hierarchical structured CeO2 particles were formed.
Catalytic Performance of 3D Hierarchical Structured CeO2 for CO Combustion Catalytic application is an important direction for CeO2 research because the oxygen storage capacity of CeO2 is associated with its ability to undergo a facile conversion between Ce(III) and Ce(IV). Therefore, the catalytic activity of the as-prepared 3D hierarchical structured CeO2 was tested by CO oxidation. As shown in Figure 6, the 3D hierarchical structured CeO2 exhibits better activity toward CO oxidation than commercial CeO2 (purchased from Beijing Yili Chemical Reagent Co. Ltd., Beijing, China) does. The 50% conversion temperature of the 3D hierarchical structured CeO2 is about 320 °C, while that of the commercial CeO2 is higher than 380 °C. The sample was further characterized by XPS, and the Ce 3d electron core level spectra are shown in Figure 7. The four main 3d5/2 features at 882.7, 885.2, 888.5, and 898.3 eV correspond to the V, V', V'', and V''' components, respectively. The 3d3/2 features at 901.3, 903.4, 907.3, and 916.9 eV correspond to the U, U', U'', and U''' components [15], respectively. The signals V' and U' are characteristic of Ce(III) [16]. According to the ratio of the area of the Ce3+ peaks to the whole peak area in the Ce 3d region, the amount of Ce3+ in the 3D hierarchical structured CeO2 is 51.8%, while that in commercial CeO2 is 13.2%. The 3D hierarchical structured CeO2 has a much higher Ce3+ concentration, which implies a much higher concentration of oxygen defects compared with commercial CeO2. A large amount of oxygen defects enhances the conversion between Ce(III) and Ce(IV), thereby supplying more reactive oxygen. Thus, the special structure of the 3D hierarchical structured CeO2 provides more active sites for CO oxidation.
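The Ce3+ percentage is obtained from the fitted Ce 3d peak areas as the share of the Ce(III) components V' and U' in the total Ce 3d area. A sketch with made-up areas, chosen only so that the ratio lands on the reported 51.8%:

```python
def ce3_percentage(areas):
    # Share of the Ce(III) signatures (V' and U') in the total fitted Ce 3d area.
    return 100.0 * (areas["V'"] + areas["U'"]) / sum(areas.values())

# Hypothetical fitted peak areas (arbitrary units), for illustration only:
areas = {"V": 90, "V'": 270, "V''": 60, "V'''": 50,
         "U": 60, "U'": 248, "U''": 40, "U'''": 182}
print(ce3_percentage(areas))   # -> 51.8
```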
Conclusions In summary, a simple and economical hydrothermal route was presented to synthesize 3D hierarchical structured CeO2 using CeCl3·7H2O and PVP. The 3D hierarchical structured CeO2 has a beautiful flower-like structure, which consists of many nanosheets. A two-stage growth process was identified for the formation of 3D hierarchical structured CeO2, and Ostwald ripening was found to play an important role in the transformation of the nanoparticles into nanosheets. The 3D hierarchical structured CeO2 exhibited a higher catalytic activity toward CO oxidation compared with commercial CeO2.
In-vitro Antioxidant Activities of the Ethanolic Extracts of Some Allantoin-Containing Plants.

The in-vitro antioxidant properties of ethanol extracts of allantoin-containing plants were investigated in this study. Samples of the allantoin-containing plants Plantago lanceolata, Plantago major, Robinia pseudoacacia, Platanus orientalis and Aesculus hippocastanum were tested at different concentrations. The antioxidant activities of the plant samples were analysed by the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging method, the cupric reducing antioxidant capacity (CUPRAC) assay, the reducing power assay and the β-carotene bleaching method. Plantago major showed the highest antioxidant capacity of the plant extracts in all of the in-vitro assays, with a DPPH radical scavenging activity of 90.25%, a CUPRAC value of 1.789, a reducing power (FRAP) value of 1.321 and a β-carotene bleaching activity of 78.01% at 1 mg/mL. The lowest antioxidant activity was determined in Robinia pseudoacacia. In conclusion, allantoin shows antioxidant properties and has a positive effect on total antioxidant capacity.

Introduction

Uric acid is an enzymatic end product of endogenous and dietary purine nucleotide metabolism and a potent antioxidant and scavenger of singlet oxygen and radicals in humans. Uric acid is converted to allantoin by enzymatic and electrochemical oxidation in-vitro and in-vivo. During increased oxidative stress, reactive oxygen species (ROS) can contribute to the formation of allantoin from uric acid. Allantoin is one of a number of uric acid oxidation products. Oxidation of urate to allantoin implies that urate is a scavenger of ROS. Allantoin, the predominant product of free radical-induced oxidation of uric acid, is efficiently excreted in the urine and has potential as a biomarker of oxidative stress. Allantoin is the final product of purine catabolism; its chemical formula is C4H6N4O3 and it is also called 5-ureidohydantoin or glyoxyldiureide. It contains high levels of urea. Allantoin is a pharmacologically active compound (1-4).

Allantoin is active in skin-soothing and in the rapid regeneration of skin cells. It removes corneocytes by loosening the intercellular cement or the desmosomes (protein bridges) that maintain the adhesion of corneocytes to each other. It then exfoliates dry and damaged cells and boosts the radiant appearance of the skin, whose surface becomes smoother and softer. Owing to these properties, allantoin has been used for many years in the cosmetic industry in several forms (e.g. lotions, creams, suntan products, shampoos, lipsticks, and various aerosol preparations), as well as in topical pharmaceutical preparations for the treatment of skin diseases (5). Serum allantoin and/or the allantoin/uric acid ratio is also elevated in various chronic diseases and has been suggested to be a biomarker for superoxide anion-associated oxidative stress.

Allantoin is significant in nitrogen metabolism for plant growth and development (6). It is a common constituent of plants, being a component of the pathway of purine catabolism (7). The final two reactions of its production, the conversion of hypoxanthine to xanthine and of the latter to uric acid, are catalysed by the enzyme xanthine oxidoreductase, which can adopt two inter-convertible forms, namely xanthine dehydrogenase and xanthine oxidase.
The latter uses molecular oxygen as the electron acceptor and generates superoxide anion and other reactive oxygen products. The role of uric acid in conditions associated with oxidative stress is not entirely clear. Evidence, mainly based on epidemiological studies, suggests that increased serum levels of uric acid are a risk factor for some diseases in which oxidative stress plays an important pathophysiological role. Also, allopurinol, a xanthine oxidoreductase inhibitor that lowers serum levels of uric acid, exerts protective effects in situations associated with oxidative stress. There is increasing experimental and clinical evidence showing that uric acid has an important role in-vivo as an antioxidant. Unlike many antioxidants, the reaction of uric acid with an oxidant results in its stepwise degradation into a number of end products, and uric acid cannot be renewed once degraded (1-4).

Generation of free radicals and ROS causes oxidative stress. Overproduction of ROS by endogenous or external sources, such as smoking, pollutants, inflammation, radiation, organic solvents or pesticides, causes oxidative stress in humans (8). Oxidative stress plays an important role in chronic diseases, age-related degenerative diseases, heart disease, cancer and the aging process (9). The endogenous antioxidant system, including superoxide dismutase and catalase, may eliminate free radicals and hydrogen peroxide. Antioxidants can be found in a wide variety of fruits and vegetables (10). Antioxidants have been shown in several scientific studies to reduce the risk of chronic diseases, including cancer and cardiovascular disease (8,9,11). Allantoin may have antioxidant properties (12).

Five plants that contain different levels of allantoin, Plantago lanceolata, Plantago major, Robinia pseudoacacia, Platanus orientalis and Aesculus hippocastanum, were selected. The aim of the present study is to investigate the antioxidant capacities of the ethanolic extracts of some allantoin-containing plants in Turkey.

Plant extracts

The plants were obtained from the Campus of Gaziantep University. The collected plants were dried at room temperature in a well-ventilated room and powdered with a blender to achieve a mean particle size of less than 2 mm. Plant samples of 30 g were weighed and placed in a Soxhlet extractor connected to a 500 mL flask containing 250 mL of ethanol. The extraction was conducted at the boiling temperature of ethanol for 6 h. After extraction, the solvent was evaporated at its boiling point until the ethanol was completely removed, so that the extract weight could be determined accurately.

DPPH free radical scavenging activity assay

The free radical-scavenging activity of the plant extracts was measured using the method described by Brand-Williams et al. (1995), with some modifications (13). A 0.06 mM solution of DPPH was prepared in methanol. Ethanol solutions of the sample extracts at various concentrations (0.2-1 mg/mL) were added to 2.5 mL of the 0.06 mM methanolic solution of DPPH and allowed to stand for 30 min at 25°C. The absorbance of the samples was measured at 517 nm against blank samples. A 0.1 mM solution of DPPH in methanol was used as the control, whereas ascorbic acid was used as the reference standard. All tests were performed in triplicate. A lower absorbance of the reaction mixture indicates a higher DPPH free radical scavenging activity. A standard curve was prepared using different concentrations of DPPH.
The percentage of DPPH decoloration of the samples was calculated according to the formula:

Antiradical activity (%) = [(Ablank − Asample) / Ablank] × 100

β-Carotene bleaching assay

Antioxidant activity was assessed using the β-carotene linoleate model system with a slight modification, according to He, 2012 (14). 2 mg of β-carotene was dissolved in 10 mL of chloroform, and 1 mL of this solution was pipetted into a round-bottomed 250 mL flask containing 40 µL of linoleic acid and 500 µL of Tween-20. After removing the chloroform using a rotary evaporator, 100 mL of distilled water was added slowly to the mixture with vigorous agitation to form a stable emulsion. Then, 3 mL aliquots of the emulsion were transferred into different test tubes containing various concentrations (0.2-1 mg/mL) of the samples and were incubated in a water bath at 50 °C for 2 h. Vitamin C was used as a standard for comparison. As soon as the emulsion was added to each tube, the zero-time absorbance was measured at 470 nm. Antiradical activity was calculated as follows:

Antiradical activity (%) = [(Ablank − Asample) / Ablank] × 100

Cupric reducing antioxidant capacity (CUPRAC)

The CUPRAC method was used according to Apak (2010), with some modifications (15). The CUPRAC method is based on the reduction of copper(II) [Cu(II)] to copper(I) [Cu(I)] by antioxidants. A 10−2 M Cu(II) solution was prepared. 1 mL each of the Cu(II), neocuproine (Nc), and NH4Ac (pH 7) buffer solutions was added to test tubes. 0.5 mL of an ethanol solution of the sample extracts at various concentrations (0.2-1 mg/mL) was added to the tubes. The tubes were left to stand for 30 min. The absorbance at 450 nm (A450) was recorded against a reagent blank. The molar absorptivity of each antioxidant in the CUPRAC method was calculated from the absorbance.

Determination of reducing power

The reducing powers of the extracts of the Plantago lanceolata, Plantago major, Robinia pseudoacacia, Platanus orientalis and Aesculus hippocastanum plant samples were determined according to the method of Oyaizu, 1986 (16), with some modifications. Various concentrations of the plant extracts (10-100 μg/mL) were added to 2.5 mL of 0.2 M phosphate buffer (pH 6.6) and 2.5 mL of 1% potassium ferricyanide solution, and the mixtures were incubated at 50°C for 30 min. After incubation, 2.5 mL of 10% TCA was added to the reaction mixture. The content was centrifuged at 6000 rpm for 10 min. The absorbance of the reaction mixture was then measured at 700 nm. Ascorbic acid (10-100 μg/mL) was used as a positive control. The higher the absorbance of the reaction mixture, the greater the reducing power.

Results

Plants may show various antioxidant properties in different biological systems as a consequence of the presence of various substrates as well as the variable nature of the products generated by the reaction system (17). The antioxidant capacities of the plant extracts were assessed using four common assays, namely the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging method, cupric reducing antioxidant capacity (CUPRAC), the reducing power (FRAP) assay and the β-carotene bleaching method.

DPPH radical scavenging activity

DPPH is a synthetic radical that is commonly used for the in-vitro determination of antiradical activity (18). Results of the DPPH radical scavenging activity at various concentrations of the plant extracts are shown in Table 1.
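As a concrete illustration of the percentage formula used in both the DPPH and β-carotene assays, here is a minimal Python sketch of our own (the absorbance values are hypothetical, not taken from the study):

```python
def antiradical_activity(a_blank: float, a_sample: float) -> float:
    """Percent decoloration: [(A_blank - A_sample) / A_blank] * 100."""
    return (a_blank - a_sample) / a_blank * 100.0

# Hypothetical absorbances at 517 nm for a blank and an extract-treated sample.
print(antiradical_activity(a_blank=0.80, a_sample=0.12))  # -> 85.0 (%)
```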
Cupric reducing antioxidant capacity (CUPRAC)

The ethanol extracts of all of the plants showed the ability to reduce Cu2+ to Cu+. All plant extracts also showed concentration-dependent antioxidant activities. The extract of Plantago major displayed the highest power to reduce Cu2+ to Cu+. The extract of Robinia pseudoacacia showed the lowest activity. The ability to reduce Cu2+ to Cu+ decreased in the order Plantago major > Platanus orientalis > Plantago lanceolata > Aesculus hippocastanum. The obtained data are shown in Table 2.

Reducing power

The results are shown in Table 3. In the ferric reducing power assay, Plantago major had the highest activity and Robinia pseudoacacia displayed the lowest activity. For all plant extracts, the ferric reducing power activity was observed to depend on concentration. The ability to reduce Fe3+ to Fe2+ decreased in the order Plantago major > Platanus orientalis > Plantago lanceolata > Aesculus hippocastanum.

β-Carotene bleaching assay

The results are shown in Table 4. Plantago major had the highest activity and Robinia pseudoacacia showed the lowest activity in this assay. The β-carotene bleaching activities of all plant extracts were found to depend on concentration. The antioxidant activity was observed in the order Plantago major > Platanus orientalis > Plantago lanceolata > Aesculus hippocastanum.

Discussion

In this study, we examined the antioxidant capacities of ethanol extracts of five allantoin-containing plants in-vitro by using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging method, cupric reducing antioxidant capacity (CUPRAC), the reducing power assay and the β-carotene bleaching method. In all assays, Plantago major showed the highest antioxidant effect among the ethanol extracts of the plant samples. Robinia pseudoacacia showed the lowest antioxidant activity. These results are supported by the study of the antioxidant and antimicrobial properties of Plantago major leaves by Stanisavljević et al. (2008) (19).

Table 1. Free radical scavenging activity using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical; columns give the concentration and the rate of scavenging of the DPPH radical (%). All tests were performed in triplicate and the results are shown as the mean of the data.

In the DPPH assay, the stabilization of DPPH free radicals causes a colour change of the reaction solution, which is measured by spectrophotometer to determine the scavenging activity of a tested sample (23). The results obtained with the DPPH assay show that the radical scavenging activity of the examined samples is dose-dependent. All ethanolic plant extracts had DPPH radical scavenging activities. The extract of Plantago major showed the highest DPPH scavenging activity in comparison to the other plant samples. Plantago lanceolata, Platanus orientalis and Aesculus hippocastanum showed lower DPPH inhibition. Robinia pseudoacacia showed the lowest DPPH inhibition. The antioxidant activity of Plantago spp. was reported in a previous study (24). Another study showed, using the DPPH and FRAP methods, that Plantago lanceolata has antioxidant activity (25).

Metal ions can cause lipid peroxidation, which can produce free radicals and lipid peroxides (26). Therefore, metal chelating activity indicates antioxidant and antiradical properties. A decreased absorbance of the reaction mixture indicates a higher metal chelating ability. The CUPRAC assay has been used by many researchers to determine the reducing power of antioxidant compounds (15). This method is based on the reduction of Cu2+ to Cu+ by antioxidants in the presence of neocuproine.

(Tables 2 and 3: all tests were performed in triplicate and the results are shown as the mean of the data.)
In this assay, a higher absorbance indicates a higher cupric ion (Cu2+) reducing power. Our data are supported by a study that determined the antioxidant activity of stem and root extracts of rhubarb (Rheum ribes) by the CUPRAC method (27).

Antioxidant compounds cause the reduction of the ferric (Fe3+) form to the ferrous (Fe2+) form because of their reductive capabilities. A Prussian blue-colored complex is formed by adding FeCl3 to the ferrous (Fe2+) form. Therefore, the reduction can be determined by measuring the formation of Perl's Prussian blue at 700 nm (24). A higher absorbance indicates a higher ferric reducing power. The order of antioxidant activity is Plantago major > Platanus orientalis > Plantago lanceolata > Aesculus hippocastanum. There is no previous study of Plantago major extract with this method. The results of our study are supported by the data obtained for Plantago spp. using the FRAP method (28).

Antioxidants can inhibit free radicals, as assessed by β-carotene bleaching (29). The β-carotene bleaching activity was found to depend on the concentration of the plant extracts. The antioxidant activity is in the order Plantago major > Platanus orientalis > Plantago lanceolata > Aesculus hippocastanum. The β-carotene bleaching method is used to determine the antioxidant activities of plant samples. A previous study showed that the ethanolic extract of Meconopsis quintuplinervia has antioxidant activity, as determined with the β-carotene bleaching method (14).

Conclusions

This study showed that the extracts of allantoin-containing plant samples have antioxidant activities. It is considered that allantoin shows antioxidant properties and affects total antioxidant capacity positively. In addition, the results of this study will shed light on new research in the future and contribute to the scientific literature.
Multicomponent assessment and ginsenoside conversions of Panax quinquefolium L. roots before and after steaming by HPLC-MSn

Background: The structural conversions in ginsenosides induced by steaming, heating, or acidic conditions can improve red ginseng bioactivities significantly. In this paper, the chemical transformations of red American ginseng prepared from fresh Panax quinquefolium L. by steaming were investigated, and the possible mechanisms are discussed.

Methods: A method using reversed-phase high-performance liquid chromatography coupled with linear ion trap mass spectrometry (HPLC-MSn), equipped with an electrospray ionization ion source, was developed for the structural analysis and quantitation of ginsenosides in dried and red American ginseng.

Results: In total, 59 ginsenosides of the protopanaxadiol, protopanaxatriol, oleanane, and ocotillol types were identified in American ginseng before and after the steaming process by matching the molecular weight and/or comparing the MSn fragmentation with that of standards and/or known published compounds, and some of them were determined to disappear or to be newly generated under different steaming times and temperatures. The specific fragments of each aglycone type of ginsenoside, as well as of the aglycone-hydrated and aglycone-dehydrated ones, were determined. The mechanisms were deduced to be hydrolysis, hydration, dehydration, and isomerization of neutral and acidic ginsenosides. Furthermore, the relative peak areas of the detected compounds were calculated based on peak area ratios.

Conclusion: The multicomponent assessment of American ginseng was conducted by HPLC-MSn. The results are expected to enable a holistic evaluation of the processing procedures of red American ginseng and to provide a scientific basis for the usage of American ginseng in prescription.

Introduction

American ginseng (Panax quinquefolium L.) originally grows in southeastern Canada and the northern USA, whereas Panax ginseng Meyer is commonly referred to as ginseng or Korean ginseng. American ginseng and ginseng both belong to the family Araliaceae and the genus Panax, and they present pharmacological activities such as reducing stress, enhancing immune function, and treating several chronic diseases, thereby improving the quality of life [1-4]. Ginseng has a "warm" property based on traditional Chinese medicine theory, while American ginseng has a "cool" property. Therefore, the efficacy, pharmacological effects, and clinical indications of these two similar roots are different, and they are suitable for different physiques and age groups. Ginseng has two types of popular products prepared by drying and steaming processes, named white ginseng and red ginseng, respectively. Compared with white ginseng, the red one seems to show better bioactivity in some cases. Meanwhile, only dried American ginseng is available in the commercial market.

Ginsenosides are the major pharmacologically active constituents of the Panax genus. According to their aglycone skeletons, most are dammarane triterpene-type ginsenosides with protopanaxadiol (PPD) and protopanaxatriol (PPT) aglycones, together with oleanane-type (OLE) and ocotillol-type (OCO) ginsenosides [4,5]. Compared with ginseng, the OCO-type ginsenosides are characteristic of American ginseng. The ginseng roots also contain ginsenosides with a malonyl or acetyl residue attached to the glucose substituent at the C-3 or C-6 position. Ginseng has gained increasing attention in health care; therefore, identification and quantification of the chemical composition of ginseng is necessary for quality, safety, and efficacy control.
Many studies of red ginseng have demonstrated that the ginsenoside structural conversions induced by steaming, heating, or acidic conditions are significantly related to the improvement in biological activities [6-11]. Some information on ginsenosides during American ginseng processing has also been reported [12-14]. Recent studies have demonstrated that ginseng exerts its efficacy through multi-targeted mechanisms rather than through the influence of a single chemical constituent. Hence, monitoring as many chemical components as possible during red American ginseng processing is important. The structural diversity of the ginsenosides is the chemical basis of dried and red American ginseng; these constituents exert different pharmacological activities.

During the past few years, many modern techniques have been developed to determine ginsenosides in ginseng. The most commonly used are HPLC-UV [15], HPLC coupled with an evaporative light scattering detector [16,17], HPLC combined with electrospray ionization tandem mass spectrometry (HPLC-ESI-MSn) [18-23], and ultraperformance liquid chromatography/quadrupole time-of-flight MS [24-27]. The ongoing developments in MS permit high-sensitivity, high-selectivity, high-resolution, and high-throughput analysis of traditional Chinese medicine. However, a systematic comparison of the ginsenosides and their chemical transformations between dried and red American ginseng has not previously been studied and discussed.

In this paper, an HPLC-MSn technique was developed to evaluate the global ginsenosides of dried American ginseng and steam-processed red American ginseng. The 59 main ginsenoside constituents were identified by combining the complementary fragmentation data for structure confirmation, and the eluting sequence is provided. The relative peak areas of the PPD-, PPT-, OCO-, and OLE-type components were calculated and compared. Based on the multicomponent assessment approach, the influence of steaming time and temperature on the composition of ginsenosides was determined. The conversions undergone by each type of ginsenoside during the steaming process were investigated, and the possible mechanisms are discussed. The validated HPLC-MSn method demonstrates that holistic chemical profiling of characteristic chemical components is a reasonable and effective way to assess the quality of American ginseng and to standardize the processing procedures of red American ginseng.

Reference solution preparation

Reference solutions of 15 ginsenosides were prepared individually in 50% methanol-water to a final concentration of 0.1 mg/mL and were diluted with HPLC-grade methanol to the desired concentrations for MSn fragmentation analysis. The individual reference solutions were then combined and diluted to obtain the final concentrations for HPLC-MS retention time analysis.

Plant materials and sample preparation

The 5-year-old fresh Panax quinquefolium L. roots were collected from Fusong, Jilin province. The botanical origin was identified by Prof Shumin Wang, and a specimen was deposited at the Jilin Ginseng Academy. The main roots with diameters measuring 1-2 cm were chosen for the processing experiments (drying and steaming). The dried American ginseng was prepared under sunlight. The red American ginseng was prepared as follows: (1) fresh American ginseng roots were cleaned and allowed to air-dry at room temperature; (2) the roots were placed in an autoclave and steamed at 100 °C or 120 °C for 2 h, 4 h, or 6 h, separately; (3) the steamed roots were cooled and then dried in a ventilated oven at 50 °C for approximately 2-3 d.
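The processing matrix just described can be enumerated programmatically; a small sketch of our own (labels are ours, not the paper's sample codes):

```python
from itertools import product

# Processing conditions compared in this study: one sun-dried control
# plus steamed (red) samples at each temperature/duration combination.
temperatures_c = (100, 120)
durations_h = (2, 4, 6)

samples = ["dried (sun-dried)"] + [
    f"red ({t} degC, {h} h)" for t, h in product(temperatures_c, durations_h)
]
print(len(samples), samples)  # 7 sample groups in total
```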
Furthermore, 100 g of each root sample was powdered using a pulverizer and sieved before extraction to ensure that the analyzed samples were homogeneous and representative. Ultrasonic-assisted extraction of the ginsenosides from the pulverized ginseng root powder (1.0 g) was performed using 80% methanol-water for 30 min. The extraction was repeated thrice with fresh solvent. The extracts were then combined, concentrated under reduced pressure, and re-dissolved in 1 mL of 80% methanol-water. The supernatant was filtered through a 0.45 μm polyvinylidene fluoride syringe filter and used directly for HPLC-MSn analysis. Five replicate samples and blank samples were prepared under the same procedure.

Instrument and conditions

The HPLC system (Ultimate 3000, Dionex), consisting of a quaternary gradient pump, an autosampler, and a thermostatic column compartment, was coupled to a linear trap quadrupole (LTQ) linear ion trap mass spectrometer (Thermo Scientific) and controlled with Xcalibur version 3.0 data system software. A Thermo Scientific Syncronis C18 HPLC column (100 mm × 2.0 mm, 1.7 μm) was used for the separations. The ESI-MSn was operated in negative ion mode. The data were acquired in centroid scan mode with a normal scan rate and an m/z 150.0-2000.0 scan range. According to the standard calibration procedure, the mass scale was calibrated prior to detection. The sheath gas and aux gas were high-purity N2 with flow rates of 40 mL/min and 10 mL/min, respectively. The other parameters were set to achieve the best ion signals: electrospray voltage, 3.0 kV; capillary voltage, 20 V; and capillary temperature, 320 °C. For the MSn analyses, the isolation width was 1.0 Da, and the collision energies ranged from 10% to 30%. The ginsenoside references were analyzed by infusing the individual solutions directly into the mass spectrometer at 5 μL/min using the syringe pump.

The mobile phases were 0.1% formic acid in acetonitrile (Solvent A) and 0.1% formic acid in ultrapure water (Solvent B). The gradient elution program, with a 0.2 mL/min flow rate, was as follows: 0-10 min held at 10% A, 10-40 min linearly increased to 100% A, 40-50 min held at 100% A, 50-60 min returned to 10% A. The temperatures of the autosampler and column were set at 15 °C and 35 °C, respectively, and the injection volume was 5 μL.

Development of the HPLC-MSn method

The LTQ linear ion trap mass spectrometer was applied for the determination. First, each ginsenoside solution was analyzed by direct injection in both positive and negative ion modes to check the appropriate ionization mode. The [M − H]− ion presented a higher intensity than the [M + H]+ ion. Thus, the identification and quantification of the ginsenosides were carried out in the negative ion mode by ESI-MSn and HPLC-MSn. The spray voltage, capillary temperature, S-lens RF level, and flow rates of the sheath gas and aux gas were checked manually to obtain the best experimental conditions. To gain maximum sensitivity and the highest signal intensity, the other MS parameters were also optimized by tuning on each type of ginsenoside (PPD, PPT, OLE, and OCO). After the direct MS analysis, the HPLC was coupled to the mass spectrometer via the column outlet. After several trials, the chromatographic conditions were optimized using the American ginseng extract to ensure appropriate resolution. The composition of the mobile phases considerably affects the transfer efficiency of analytes from the liquid to the gas phase in MS detection.
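To make the gradient program given above explicit, here is a small sketch of our own (not from the paper) that encodes the breakpoints and linearly interpolates the fraction of Solvent A at any time:

```python
# Gradient program from the text, as (time_min, percent_A) breakpoints;
# Solvent A = 0.1% formic acid in acetonitrile, B = 0.1% formic acid in water.
gradient = [(0, 10), (10, 10), (40, 100), (50, 100), (60, 10)]

def percent_A(t: float) -> float:
    """Linearly interpolate %A at time t (min) between the breakpoints."""
    for (t0, a0), (t1, a1) in zip(gradient, gradient[1:]):
        if t0 <= t <= t1:
            return a0 + (a1 - a0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside gradient program")

print(percent_A(25.0))  # midway through the 10-40 min ramp -> 55.0
```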
Formic acid concentrations (0.1%, 0.05%, and 0.01%) were tested; 0.1% formic acid, used in the final mobile phases, gave the best peak shape and improved ionization efficiency, leading to a significant signal enhancement. The gradient elution program was investigated to ensure acceptable separation of adjacent peaks.

ESI-MSn and HPLC-ESI-MS analysis of the four types of ginsenoside standards

The structures of the four types of ginsenosides (PPD, PPT, OCO, and OLE) are shown in Fig. 1. The aglycones formed by dehydration and addition reactions are also presented in Fig. 1. In full-scan MS under both positive and negative ion modes, the molecular weights of the ginsenosides were easily obtained from their quasi-molecular ions. The characteristic fragmentations of each of the four types of ginsenoside standards were investigated and are summarized in Table 1. According to the specific fragments of each type of ginsenoside, the types and sequences of the substituted saccharide chains and the aglycone type were determined. The nomenclature for ginsenoside fragmentation is based on the descriptions given by Domon and Costello [28] and Liu et al. [29]. The saccharide substitution at C-20 is the a chain, while the saccharide substitution at C-3 (PPD) or C-6 (PPT) is the b chain [29]. Ions produced by glycosidic cleavages retaining the charge at the reducing terminus are termed Y and Z, whereas product ions retaining the charge at the non-reducing terminus are termed B and C [28,29]. Ions produced by cross-ring cleavages retaining the charge at the reducing terminus are termed X, and product ions retaining the charge at the non-reducing terminus are termed A, with superscript numbers indicating the two cleaved bonds [28,29]. Taking Rb1 and Re as examples, the fragmentation nomenclature of the PPD- and PPT-type ginsenosides is shown in Scheme 1. For the PPD-type ginsenoside Rb1, the Y1a ion at m/z 945 is generated by the loss of a glucose residue (162 Da). A composite of the 15 ginsenoside standards was analyzed by HPLC-ESI-MS for retention time determination. The molecular weights, fragmentation patterns, and retention times were useful for the identification of ginsenoside structures in complex mixtures.

Ginsenoside profiling of dried and red American ginseng

The ginsenoside profiles of the dried and red American ginseng extracts were analyzed by HPLC-ESI-MSn to determine the retention times, molecular weight information, aglycone types, and saccharide substitution sequences according to the characteristic fragmentations. Fig. 2 shows the HPLC-MS total ion chromatograms (TIC) in the negative ion mode of the ginsenosides of the dried and red American ginseng (100 °C and 120 °C) extracts. Approximately 59 major ginsenosides were investigated, 15 of which were unambiguously identified by comparison of retention times, molecular weights, and specific fragmentations with standards. The other peaks of the TIC were tentatively identified by matching the molecular weights with the reported information on known ginsenosides and were further verified by characteristic fragmentation pathways to provide structural information for the elucidation of the results. The detailed identifications of the components are listed in Table 1. As the ginsenosides presented similar polarities and isomerization, an appropriate chromatographic gradient and extracted ion chromatograms were applied to analyze the overlapping and isomeric peaks. Peaks presenting m/z 475 fragment ions were classified as PPT-type ginsenosides, as shown in Table 1. A total of 17 major PPT-type ginsenosides were identified from the extracts of dried and red American ginseng.
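The neutral-loss arithmetic behind such Y-ion assignments can be sketched as follows (our own illustration using standard nominal residue masses; of the values below, only the Rb1 precursor and the m/z 945 and m/z 459 fragments appear in the text):

```python
# Nominal residue masses (Da) commonly used for ginsenoside neutral losses.
RESIDUE = {"Glc": 162, "Rha": 146, "Ara/Xyl": 132, "H2O": 18}

def y_series(precursor_mz: float, losses: list[str]) -> list[float]:
    """Sequential Y-type fragment m/z values after each neutral loss."""
    mz, series = precursor_mz, []
    for res in losses:
        mz -= RESIDUE[res]
        series.append(mz)
    return series

# Rb1: [M - H]- at m/z 1107, carrying four glucose residues on its two chains;
# stripping them one by one ends at the PPD aglycone fragment, m/z 459.
print(y_series(1107, ["Glc", "Glc", "Glc", "Glc"]))  # [945, 783, 621, 459]
```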
Five OCO-type ginsenosides were detected in the extracts of American ginseng. According to the specific fragment at m/z 491 and the saccharide chain composition and position, their structural characterizations were obtained. Similarly, another five OLE-type ginsenosides were identified (Table 1).

Effects of steaming time and temperature on the ginsenoside composition of red American ginseng

The holistic chemical profiles of dried and red American ginseng were systematically compared by qualitative and quantitative analysis. The differences in ginsenoside composition between dried American ginseng and red American ginseng steamed at 100 °C or 120 °C for 2 h, 4 h, or 6 h are shown in Table 2. The effects of steaming time and temperature on the chemical conversion of the ginsenosides during processing were investigated. As shown in Table 2, the relative peak areas of the 59 ginsenosides were calculated as the ratio of each individual peak area to the total peak area. As the malonyl and acetyl ginsenosides were rather unstable, their relative peak areas decreased sharply to undetectable levels with increasing steaming time or temperature. Compared with dried American ginseng, the relative peak areas of Rb1, Rb2, and related ginsenosides increased rapidly at 100 °C for 2 h, indicating that the malonyl and acetyl ginsenosides degraded to their corresponding neutral ginsenosides. Then, Rb1, Rb2, Rb3, Rc, Rd, Rg1, and Re decreased gradually during steaming. After steaming at 120 °C for 6 h, the Rb1, Rb2, Rb3, Rc, Rd, Rg1, and Re levels were much lower. A number of ginsenoside products increased gradually. Meanwhile, some of the newly formed ginsenosides were identified as 20(R,S)-Rf2, 20(R)-Rh1, 20(R)-Rh2, Rh3, Rh4, Rk1, Rk2, Rk3, 20(R)-Rg2, 20(R)-Rg3, Rg4, Rg5, and Rg6. For the OCO- and OLE-type ginsenosides, pseudoginsenoside F11 and Ro decreased sharply after steaming for 6 h at 120 °C. Correspondingly, the newly converted pseudoginsenoside RT5, Chikusetsusaponin IVa, Zingibroside R1, and Calenduloside E were identified and relatively quantified.

The results indicated that malonyl and acetyl ginsenosides with high molecular weights are characteristic constituents of dried American ginseng. Neutral, high-molecular-weight ginsenosides are the major components of red American ginseng (100 °C), while rare ginsenosides with low molecular weights and lower polarity form the specific composition of red American ginseng (120 °C). Similar results were obtained in the steaming of ginseng and notoginseng. In our work, the rare acetyl ginsenosides (Rs3, Rs4, and Rs5), which were transformed from Rs1 and Rs2, were only detected in red American ginseng (100 °C and 120 °C). Many studies have revealed that the rare ginsenosides in red ginseng enhance its bioactivities. As discussed above, the changes in the PPD, PPT, OCO, and OLE types of ginsenosides in red American ginseng were studied systematically, indicating that steaming time and temperature are significant and influential parameters of the processing procedure. Owing to the chemical complexity of the variation in ginsenoside composition, multiple components should be monitored to critically standardize the conditions and control the quality during red American ginseng processing. Such quantitative multicomponent assessment could help ensure the therapeutic effects of dried and red American ginseng products.
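A minimal sketch of this relative-peak-area bookkeeping (our own illustration; the peak areas below are hypothetical placeholders, not data from Table 2):

```python
# Hypothetical integrated peak areas for a few ginsenosides in one sample.
peak_areas = {"Rb1": 3.2e6, "Rd": 1.1e6, "20(S)-Rg3": 0.4e6, "Re": 2.0e6}

total = sum(peak_areas.values())
# Relative peak area = individual peak area / total peak area of all detected peaks.
relative = {name: area / total for name, area in peak_areas.items()}

for name, frac in sorted(relative.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {frac:.1%}")
```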
3.5. Summary of the steaming-induced ginsenoside conversions of red American ginseng derived from Panax quinquefolium L.

To study the chemical conversions during the processing of American ginseng, the marker ginsenosides were identified and relatively quantified. The results demonstrated that chemical transformations occurred under steaming. The transformation pathways of the PPD, PPT, OCO, and OLE types of ginsenosides are summarized in Scheme 2, and the content changes of each compound are presented using histograms added beside the corresponding structures. In Scheme 2, the main and secondary transformation pathways are shown with arrows of different sizes, and the product ginsenosides with specific structures are highlighted with a gray background. The characteristic transformation mechanisms detected are discussed below.

The original PPD ginsenosides Rb1, Rb2, Rb3, and Rc transformed to Rd by hydrolysis of the glycosyl moiety at the C-20 terminus, so their relative peak areas declined. Rd could be hydrolyzed to 20(S)-Rg3, together with its epimer 20(R)-Rg3, and to its isomer F2 by the loss of a glucose residue at C-20 and C-3, respectively. Furthermore, 20(S,R)-Rg3 produced 20(S,R)-Rh2 through hydrolysis of the glycosyl moiety at the C-3 terminus. The Δ20(21) and Δ20(22) isomers Rk1 and Rg5 were generated from 20(S,R)-Rg3 through dehydration at C-20. Then, 20(S,R)-Rh2, Rk1, and Rg5 could convert to the Δ20(21) and Δ20(22) isomers Rk2 and Rh3.

The results demonstrated that the transformation pathways of the PPT ginsenosides were similar to those of the PPD ginsenosides, as shown by the characteristic conversions in Scheme 2(A) and 2(B). The observed products involved hydrolysis of the glycosyl moiety at the C-20 terminus to form 20(S,R)-Rg2 and at C-6 to form 20(S,R)-Rh1, and Δ20(21) or Δ20(22) dehydration at C-20 to yield Rg6 and Rg4. There is also a transformation specific to the PPT ginsenosides, hydration at C-24 and C-25, which gave rise to 20(S,R)-Rf2. In published reports, the transformation mechanisms and pathways of PPD and PPT ginsenosides during the steaming of fresh ginseng have been described [8,9,11,27]. These previous results partially agree with our findings on the related transformation mechanisms during American ginseng steaming. Because of the differences in the ginsenoside compositions of ginseng and American ginseng, the transformation pathways were not identical.

For the OCO- and OLE-type ginsenosides, the losses of glycosylated substituents were the main chemical transformation pathways. The possible ginsenoside products were deduced and are shown in Scheme 2(C) and 2(D). To our knowledge, the chemical transformations of the OCO- and OLE-type ginsenosides have not previously been systematically studied in American ginseng research.

The malonyl and acetyl ginsenosides released malonic and acetic acid by demalonylation and deacetylation reactions, respectively, to yield their corresponding neutral ginsenosides. Malonyl and acetyl ginsenosides have been reported to convert to neutral ginsenosides and to provide the acidic environment that further promotes the degradation of the neutral ginsenosides [10,11]. Under steaming, the acetyl ginsenosides first produced their corresponding neutral ginsenosides and subsequently transformed to the corresponding rare ginsenosides with low molecular weights. In our study, the rare acetyl ginsenosides were detected in red American ginseng for the first time, transformed from their corresponding acetyl ginsenosides by hydrolysis of the terminal glucosyl moiety and dehydration at C-20.
This result demonstrated that the acetyl ginsenosides presented two kinds of transformation pathways, which had not been reported previously. Our results provide the related chemical transformations of the four types of ginsenosides during American ginseng processing. The ginsenosides generated in the steaming of American ginseng may be helpful for evaluating pharmacological effects and defining the bioactive constituents.

Conclusions

In summary, HPLC-MSn-based multicomponent profiling was developed to assess the holistic qualities of dried and red American ginseng. The specific fragments of the four major types of ginsenosides were PPD at m/z 459, PPT at m/z 475, OCO at m/z 491, and OLE at m/z 455. The aglycones of the chemically derived ginsenosides produced specific fragments at m/z 441 and m/z 457 for Δ20(21)- or Δ20(22)-dehydrated PPD and Δ20(21)- or Δ20(22)-dehydrated PPT, respectively, while m/z 493 for 24,25-hydrated PPT was also observed in MSn. Based on the characteristic fragmentation pathways of the four types of ginsenosides, the structures of the 59 ginsenoside components in dried and red American ginseng were analyzed. The chemical markers that could discriminate dried and red American ginseng were discussed, and the possible transformation mechanisms were summarized. The ginsenoside composition of red American ginseng changed with increasing steaming time and temperature. The ginsenosides with higher molecular weights and higher polarity converted to rare ones with lower molecular weights and lower polarity via hydrolysis of the saccharide substituents. The malonyl and acetyl ginsenosides transformed to their corresponding neutral ginsenosides and rare acetyl ginsenosides. The 20(R)-ginsenoside epimers and the dehydrated and hydrated ginsenosides produced were the specific constituents of red American ginseng. The results discussed above are helpful for quality assessment and for standardizing the processing procedures of red American ginseng. Furthermore, the results also provide a scientific basis for research on the bioactive constituents responsible for the pharmacological efficacy of red American ginseng and for the safe usage of American ginseng in the clinic.

Conflicts of interest

The authors declare that they have no competing interests.
Homological Projective Duality via Variation of Geometric Invariant Theory Quotients

We provide a geometric approach to constructing Lefschetz collections and Landau-Ginzburg Homological Projective Duals from a variation of Geometric Invariant Theory quotients. This approach yields homological projective duals for Veronese embeddings in the setting of Landau-Ginzburg models. Our results also extend to a relative Homological Projective Duality framework.

Introduction

A fundamental question in algebraic geometry is how invariants behave under passage to hyperplane sections. In his seminal work [Kuz07], Kuznetsov studied this question extensively for the bounded derived category of coherent sheaves on a projective variety and developed a deep homological manifestation of projective duality. He suitably titled this phenomenon "Homological Projective Duality" (HPD). The HPD setup is as follows. One starts with a smooth variety X → P(V), together with some homological data called a Lefschetz decomposition, and constructs a Homological Projective Dual: Y → P(V*), together with a dual Lefschetz decomposition. This establishes a precise relationship between the derived categories of any dual complete linear sections X ×_{P(V)} P(L⊥) and Y ×_{P(V*)} P(L); we call this result of Kuznetsov the "Fundamental Theorem of Homological Projective Duality" [Kuz07, Theorem 6.3] (Theorem 2.4.10 below).

In this paper, we develop a robust geometric approach to constructing Homological Projective Duals as Landau-Ginzburg models. The idea, in the terminology of high-energy theoretical physics, is to pass to a gauged linear sigma model and "change phases" [HHP08, HTo07, DSh08, CDHPS10, Sha10]. In mathematical terms, this means first passing from a hypersurface to the total space of a line bundle [Isi12, Shi12], then varying Geometric Invariant Theory quotients (VGIT) to perform a birational transformation of the total space of this line bundle [BFK12, Kaw02, VdB04, Seg11, HW12, DSe12]. A nice consequence of our technique is that we can expand the Homological Projective Duality framework to the relative setting, i.e. all our results are proven in the relative setting over a general smooth base variety.

Specifically, using the semi-orthogonal decompositions from [BFK12], we construct both Lefschetz collections and Homological Projective Duals for a large class of quotient varieties. Our main application is to the Veronese embeddings P(W) → P(S^d W) for d ≤ dim W. After recovering the natural Lefschetz decomposition in this case, we prove that the Landau-Ginzburg pair ([W × P(S^d W*)/G_m], w), where the G_m-action is by dilation on W and w is the universal degree d polynomial, is a homological projective dual to the Veronese embedding. When d > 2, assuming a technical result (Conjecture 4.4.2), this Landau-Ginzburg pair is derived-equivalent to the pair (P(S^d W*), A), where A is a Z-graded sheaf of A∞-algebras defined explicitly, with µ_i = 0 for 2 < i < d. When d = 2, we recover the non-commutative variety from [Kuz05].

It should be noted that neither Kuznetsov's precise definition of a homological projective dual nor his Fundamental Theorem is available at this level of generality. We instead construct non-commutative varieties which are weak homological projective duals and prove that the conclusions of the Fundamental Theorem hold directly in our setting (Theorem 3.1.3) (and in the relative setting).
Additionally, we provide a new description of the derived category of a degree d hypersurface fibration, p : X → S, extending the case d = 2 in [Kuz05]. Using the same method of starting with a variety, passing to a derived-equivalent gauged LG model using the results of [Isi12, Shi12], and performing VGIT, we prove a relative version of a theorem of Orlov [Orl09]. This compares the derived category of p : X → S to the derived category of a related gauged Landau-Ginzburg model. Then, following calculations in [Sei11, Dyc11, Efi12], we produce a local generator for the derived category of this gauged Landau-Ginzburg model and determine its derived endomorphism sheaf of dg-algebras B over the base S. A version of the homological perturbation lemma applies to show that, assuming the technical result (Conjecture 4.4.2) mentioned above, B can be replaced by a quasi-isomorphic sheaf A of minimal A∞-algebras. We also give formulas for the first d higher multiplications of A.

Homological Projective Duality was exhibited by Kuznetsov for the double Veronese embedding P(W) → P(S^2 W). In this case, Kuznetsov [Kuz05] proves that a homological projective dual is given by Y = (P(S^2 W*), Cliff_0), where Cliff_0 is a sheaf of even Clifford algebras. As a consequence, Kuznetsov recovers a theorem of Bondal and Orlov [BO95] relating the derived category of an intersection of two even-dimensional quadrics to the derived category of a hyperelliptic curve. Moreover, his Homological Projective Duality framework provides analogous descriptions for arbitrary intersections of quadrics as in [BO02].

In [Kuz06], Kuznetsov constructs the dual to the Grassmannian of two-dimensional planes in a vector space W of dimension 6 or 7 with respect to the Plücker embedding, Gr(2, W) → P(Λ^2 W). In these cases, the homological projective dual is a non-commutative resolution of the classical projective dual: the Pfaffian variety. Among the many applications is a derived equivalence between two non-birational Calabi-Yau varieties of dimension 3, originally studied by Rødland as an example in mirror symmetry [Rød00]. This derived equivalence was proven independently by Borisov and Căldăraru [BC09], who demonstrated that generic Grassmannian Calabi-Yau varieties can be realized as moduli spaces of curves on the dual Pfaffian Calabi-Yau. Homological Projective Duality for the Grassmannian Gr(3, 6) was studied in [Del11]. A relative version of the 2-Veronese example was considered in [ABB11]; it was used to relate rationality questions to categorical representability. Another example of Homological Projective Duality is conjectured by Hosono and Takagi and supported by a proof of a derived equivalence between the corresponding linear sections [HTa11, HTa13a, HTa13b].

The main cases that this paper does not interpret in our larger framework are the Grassmannian and Hosono-Takagi examples [Kuz06, Del11, HTa11, HTa13a, HTa13b]. However, these examples do admit similar physical interpretations [DSh08, Hor11]. Thus, it is plausible that all known examples of HPD would fall within the scope of our methodological approach. The main issue is that the results of [BFK12] need to be expanded to handle the complexity of the VGIT theory which arises. Indeed, work of Addington, Donovan, and Segal [ADS12] uses more complex GIT stratifications to understand the Grassmannian case, albeit in a slightly less general context than HPD.

Definition 2.1.1.
A gauged Landau-Ginzburg model, or gauged LG model, is a quadruple, (Q, G, L, w), with Q, G, L, and w as above. We shall commonly denote a gauged LG model by the pair ([Q/G], w). To declutter the notation, given a quasi-coherent G-equivariant sheaf, E, we denote E ⊗ L^n by E(n). Given a morphism, f : E → F, we denote f ⊗ Id_{L^n} by f(n).

Following Eisenbud, [Eis80], one gives the following definition: a factorization of w consists of a pair of coherent G-equivariant sheaves, E_{-1} and E_0, together with morphisms, φ^E_0 : E_0 → E_{-1}(1) and φ^E_{-1} : E_{-1} → E_0, such that both compositions, φ^E_0 ∘ φ^E_{-1} and φ^E_{-1}(1) ∘ φ^E_0, are multiplication by w. We shall often simply denote the factorization (E_{-1}, E_0, φ^E_{-1}, φ^E_0) by E. The coherent G-equivariant sheaves, E_0 and E_{-1}, are called the components of the factorization, E. A morphism of factorizations, g : E → F, is a pair of morphisms of coherent G-equivariant sheaves, g_{-1} : E_{-1} → F_{-1} and g_0 : E_0 → F_0, making the obvious diagrams commute. We let coh([Q/G], w) be the Abelian category of factorizations with coherent components.

There is an obvious notion of a chain homotopy between morphisms in coh([Q/G], w). Let g_1, g_2 : E → F be two morphisms of factorizations. A homotopy between g_1 and g_2 is a pair of morphisms of quasi-coherent G-equivariant sheaves satisfying the usual homotopy relations. We let K(coh[Q/G], w) be the corresponding homotopy category, the category whose objects are factorizations and whose morphisms are homotopy classes of morphisms. There is a translation autoequivalence, [1], and, for any morphism, g : E → F, there is a natural cone construction; we write C(g) for the resulting factorization. It is an easy exercise to see that translation and the cone construction induce the structure of a triangulated category on the homotopy category, K(Qcoh[Q/G], w).

We wish to derive coh([Q/G], w); however, we lack a notion of quasi-isomorphism because our "complexes" lack cohomology. For the usual derived categories of sheaves, one can view localization by the class of quasi-isomorphisms as the Verdier quotient by acyclic objects. In [Pos09], Positselski defined the correct substitute in coh([Q/G], w) for acyclic complexes. This notion of acyclic complexes and the corresponding quotient also appears in [Orl11]. The following definitions give the correct analog of the derived category of sheaves for LG models, when Q is smooth. These definitions are due to Positselski, see [Pos09, Pos11]. Recall also the singularity category: the Verdier quotient of the bounded derived category of coherent sheaves by the thick subcategory of perfect complexes.

The following result, based on Koszul duality, is referred to in the physics literature as the σ-model/Landau-Ginzburg-model correspondence for B-branes, arising from renormalization group flow. We sometimes refer to it briefly as the "σ-LG correspondence".

Theorem 2.1.5. Let Y be the zero scheme of a section s ∈ Γ(X, E) of a locally-free sheaf of finite rank E on a smooth variety X. Assume that s is a regular section, i.e. dim Y = dim X − rank E. Then, there is an equivalence of triangulated categories, where w is the regular function determined by s under the natural isomorphism and G_m acts by dilation on the fibers.

Corollary 2.1.6. Let X be a smooth variety. Consider the trivial G_m-action on X and let χ be the identity character of G_m. For the Landau-Ginzburg model (X, G_m, O(χ), 0), one has an equivalence of categories between D(coh[X/G_m], 0) and D^b(coh X).

Proof. This is a degenerate case of the above theorem where E = 0. The statement can also be seen directly by observing that objects of D(coh[X/G_m], 0) are the same as objects of D^b(coh X) grouped into even and odd homological grading. Namely, the grading on D(coh[X/G_m], 0) induced by the G_m-action is twice the homological grading on D^b(coh X).
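To keep the preceding definitions concrete, here is a simple standard example of our own (not taken from the paper), in the notation just introduced: take Q = A^2 with coordinates x, y, take G trivial and L = O_Q (so the twist (1) is trivial), and let w = xy. Then a factorization of w is given by

```latex
\[
  E_{-1} = \mathcal{O}_Q
  \xrightarrow{\ \varphi_{-1} \,=\, y\ }
  E_0 = \mathcal{O}_Q
  \xrightarrow{\ \varphi_{0} \,=\, x\ }
  E_{-1}(1) = \mathcal{O}_Q ,
  \qquad
  \varphi_0 \circ \varphi_{-1} \;=\; \varphi_{-1}(1) \circ \varphi_0 \;=\; xy \;=\; w .
\]
```

This recovers the classical notion of a matrix factorization going back to [Eis80].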
Remark 2.1.7. We will also use the above corollary for global quotient stacks. The direct proof above can be seen to apply to factorizations in any Abelian category.

2.2. Semi-orthogonal decompositions. In this section we provide background material on semi-orthogonal decompositions and record a few facts we will need later. Standard references are [Bon89, BK90, BO95].

Definition 2.2.1. Let A ⊆ T be a full triangulated subcategory. The right orthogonal A^⊥ to A is the full subcategory of T consisting of objects B such that Hom_T(A, B) = 0 for any A ∈ A. The left orthogonal ^⊥A is the full subcategory of T consisting of objects B such that Hom_T(B, A) = 0 for any A ∈ A. The left and right orthogonals are naturally triangulated subcategories.

Definition 2.2.2. A weak semi-orthogonal decomposition of a triangulated category, T, is a sequence of full triangulated subcategories, A_1, . . . , A_m, in T such that A_i ⊂ A_j^⊥ for i < j and, for every object T ∈ T, there exists a sequence of morphisms 0 = T_m → T_{m−1} → · · · → T_1 → T_0 = T in which all the associated triangles are distinguished with cones A_k ∈ A_k. We shall denote a weak semi-orthogonal decomposition by ⟨A_1, . . . , A_m⟩. If the A_i are essential images of fully-faithful functors, Υ_i : A_i → T, we may also denote the weak semi-orthogonal decomposition accordingly.

Lemma 2.2.3. The assignments T → T_i and T → A_i appearing in the definition of a weak semi-orthogonal decomposition are unique and functorial.

Proof. This is standard.

We will need a few facts about idempotent completeness. The idempotent completion of A is the additive category whose objects are pairs (A, e) with A ∈ A and e : A → A idempotent. A morphism between (A, e) and (A', e') is a morphism f : A → A' in A such that fe = e'f = f.

Theorem 2.2.5. Let T be a triangulated category. The idempotent completion of T can be equipped uniquely with the structure of a triangulated category such that the natural inclusion of T is exact. Moreover, triangles in the idempotent completion are exactly retracts of triangles in T.

Lemma 2.2.6. Let T = ⟨A, B⟩ be a weak semi-orthogonal decomposition. Then, there is an induced weak semi-orthogonal decomposition of the idempotent completion of T given by the idempotent completions of A and B. Moreover, T is idempotent complete if and only if A and B are idempotent complete.

Proof. The second statement is an immediate corollary of the first statement. So, assume that we have a weak semi-orthogonal decomposition T = ⟨A, B⟩. We have natural inclusions of the idempotent completions of A and B into that of T. It is clear from the definition of the idempotent completion that the completion of A lies in the right orthogonal to the completion of B. Let (T, e) be an object of the idempotent completion of T. Then, T sits in a triangle coming from the decomposition T = ⟨A, B⟩, and there is a unique morphism of triangles induced by e. By Lemma 2.2.3, the induced endomorphisms e_a and e_b are idempotents. Thus, the corresponding sequence is a retract of the original exact triangle and is, by definition, an exact triangle. Thus, the conditions of a weak semi-orthogonal decomposition are satisfied.

Closely related to the notion of a semi-orthogonal decomposition is the notion of a left/right admissible subcategory of a triangulated category.

Definition 2.2.7. Let α : A → T be the inclusion of a full triangulated subcategory of T. The subcategory, A, is called right admissible if α has a right adjoint, denoted α^!, and left admissible if α has a left adjoint, denoted α^*. A full triangulated subcategory is called admissible if it is both right and left admissible.

Definition 2.2.8. A semi-orthogonal decomposition is a weak semi-orthogonal decomposition ⟨A_1, . . . , A_m⟩ such that each A_i is admissible. The notation is left unchanged.
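For orientation, the simplest nontrivial example of these notions (standard, and not specific to this paper) is Beilinson's decomposition of the projective line:

```latex
\[
  \mathrm{D}^{\mathrm{b}}\bigl(\operatorname{coh} \mathbb{P}^1\bigr)
  \;=\;
  \bigl\langle\, \mathcal{O}_{\mathbb{P}^1},\ \mathcal{O}_{\mathbb{P}^1}(1) \,\bigr\rangle .
\]
```

Here semi-orthogonality amounts to Hom^•(O(1), O) = H^•(P^1, O(−1)) = 0, and both pieces are admissible, so this is a semi-orthogonal decomposition in the sense of Definition 2.2.8.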
2.3. Elementary wall-crossings. In this section, we review part of the relationship between variations of GIT quotients [Tha96, DH98] and derived categories, following [BFK12]. While consideration of the general theory was inspirational to our approach to Homological Projective Duality, it is sufficient for this paper to consider only the simplest types of variations of GIT quotients, namely elementary wall crossings.

Let Q be a smooth, quasi-projective variety and let G be a reductive linear algebraic group. Let σ : G × Q → Q denote an action of G on Q. Recall that a one-parameter subgroup, λ : G_m → G, is an injective homomorphism of algebraic groups. From λ, we can construct some subvarieties of Q. We let Z^0_λ be a choice of connected component of the fixed locus of λ on Q. Set

Z_λ := {q ∈ Q : lim_{t→0} λ(t) · q ∈ Z^0_λ}.

The subvariety Z_λ is called the contracting locus associated to λ and Z^0_λ. If G is Abelian, Z^0_λ and Z_λ are both G-invariant subvarieties. Otherwise, we must consider the orbits

S_λ := G · Z_λ.

We will be interested in the case where S_λ is a smooth closed subvariety satisfying a certain condition. To state this condition we need the following group attached to any one-parameter subgroup.

Definition 2.3.1. Assume Q is a smooth variety with a G-action. An elementary HKKN stratification of Q is a disjoint union obtained from the choice of a one-parameter subgroup λ : G_m → G, together with the choice of a connected component, denoted Z^0_λ, of the fixed locus of λ, such that
• S_λ is closed in Q.

We will need to attach an integer to an elementary HKKN stratification. We restrict the relative canonical bundle ω_{S_λ/Q} to any fixed point q ∈ Z^0_λ. This yields a one-dimensional vector space which is equivariant with respect to the action of λ.

Definition 2.3.2. The weight of the stratum S_λ is the λ-weight of ω_{S_λ/Q}|_{Z^0_λ}. It is denoted by t(K).

Furthermore, given a one-parameter subgroup λ, we may also consider its composition with inversion, −λ(t) := λ(t^{−1}) = λ(t)^{−1}, and ask whether this provides an HKKN stratification as well. This leads to the definition of an elementary wall-crossing, (K_+, K_−).

Theorem 2.3.4. Let Q be a smooth, quasi-projective variety equipped with the action of a reductive linear algebraic group, G. Let w ∈ H^0(Q, L)^G be a G-invariant section of a G-equivariant invertible sheaf, L, and assume that L has weight zero on Z^0_λ. Suppose we have an elementary wall-crossing, (K_+, K_−). a) If µ > 0, then there are fully-faithful functors and a semi-orthogonal decomposition; b) if µ < 0, then there are likewise fully-faithful functors and a semi-orthogonal decomposition.

Proof. This is [BFK12, Theorem 3.5.2].

The categories, D(coh[Z^0_λ/C(λ)], w_λ)_j, appearing in Theorem 2.3.4 are the full subcategories consisting of objects of λ-weight j in D(coh[Z^0_λ/C(λ)], w_λ). For more details, we refer the reader to [BFK12]. In our situation, we will only need the conclusion of the following lemma. We set Y_λ := [Z^0_λ/(C(λ)/λ)].

Lemma 2.3.5. We have an equivalence between D(coh Y_λ, w_λ) and the weight-zero subcategory D(coh[Z^0_λ/C(λ)], w_λ)_0. Further, assume that there is a character, χ : C(λ) → G_m, restricting to the identity on λ. Then, twisting by χ provides an equivalence between D(coh[Z^0_λ/C(λ)], w_λ)_j and D(coh[Z^0_λ/C(λ)], w_λ)_{j+1}.

Proof. This is Lemma 3.4.4 of [BFK12]; we give the very simple and short proof here. A quasi-coherent sheaf on Y_λ is a quasi-coherent C(λ)-equivariant sheaf on Z^0_λ for which λ acts trivially, i.e. of λ-weight zero. For the latter statement, just observe that twisting by χ is an autoequivalence of D^b(coh[Z^0_λ/C(λ)]) which brings range to target, and its inverse does the reverse.
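As a toy illustration of these notions (our example, not taken from the paper), let Q = A^{a+b} with coordinates (x, y), let G = G_m act by t · (x, y) = (tx, t^{−1}y), and take λ(t) = t. Then

```latex
\[
  Z^0_\lambda = \{0\}, \qquad
  Z_\lambda = \{(x, y) : y = 0\} \cong \mathbb{A}^a, \qquad
  S_\lambda = G \cdot Z_\lambda = Z_\lambda ,
\]
\[
  t(K) \;=\; \lambda\text{-weight of } \omega_{S_\lambda/Q}\big|_{Z^0_\lambda}
  \;=\; \lambda\text{-weight of } \det\bigl(N_{S_\lambda/Q}\bigr)^{\vee}\big|_{0}
  \;=\; b ,
\]
```

since the normal directions to S_λ are the y-coordinates, each of λ-weight −1. Replacing λ by −λ exchanges the roles of x and y and gives weight a, so the pair of stratifications forms an elementary wall-crossing.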
Under the natural isomorphism, V * ⊗ V ∼ = End(V ), the identity map on V corresponds to an element u ∈ V ⊗ V * . We define a section by taking the image of u under the isomorphism above. Definition 2.4.1. The zero locus of θ X is called the universal hyperplane section of f . It is denoted by X . The universal hyperplane section comes equipped with two natural morphisms, p : X → X and q : X → P(V * ). The fiber of q, X H , over H ∈ P(V * ) is exactly the hyperplane section of X corresponding to H. Remark 2.4.2. Recall that when X is smooth, the projective dual to X under the embedding f , is the closed subset with its reduced, induced scheme structure. Thus X ∨ is the non-regular, i.e. critical, locus of q : X → P(V * ) in P(V * ). Homological Projective Duality is a phenomenon that can be considered as a lifting of the notion of classical projective duality to non-commutative geometry. The starting data for HPD is a smooth variety, X, together with a map to a projective space, f : X → P(V ), and a special type of a semi-orthogonal decomposition called a Lefschetz decomposition. We now provide the setup to define a Lefschetz decomposition. Definition 2.4.3. Let B be an algebraic variety and T be a monoidal triangulated category. A B-linear structure on T is an exact monoidal functor We will often use the above definition when T is a subcategory of D b (coh X) and X is an B-scheme. The functor F will implicitly be assumed to be the pullback functor. Now let B = P(V ) and consider a P(V )-linear category T with respect to F . To simplify notation, denote by (s) the functor of tensoring with F (O(s)). Definition 2.4.5. A Lefschetz decomposition of a P(V )-linear category T is a semi-orthogonal decomposition of the form, T is a chain of admissible subcategories of T and A s (s) denotes the essential image of the category A s after application of the functor (s). Definition 2.4.6. A dual Lefschetz decomposition of a P(V * )-linear category T is a semiorthogonal decomposition of the form, T is a chain of admissible subcategories of T and B s (s) denotes the essential image of the category B s after application of the functor (s). Now consider a morphism f : X → P(V ). The most important property of a Lefschetz decomposition is that it induces a semi-orthogonal decomposition on the derived category of any linear section of X. Proposition 2.4.7. Consider a morphism f : X → P(V ) and a Lefschetz decomposition with respect to f * . Let L ⊆ V * be a linear subspace of dimension r and of the inclusion and derived restriction to X L . Proof. Let δ : X L → X be the inclusion. The statement is equivalent to the fact that the restriction δ * : D b (coh X) → D b (coh X L ) is fully-faithful on the full subcategory Let A s ∈ A s (s) and A t ∈ A t (t). Restrict the Koszul resolution on L to X to obtain an exact complex Applying global sections yields an exact sequence of hypercohomology A Lefschetz decomposition also induces a semi-orthogonal decomposition on the universal hyperplane section X with respect to f and similarly on the family of linear sections over any L ⊆ V * , X L := X × P(V * ) P(L). Let π L denote the natural map from X L to P(L) and define A k (k) D b (coh P(L)) to be the full triangulated subcategory of D b (coh X × P(L)) generated by objects F G, with Proposition 2.4.8. For any Lefschetz decomposition, there is an associated semi-orthogonal decomposition, where D L is defined as the right orthogonal to Proof. 
Notice that we get a semi-orthogonal decomposition Now, consider X × P(L) with the Segre embedding and apply Proposition 2.4.7 to get the result. The following is Definition 6.1 of [Kuz07]. Definition 2.4.9. Given f : X → P(V ) and a Lefschetz decomposition A 0 , A 1 (1), . . . A i (i) , a Homological Projective Dual Y is an algebraic variety together with a morphism g : Y → P(V * ) and a fully-faithful Fourier-Mukai transform Φ P with kernel The Fundamental Theorem of HPD relates linear sections in X with respect to f to their dual linear sections of Y with respect to g. Let N be the dimension of V and L ⊂ V * be a linear subspace of dimension r. Define, and Theorem 2.4.10 (Fundamental Theorem of Homological Projective Duality). Let Y → P(V * ) be a homological projective dual to X → P(V ) with respect to the Lefschetz decomposition {A i } in the sense of Definition 2.4.9. With the notation above we have the following: Then there exist semi-orthogonal decompositions, Proof. This is [Kuz07, Theorem 6.3]. Remark 2.4.11. Figure 1 is a useful representation of the pieces appearing in the semiorthogonal decompositions in the theorem above. The boxes themselves represent what Kuznetsov calls primitive subcategories a s := A s /A s+1 . The longer vertical line is placed at r, the dimension of L. The shaded boxes to the right of the long vertical line represent the terms of the perpendicular to C L in D b (coh X L ). The shaded boxes to the left of the vertical line represent the terms of the perpendicular to C L in the derived category of the homological projective dual Y L . In the i th column, the category generated by the boxes below the staircase correspond to A i−1 and the category generated by the boxes above the staircase give B j−i+1 . Figure 1. Kuznetsov's image of Lefschetz collections and their duals Remark 2.4.12. Homological Projective Duality is a duality in the following sense. . With respect to this Lefschetz decomposition, Kuznetsov shows that X → P(V ) is a homological projective dual to Y → P(V * ). In our setting, Kuznetsov's Fundamental Theorem of Homological Projective Duality does not apply. Instead, we are forced to prove a version of Kuznetsov's Fundamental Theorem of Homological Projective Duality directly in the setup that we consider in this paper, stated below as Theorem 3.1.3. Theorems 2.3.4 and 2.1.5 are the central tools in our approach. Let X be an S-scheme and E be a locally-free coherent sheaf over S. Let f : X → P S (E) be an S-morphism. We now consider Homological Projective Duality in the relative setting. This was already studied by Kuznetsov when E is the trivial bundle [Kuz07, Theorem 6.27] and, in the case of relative 2-Veronese embeddings, by Auel, Bernardara, and Bolognesi [ABB11, Theorem 1.13]. The definition of a Lefschetz decomposition extends to the relative setting by replacing the projective space P(V ) by the projectivization P S (E) and Definition 2.4.13. Given an S-morphism f : X → P S (E) and a Lefschetz decomposition together with an S-morphism g : where Φ denotes the essential image of a fully-faithful P S (E)-linear functor We may informally refer to any weak Homological Projective Dual Y that also satisfies a version of Theorem 2.4.10 as a Homological Projective Dual. Remark 2.4.14. 
The difference, between Kuznetsov's definition of Homological Projective Dual and the above definition of a weak Homological Projective Dual is in the assumption that the functor Φ is given by a Fourier-Mukai kernel in the fiber product D b (coh Y × P(V * ) X ). Recent work by Ben-Zvi, Nadler and Preygel [B-ZNP13] shows that a Fourier-Mukai kernel Remark 2.4.15. In the relative setting, we will consider, instead of linear sections X L and Y L , the fiber products Homological Projective Duality and VGIT In this section we construct a weak homological projective dual to a GIT quotient provided we are also given the data of an elementary wall-crossing. We follow the notation of Section 2.3. 3.1. Lefschetz decompositions and HPD from elementary wall crossings. Let Q be a smooth quasi-projective variety equipped with the action of a reductive linear algebraic group G and a morphism p : Let λ be a one-parameter subgroup of G which determines an elementary wall-crossing λ admits a G-equivariant affine cover and S λ has codimension at least 2. We let µ = −t(K + ) + t(K − ) and we assume that µ ≥ 0. Assume that X := [Q + /G] is a smooth and proper variety. Notice that X is an S-scheme by composing the inclusion with p. We denote this map by Let E be a locally-free coherent sheaf of rank N over S. One can consider the projective bundle . This bundle comes with a projection π : P S (E) → S. We denote the relative bundle by O P S (E) (1). Recall that we have fully-faithful functors 3.4 and Corollary 2.1.6. Therefore, when writing semi-orthogonal decompositions, we will denote the essential images of the functors Υ + j by Z + j . By Lemma 2.3.5, we see that and that tensoring by L induces an isomorphism between Z + n and that of Z + n+d for any n ∈ Z. When µ ≥ 0, the elementary wall crossing induces a semi-orthogonal decomposition on D b (coh X), which is a Lefschetz decomposition when X is considered together with the map f to P S (E). The fineness of the Lefschetz decomposition depends on the λ-weight d of M. Proof. Taking d = −t(K − ) in Theorem 2.3.4, in combination with Corollary 2.1.6, gives a fully-faithful functor, , and a weak semi-orthogonal decomposition Since X is smooth and proper, D b (coh X) is saturated [BV03, Corollary 3.1.5] and so is any weak semi-orthogonal component. By [BK90, Proposition 2.8], all the subcategories are fully admissible and we can mutate to get a new semi-orthogonal decomposition We conclude the proof by noticing, as above, that tensoring by L induces an isomorphism between Z + n and that of Z + n+d for any n ∈ Z. Let X 0 be the incidence scheme in P S (E) × S P S (E * ) and let be the relative universal hyperplane section of the S-morphism f : X → P S (E). We will now set up an elementary wall crossing for an action of . The gauged Landau-Ginzburg model corresponding to the quotient [(U 1 E * ) − / G] obtained from the elementary wall crossing will be our weak homological projective dual. Here, α 1 ∈ G m and α 2 ∈ G m act by dilation on the fibers of the two respective bundles and the action of G on V Q (M) is induced by the equivariant structure of M. We now notice that the pull-back of the natural pairing to X × S P(V * ), i.e. the section whose zero-scheme is X , induces naturally a G×G m -invariant function w on U 1 E * . Indeed, as we have assumed that S λ had codimension at least two in Q, it follows that S λ 1 has codimension at least two in U 1 E * , as well. This, together with the fact We are now ready to state: Theorem 3.1.2. 
Let Q be a smooth quasi-projective variety equipped with the action of a reductive algebraic group G and a morphism p : [Q/G] → S. Let λ be a one-parameter subgroup of G which determines an elementary wall-crossing (K + , K − ) such that S 0 λ admits a G-equivariant affine cover and S λ has codimension at least 2 in Q. Assume that X = [Q + /G] is a smooth and proper variety and let f : is a weak homological projective dual of f with respect to the Lefschetz decomposition given by Further below we will give a proof of this theorem, based on Theorem 2.3.4 applied directly to the elementary wall-crossing given by λ 1 and Theorem 2.1.5, but we first state the other main result of this section. The action of G and λ 1 restrict to U 1 V to give an elementary wall-crossing and these structures are compatible with taking fibers. In particular We claim that the semi-orthogonal decompositions in the statement of Kuznetsov's Fundamental Theorem of Homological Projective Duality hold in our context. Then there exist semi-orthogonal decompositions, respectively, are used to illustrate that the corresponding categories are equivalent, even though they are embedded by different functors and in different categories. Similarly, the notation Remark 3.1.5. Figure 2 demonstrates the tabular representation of the Fundamental Theorem of Homological Projective Duality in the case of the above theorem. The long vertical line is placed at r, the dimension of L. The shaded boxes to the right of the vertical line represent the terms of the perpendicular to C V in D b (coh X × P S (E) P S (W)). The shaded boxes to the left of the vertical line represent the terms of the perpendicular to C V in the derived category of the homological projective dual D(coh[(U 1 V ) − / G], w). In the i th column, the category generated by the boxes below the staircase corresponds to A i−1 and the category generated by the boxes above the staircase gives B N −i+1 . Comparing with Remark 2.4.11, one should notice that this picture breaks the subcategories into Z i 's rather than the primitive subcategories, which are, in general, larger. Before proving Theorems 3.1.2 and 3.1.3 we will set up a more complete picture of the various elementary wall-crossings that appear in the proofs. For each subbundle V of E, we will set up what is, in principle, a variation of GIT quotients problem (we will specify four different elementary wall crossings arising in such a setup), which interpolates between the corresponding linear sections of X, X , the Landau-Ginzburg model ((U 1 E * ) − , G, O(β), w) which is the homological projective dual and the Landau-Ginzburg model whose derived category is equivalent to the category C V in the statement of Theorem 3.1.3. The proofs will then follow from applying Theorem 2.3.4 to some of these wall-crossings. Consider the variety equipped with a G × G m -action. As M is a G-equivariant invertible sheaf on Q we can take the induced G-action on V Q (M) and the trivial action on the other component V S (V). Meanwhile, we let G m act with weight −1 on the fibers of V Q (M) and with weight 1 on the fibers of V S (V). This describes the G × G m -action. There is another G m -action which acts by dilation only on the fibers of V Q (M) giving us in total an action of G = G × G m × G m but we will ignore this for now as it will not take part in the elementary wall crossings we consider. 
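Several displays from Section 2.3 did not survive extraction, and the constructions below lean on them, so we recall the standard HKKN conventions they presumably contained; this is a sketch following the usual conventions of [BFK12], not a verbatim restoration. For a one-parameter subgroup λ : G_m → G and a chosen connected component Z^0_λ of its fixed locus,

\[ Z_\lambda := \{ q \in Q \mid \lim_{t \to 0} \sigma(\lambda(t), q) \in Z^0_\lambda \}, \qquad S_\lambda := G \cdot Z_\lambda, \]

\[ P(\lambda) := \{ g \in G \mid \lim_{t \to 0} \lambda(t)\, g\, \lambda(t)^{-1} \text{ exists in } G \}, \]

and an elementary HKKN stratification is a disjoint union Q = Q_λ ⊔ S_λ with S_λ closed and with the natural map G ×_{P(λ)} Z_λ → S_λ an isomorphism. An elementary wall-crossing (K_+, K_−) is then a pair of such stratifications, Q = Q_+ ⊔ S_λ and Q = Q_− ⊔ S_{−λ}, attached to λ and −λ with the same fixed-locus component Z^0_λ. With µ = −t(K_+) + t(K_−) > 0, the conclusion of Theorem 2.3.4(a) is, schematically, a semi-orthogonal decomposition

\[ D(\operatorname{coh}[Q_+/G], w_+) = \big\langle D(\operatorname{coh}[Q_-/G], w_-),\ D(\operatorname{coh}[Z^0_\lambda/C(\lambda)], w_\lambda)_j,\ \ldots,\ D(\operatorname{coh}[Z^0_\lambda/C(\lambda)], w_\lambda)_{j+\mu-1} \big\rangle, \]

with the weights running over an interval of length µ whose endpoints depend on conventions we have not tried to pin down here. We now return to the wall crossings at hand.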
To describe these, let us consider four G × G m -invariant open subsets of Q V , and one-parameter subgroups given by λ 1 (α) := (λ(α), 1) Although what we do now is slightly more general than the GIT framework, it is convenient to assemble the four wall crossings and open sets into a GIT fan as seen in Figure 3. To clarify, in what follows, we set Proof. This is easily checked. Lemma 3.1.7. There are new elementary wall-crossings Proof. We treat the case where i = 1. The rest are similar. Denote by the open immersions. Notice that We will verify that these are elementary HKKN stratifications. As S ±λ is closed by assumption, it is clear that By assumption It remains to check that the maps and are isomorphisms. We will check this for the first map; the proof for the second one is similar. First, we can cancel the G m with the one appearing in P (λ 1 ) = P (λ) × G m and look at the map Now, we can forget the (V S (V)\0 V S (V) ) on both sides, as P (λ) acts trivially on this factor, and look at the map We have an isomorphism, τ * λ M| S λ ∼ = O G M| Z λ . This induces the desired isomorphism on the corresponding geometric vector bundles. The computation of the t(K ± i ) follows directly from the definitions. Remark 3.1.8. The fourth elementary wall crossing corresponding to U 4 V and λ 4 is not used in the proofs which follow. However, it is interesting to note that this wall crossing can be used to prove the semi-orthogonal decompositions appearing in the Fundamental Theorem of Homological Projective Duality in the case where the Lefschetz collection is the trivial one with A 0 = D b (coh X), which would give that As we noted above, X , the relative universal hyperplane section of the S-morphism f : X → P S (E), is the zero locus of the pullback of the canonical section, The same is true for a subbundle V and we will abuse notation by also writing this section as w when U i V is an open subset of Q V for general V even though w, in general, depends on both V and 1 ≤ i ≤ 4. Recall now, that we also have a fiber-wise G m -action on Q V which acts by dilation on the fibers of V Q (M) and trivially on the remaining fibers. This commutes with the G × G maction and hence can be inserted into all the elementary wall crossings of Lemma 3.1.7. Our total action on Q V is now by As before, since w corresponds to an element of Γ(S, Sym 1 (E ⊗ O S E * )) we see that it is invariant under the first two factors of G and has weight one with respect to the third factor. We are now ready to prove Theorems 3.1.2 and 3.1.3 Proof of Theorem 3.1.2. We will prove the statement for any V a subbundle of E * and then, setting V = E * , we will obtain the desired result. Consider the gauged Landau-Ginzburg model, (U 1 V , G, O(β), w) as above. By Theorem 2.1.5, there is an equivalence . Consider the elementary wall-crossing ((K + ) 1 , (K − ) 1 ) from Lemma 3.1.7. Since the new G maction commutes with the G×G m -action, it is also an elementary wall-crossing for the action of G. Notice that since, by assumption, the G-action has weight d > 0 on the fibers of V Q (M), we can choose the following connected component of the fixed locus of λ 1 : where Z 0 λ is the connected component of the fixed locus chosen for (K + , K − ). Finally, inside G we have, where the G m -action is trivial. Furthermore, for this choice, which admits a G-invariant affine cover as we have assumed the existence of such for S 0 λ . Therefore, we may apply Theorem 2.3.4 to obtain a weak semi-orthogonal decomposition Proof of Theorem 3.1.3. 
We will first use the wall crossing ((K + ) 2 , (K − ) 2 ) as in Lemma 3.1.7. Consider U 2 V together with the (restriction of the) function w and the additional G m -action scaling the fibers of V Q (M) (as before). All together we have a G-action on U 2 V . As the additional G m -action commutes, the elementary wall crossing ((K + ) 2 , (K − ) 2 ) persists. The fixed locus of λ 2 is Z 0 acts with the usual G-action on Q − and with a trivial G maction. Also notice that we have a character σ : G → G m given by projection onto the second factor and σ • λ 2 = Id. We now apply Theorem 2.3.4, Lemma 2.3.5, and Corollary 2.1.6, see also Remark 2.1.7. For notational purposes, we denote the essential image(s) of the embedding of the category D b (coh[Q − /G])(O(σ j )) by Y j . We obtain a semi-orthogonal decomposition For the elementary wall crossing ((K + ) 3 , (K − ) 3 ), consider U 3 V together with the function w and again with an additional G m -action scaling the fibers of V Q (M). All together we have a G-action on U 3 V . As the additional G m -action commutes, the elementary wall crossing of Lemma 3.1.7 remains valid. The fixed locus of λ 3 is and since C(λ 3 ) = C(λ) × G m × G m we may cancel the fibers of this line bundle with the first G m -action to obtain an isomorphism We apply Theorem 2.3.4 and Corollary 2.1.6 (see also Remark 2.1.7) to two cases. When µ > dr we obtain a semi-orthogonal decomposition When µ ≤ dr we obtain a semi-orthogonal decomposition where, as before, we denote by Z − k the essential images of the fully faithful functors Υ − k . Therefore, in the case µ ≤ dr we have (3.6) where the first line is (3.2), the second is (3.3), the third is (3.1), the fourth is (3.5), the fifth comes from mutating and pairing (starting with the last) Y j with the last d − 1 of the Z − k 's until we exhaust the Z − k 's, the sixth is Theorem 2.1.5 using the fact that X × P S (E) P S (W) is a complete linear section, and the last comes from equivalences between the essential images Z + j of Υ + j in D(coh([(U 2 V ) + / G], w)) and the essential images In the case where, dr < µ, as before, we still have a semi-orthogonal decomposition We can now proceed to the proof of the statements in the theorem. Setting V = E * and noticing that in this case X × P S (E) P S (W) = ∅ , we obtain the dual Lefschetz decomposition Equation 3.6, with C V = D b (coh X × P S (E) P S (W)), amounts to the statement of the theorem in the case where µ ≤ dr and equations 3.7 and 3.8, when setting C V = D(coh[(U 3 V ) − / G], w), amount to the statement of the theorem in the case where dr < µ. Remark 3.1.9. From the proof of the theorem we see that when dr < µ, the category C V has an interpretation as D(coh[(U 3 V ) − / G], w). On the other hand when µ ≤ dr we have Remark 3.1.10. Notice that in the special case that Q − = ∅, B i = 0 for i > N − µ d . In other words, the Lefschetz collection has fewer terms and a simpler form. Furthermore, the homological projective dual simplifies to with G acting as usual on Q and trivially on the factor P S (E * ). In addition, (U 2 V ) + = (U 2 V ) − and the linear sections of the homological projective dual can be directly compared to X × P S (E) P S (W). As a GIT fan, this is displayed in Figure 4. The examples below will have Q − = ∅. 3.2. A first example: Projective Bundles. In this section we provide an elementary and explicit example of Homological Projective Duality using the results of the previous section. The results presented here were first proved in [Kuz07]. 
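Since several displays in this subsection were lost, it may help to state the expected answer up front; this is a sketch, and the identification of the evaluation map is our reading of [Kuz07, Lemma 8.1] as quoted in Remark 3.2.1 below. Writing V = H^0(B, P) and assuming P is globally generated, set

\[ \mathcal{P}^\perp := \ker\big( V^* \otimes \mathcal{O}_B \to \mathcal{P}^* \big), \]

the annihilator of P under the evaluation map V ⊗ O_B → P. The zero locus computed below is then

\[ Z(w) = \{ (b, H) \in B \times \mathbb{P}(V^*) \mid \mathcal{P}_b \subseteq H \} \cong \mathbb{P}_B(\mathcal{P}^\perp), \]

so the homological projective dual of P_B(P) is expected to be the projective bundle P_B(P^⊥) over B.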
Let P be a locally-free coherent sheaf on B with, For the projective bundle, π : P B (P) → B, the relative invertible sheaf, O P(P) (1), provides a a map, j : P B (P) → P(V ). With the notation as in the previous section, we set where G m acts by fiber-wise dilation and χ(α) = α. It follows that By Remark 3.1.10, the weak homological projective dual reduces to with G m acting fiber-wise with weight 1. This is isomorphic to where G m acts fiber-wise with weight 1. Therefore, in this case, we can do more. Namely, we may apply Theorem 2.1.5 to see that where Z(w) is the zero locus of w in B × k P(V * ). Furthermore, by definition, Z(w) can be described as the set Remark 3.2.1. This is precisely the homological projective dual obtained by Kuznetsov in [Kuz07]. Also notice, as observed in [Kuz07, Lemma 8.1], that where P ⊥ is the locally-free coherent sheaf defined as the kernel of the evaluation map Remark 3.2.2. If we project down to P(V * ) then the fiber over s ∈ V * = H 0 (B, P) is precisely the vanishing of s. In particular, the image is the set of degenerate sections of P. When rk P = dim B + 1, this is precisely the projective dual of P B (P) (see Theorem 3.11 in [GKZ94]). However, unlike the usual projective dual, the homological projective dual is smooth. Derived categories of degree d hypersurface fibrations 4.1. An aside: Relative version of Orlov's theorem. We start by stating a well-known theorem of Orlov. Theorem 4.1.1. Let X be a hypersurface of degree d given as the zero-scheme of w ∈ Γ(P(V ), Proof. This is [Orl09, Theorem 2.13]. We will generalize this statement and the complete intersection version of it (also in [Orl09]) to families of complete intersections over a base. To this end, let S be a smooth, connected variety, E be a locally-free coherent sheaf of rank n on S and let L i be invertible sheaves on S, for 1 ≤ i ≤ c. Set U = i L i . Let Let q : P(E) → S be the projection. Choose sections s i ∈ Γ(S, S d i E ⊗ L i ), lets i be the corresponding sections in Γ(P(E), O P(E) (d i ) ⊗ q * L i ) and let X be the zero locus of (s 1 , . . . ,s c ) in P(E). Let w i be the associated regular functions on Q. Let w = i w i . Consider the G 2 m -action on Q given by . The function w becomes invariant with respect to the G m -action given by the one-parameter subgroup λ(α) = (α, 1). It is semi-invariant of weight 1 for the other G m -action. The one-parameter subgroup λ induces an elementary wall crossing. To see this, first observe that the fixed locus, Z 0 λ , is the zero section, 0 Q , of Q. We have Both are closed. The condition involving P (λ) is trivial when the ambient group is Abelian. We then have And, µ = rank E − d i . Note that C(λ) = G 2 m so C(λ)/λ ∼ = G m . By Corollary 2.1.6, we have an equivalence are projections and inclusions. Let π : Q → P(E) be the projection. The pullback, π * O P(E) (1) has weight 1 with respect to λ. Therefore, we have equivalences between the essential images Applying Theorem 2.3.4, we have the following statements. There is an isomorphism under which w corresponds to the regular function determined by ⊕ isi . Assuming further that ⊕ isi is a regular section (which implies that X is of pure codimension c) and applying Theorem 2.1.5, we have an equivalence Under this equivalence, we have an isomorphism of functors where q : X ⊂ P(E) → S also denotes the projection from X. Thus, we get the following Proposition 4.1.2. With the assumptions above, we have the following Proof. This follows from Theorem 2.3.4 and Theorem 2.1.5, as discussed above. 
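The displays in Theorem 4.1.1 and Proposition 4.1.2 were lost. For orientation, here is a schematic of Orlov's trichotomy [Orl09, Theorem 2.13] in the gauged Landau-Ginzburg language used in this paper; twists and indexing conventions are left unspecified, so this is a sketch of the expected shape rather than a quotation. Let X = Z(w) ⊂ P(V) be a smooth hypersurface of degree d, let n = dim V, and let G_m act on V(V) by dilation. Then, up to common twists:

\[
\begin{cases}
d < n: & D^b(\operatorname{coh} X) = \big\langle \mathcal{O}_X, \ldots, \mathcal{O}_X(n-d-1),\ D(\operatorname{coh}[\mathbb{V}(V)/\mathbb{G}_m], w) \big\rangle, \\[2pt]
d = n: & D^b(\operatorname{coh} X) \simeq D(\operatorname{coh}[\mathbb{V}(V)/\mathbb{G}_m], w), \\[2pt]
d > n: & D(\operatorname{coh}[\mathbb{V}(V)/\mathbb{G}_m], w) = \big\langle E_1, \ldots, E_{d-n},\ D^b(\operatorname{coh} X) \big\rangle,
\end{cases}
\]

where the E_i are twists of objects supported at the origin (residue-field factorizations). Proposition 4.1.2 should be the corresponding relative statement over S, with the exceptional line bundles tensored by D^b(coh S) and the affine LG model replaced by ([V_S(E)/G_m], w), as used in Corollary 4.1.3 and in Section 5 below.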
In the case when X is a family of degree d hypersurfaces, U is a invertible sheaf and we can write where the action of G m is by fiber-wise dilation. In the coming sections, we will restrict our attention only to this case. We record this in the following corollary. Corollary 4.1.3. Let s ∈ Γ(S, Sym d E ⊗ L) and let X ⊂ P(E) be the associated degree d hypersurface fibration over S, with structure map, q : P(E) → S. Proof. This is a special case of Proposition 4.1.2. A local generator and Morita theory. In this section, we continue within the setting presented in Section 4.1 with c = 1 and show that the gauged LG model ([V S (E)/G m ], w) is derived-equivalent to the pair (S, B w ) where B w is an equivariant sheaf of dg-algebras. However, it will be more convenient to work with the isomorphic gauged LG model ( The sheaf B w will be the derived equivariant endomorphism sheaf of algebras of a 'local generator' G of the derived category of this latter LG model. Throughout this section, we will make the further assumption that the subvariety defined by w = 0 in P S (E) is smooth. Let us recall our setup. We work over a base S which is a smooth connected variety and E is a locally-free sheaf S. Meanwhile, we have specialized to the case where U is an invertible sheaf on S. We have a G 2 m -action on given by . We also have a projection q : P(E) → S and have denoted by X, which is assumed to be smooth, the zero locus in P(E)of a section of O P(E) (d i )⊗ O P(E) q * U corresponding to the regular function w on Q. Recall that ). We will replace the category D(coh[Q − /G 2 m ], w) by the derived category of a G m -equivariant sheaf of dg-algebras over S. Let π : Q → S denote the projection. We shall also denote the projection, π : Q − → S. Recall that Since π is affine, we have an equivalence of D(coh[Q − /G 2 m ], w) with D(mod Z 2 R, w), the category of Z 2 -graded coherent factorizations over R. From now on we will be working in this latter category. Proof. Let C be the zero locus of w inside the relative spectrum Spec R. Then, there is an essentially surjective functor, Proof. We first check that F induces natural quasi-isomorphisms of chain complexes for any object E ∈ D(mod Z 2 R, w). Since B w := F (G), we have Therefore, where the last step follows from the fact that F commutes with • ⊗ O S L. Applying RΓ shows that F is fully-faithful on the smallest thick subcategory of D(mod Z 2 R, w) containing all the objects G ⊗ O S L for any equivariant invertible sheaf L on S. By Proposition 4.2.1, this is all of D(mod Z 2 R, w). Proposition 4.2.6. The essential image of F is the subcategory of perfect modules. Proof. Since by Proposition 4.2.1 D(mod Z 2 R, w) is generated by objects of the form G ⊗ O S L and F is fully-faithful by Proposition 4.2.2, the essential image of F is dense in the smallest thick subcategory containing the set of objects {B w ⊗ O S L} for L invertible. By Lemma 4.2.5, this is the subcategory of perfect modules. Finally, by Corollary 4.1.3 and Lemma 2.2.6, D(mod Z 2 R, w) is idempotent complete thus the essential image is also thick. We have thus proved the following: Proposition 4.2.7. There is an equivalence We now calculate the endomorphism sheaf of dg-algebras of G and obtain a more explicit description of B w . To this end, we will replace G with a quasi-isomorphic locally-free factorization in D(mod Z 2 R, w). 
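The display introducing the functor F above was lost in extraction. From the way F is used in Propositions 4.2.2-4.2.7 (F(G) = B_w, compatibility with − ⊗_{O_S} L, and essential image the perfect modules), F is presumably the graded Morita-type functor

\[ F(-) := \bigoplus_{i \in \mathbb{Z}} \mathbf{R}\mathcal{H}om_{R, w, \mathbb{Z}_2}\big( \mathcal{G}(i, 0), - \big), \]

regarded as taking values in sheaves of dg-modules over B_w = F(G) on S; we flag this as our reconstruction rather than a quotation of the lost text. With this in hand, we return to the locally-free replacement of G.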
Define a factorization by The Koszul differential d Koszul is given by the composition The relative 1-form γ is defined as where dw is the relative algebraic deRham differential of w. In local coordinates, where the x i are a basis of E, ∂ ∂x i is the corresponding basis of 1-forms in Λ(E * ), and dx i is the corresponding basis of 1-forms in Λ(E), we have Proof. This is a direct consequence of [BDFIK12, Theorem 3.9]. Since the equivalences above respect the grading shifts, we can now use the graded endomorphism algebra i∈Z Hom R,w,Z 2 (F(i, 0), F), which we still denote by B w , to calculate i∈Z RHom R,w,Z 2 (G(i, 0), G). Note that as an R := Sym E ⊗ O S Sym(U, U −1 )-module, we can write B w as Under this description, a basic local section (β ⊗ f ⊗ θ), denoted briefly as (β, f, θ), for β ∈ Λ • (E), f ∈ Sym(E) ⊗ O S Sym(U, U −1 ) and θ ∈ Λ • (E * ), corresponds to the endomorphism of F that acts on basic local sections of F by where the pairing θ, β is the natural pairing between Λ • (E * ) and Λ • (E) (in particular, the pairing is 0 unless θ and β live in the same wedge power, so this pairing is different from the contraction pairing). The sections of the sheaf B w have a dg algebra structure with differential ∂ induced by δ F and product structure, induced by composition of the endomorphisms, given by Finally, note that there are two gradings on B w , the internal Z-grading and the cohomological grading. Sections of Λ 1 E have internal degree −1 and cohomological degree −1, sections of Λ 1 E * have the opposite gradings; whereas sections of U have internal degree d and cohomological degree 2 and sections of Sym E have internal degree −1 and cohomological degree 0. 4.3. Transferring to an A ∞ -structure. We will be working with sheaves of A ∞ -algebras over S and modules over them. These will have an "internal" Z-grading in addition to the usual cohomological grading on such objects. We require that the restriction maps for these sheaves are always strict morphisms of A ∞ -algebras and modules. In this section we will prove the following theorem: Theorem 4.3.1. There exists a sheaf of graded A ∞ Sym(U, U −1 )-algebras (A, µ • ) with Moreover, (i) If d = 2, A is a sheaf of Clifford algebras. The Clifford relations are given in local coordinates, with x i a local basis of E and v i the dual basis, by A is a minimal A ∞ algebra with the following properties: • The multiplication µ 2 is given by the usual wedge product on A induced from Λ • (E * ). • µ k = 0 for 3 ≤ k ≤ d − 1 and, in local coordinates, we have Remark 4.3.2. Local calculations have been provided in the above statement as they are more explicit and easier to state. We will also give formulas for the global µ k later in Lemma 4.3.12 and Proposition 4.3.15 below. To prove Theorem 4.3.1, we first observe that A is the cohomology sheaf of algebras of a different Z-graded dg-algebra B, which is the same as B w except that it has a modified differential and is easily seen to be formal. The strategy is to use the homological perturbation lemma to obtain an A ∞ structure on A = H • (B) that makes it quasi-isomorphic to B w . To this end, let us consider the pair (F, d Koszul ) instead of (F, δ F ) where we had δ F = d Koszul +γ ∧ •. Let B be the endomorphism dg algebra of (F, d Koszul ). As we had for B w , we have with the same product structure described in the previous section, but with a differential now induced by d Koszul , different from the differential of B w . Definition 4.3.3. 
We denote the differential induced by d Koszul by ∂ : B → B and differential of B w by ∂ in keeping with standing notation for homological perturbation. Proof. The factorization, F, after forgetting the differential γ ∧ • becomes a chain complex quasi-isomorphic to Sym(U, U −1 ). Since F is locally-free, we have a quasi-isomorphism The latter chain complex is formal with cohomology exactly as claimed. Note that sections of U have internal degree d and cohomological degree 2 whereas sections of Λ 1 E * have internal degree 1 and cohomological degree 1. We now define the maps which will allow us to transfer the dg-structure on B to the A ∞ -structure on A . We define p : B → A to be the projection by the ideal generated by Sym E and Λ • E. Note that this is not a map of sheaves of algebras, but a map of sheaves of chain complexes. Next, we will define a map i : A → B. In order to do that, for each a ∈ Z ≥0 , consider the composition α a of the maps where the first map is induced by the map O S → E ⊗ O S E * corresponding to the identity in End O S (E), while the second one is induced by the wedge product. We define i 0 : A → B to be the obvious inclusion and i k : A → B by We can now define i by Lastly, we want to define a homotopy between ip and 1. We first define h 0 : B → B to take (β, f, θ) to (df ∧ β, θ) for basic local sections (β, f, θ) of B. Here, d is the deRham differential. One can then define Lemma 4.3.5. The morphisms, h, i, p, satisfy: Proof. This is a straightforward, but tedious, computation. It is suppressed. Proposition 4.3.6. There exists an A ∞ structure, µ, on A and a quasi-isomorphism Proof. Lemmas 4.3.4 and 4.3.5 guarantee that we can apply homological perturbation, as in [KS01], Section 2.4 in [Cra04], or [Mar04], which provides the desired µ and f . The general formulas for the higher products of Proposition 4.3.6 on A can be described as sums over ribbon trees with one root and d leaves such that the valency of any vertex is either 2 or 3. Each such tree T with k-leaves determines a term µ k T in the higher product µ k on A. Orienting T from the leaves to the root, we can explicitly describe the composition of maps that define µ k T as follows: • Figure 6. Ribbon trees contributing to the differential on A The higher products are given by where the sum is taken over all ribbon trees with k-leaves. Corollary 4.4.4 above is the first part of Theorem 4.3.1. Using the explicit description of the µ above, we will now verify the properties of the A ∞ -structure on A. In the process of doing so, we will make some calculations in local coordinates. More precisely, consider any affine open U ⊂ S where E and U are trivial and take {x i } to be a basis of E| U and u in U. We will denote the corresponding basis of Λ 1 E by {dx i } and the dual basis of Λ 1 E * by In these local coordinates, the formulas above can be rewritten as follows: • i : A → B is given locally by • h : B → B is given locally by where s = deg f + deg β (when f β is a constant, h takes the element (β, f, θ) to zero). • ∂ − ∂ is given locally by: Remark 4.3.8. In the arguments that follow, we will make use of a Z 2 -grading on B different from the internal and cohomological gradings we have considered so far. This is only for the purposes of the arguments below and will help us in simplifying the computation of the A ∞ products on A. 
The two Z-gradings consist of: • The f -degree on B is a Z-grading which comes from considering Sym(E) with its natural grading and the other factors of the tensor product in degree zero. • The β-degree on B is a Z-grading which comes from considering Λ(E) with its natural grading and the other factors of the tensor product in degree zero. Lemma 4.3.9. The f -degree and β-degree have the following properties: Proof. This follows immediately from the definitions. Remark 4.3.10. The number of trees contributing to each A ∞ product is finite. Indeed, any tree containing a long enough chain will not contribute to the summation because both h and ∂ − ∂ increase the β-degree and all elements of positive β-degree are in the kernel of p. Lemma 4.3.11. The sheaf of graded A ∞ -algebras A is minimal, i.e., the differential µ 1 on A is trivial. Proof. Consider the trees in Figure 6. By the tree summing formula (4.2), the differential is given by Since ∂ −∂ has f -degree d−1, and p kills everything of positive f -degree we have p(∂ −∂) = 0 (the f -degree is an N-grading). Lemma 4.3.12. The multiplicative structure µ 2 on A can be described as follows: (i) If d > 2, µ 2 is induced by the ribbon tree with two leaves and a single trivalent vertex. In particular, the multiplication is given by the usual wedge product on A induced from the wedge product on Λ • (E * ). (ii) If d = 2, µ 2 is induced by the ribbon tree described in (i) plus the ribbon tree with two leaves and one bivalent vertex connected to the second leaf. In particular, the multiplication satisfies the Clifford relations, given locally by and globally by where d is the deRham differential and s 1 , s 2 are sections of Λ 1 E * . Proof. Let a i = (r i , v i ) and a j = (r j , v j ) be in A with v i , v j ∈ ∂ ∂x 1 , . . . , ∂ ∂x n . (i) Any tree T contributing to the summation formula for µ 2 will have exactly one trivalent vertex (since it has two leaves). Moreover, if we let m be the number of bivalent vertices, we see that both h and (∂ − ∂) appear m times in T . However, by Lemma 4.3.9, h has f -degree −1, while ∂ − ∂ has f -degree d − 1, it follows that before applying p, the operator associated to T will have f -degree m(d − 2). Since d > 2 the quantity m(d − 2) is positive if and only if m > 0. Consequently, if m > 0 then the operator proceeding p lies in the kernel of p. Therefore, the only tree contributing to the summation formula is the one with m = 0 bivalent vertices. (c.f. Figure 7) Now, by what we argued above, to calculate µ 2 (a i , a j ) we need to compute p(i(a i )i(a j )). We first note that we only need to consider the β-degree 0 part of i(a i ) since all the higher β-degree terms, after multiplication, will be sent to 0 via p. Thus, without loss of generality, we may assume i(a i ) = (1, r i , v i ). By the definition of the product structure on B the only part of i(a j ) contributing to the product is the β-degree 1 component (all the others will vanish when multiplied with i(a i )). More precisely, we can assume i( (ii)Again, any tree T contributing to the summation formula will have exactly one trivalent vertex since it has two leaves. Moreover, if we let m be the number of bivalent vertices, then h and (∂ − ∂) appear m times in T . Since p(∂ − ∂) = 0, the last vertex (the one connected to the root) must have valency 3. We have p(m(h(−), −)) = 0, as, by Lemma 4.3.9, h(b) ∈ B >0,β for any b, and m(B >0,β , B) ⊆ B >0,β . It follows that the last vertex must be connected to the first leaf. 
Thus, µ 2 T (a i , a j ) = p(m(i(a i ), P (a j )), where P is the remaining part of the operator which is attached to the ribbon tree with the edges connecting the leaf and the root to the last vertex removed and h replacing p. Now, as we saw in part (i), without any loss of generality, we can assume that i(a i ) = (1, r i , v i ). Moreover, as before, the only part of i(a j ) contributing to m is the β-degree 1 component. Since the β-degree of h is ≥ 1 it follows that m has to be 0 or 1. If m = 0 then the tree that we obtain is the one we had in (i). If m = 1 we obtain the tree with one trivalent vertex and one bivalent one connected to the second leaf. (c.f. Figure 7) To compute the contribution of the latter tree we first note that only the β-degree 1 part of the output of P will contribute to the operator induced by T . Now, the operator P is just h(∂ − ∂)i so, since h already has β-degree > 0 we can assume, with loss of generality, that i(a j ) = (1, r j , v j ) and that only the β-degree 0 part of (∂ − ∂) will contribute to the operator. Last, but not least, we also see that only the β-degree 1 term in h will contribute to the sum. We now compute P (a j ). We have = h(1, ∂w ∂x j r j , 1) = 1 2 k (dx k , ∂ 2 w ∂x k ∂x j r j , 1). (4.6) Therefore, the total contribution of T is, Thus, we have proved that µ 2 (a i , a j ) = (r i r j , v i ∧ v j ) + ( 1 2 r i r j ) ∂ 2 w ∂x i ∂x j and this is a Clifford multiplication on A. Similarly, we have that µ 2 (a j , a i ) = (r i r j , v j ∧ v i ) + ( 1 2 r i r j ) ∂ 2 w ∂x i ∂x j and therefore µ 2 (a i , a j ) + µ 2 (a j , a i ) = r i r j ∂ 2 w ∂x i ∂x j which gives the Clifford algebra structure on A. The global calculation follows directly from this local version. The following proposition follows a similar argument to those in [Sei11]: Proposition 4.3.13. The A ∞ -structure on A coming from the tree summation formula agrees with the trivial A ∞ -structure up to order d − 1. Proof. We want to show that any tree with k leaves, for k < d, contributes a trivial operator to the summation and therefore µ k = 0 for k < d. Consider now the A ∞ product, µ k for k > 2. Note that for k > 2, any tree with no bivalent vertex does not contribute to the summation as the term h(i(a)i(a )) = h(i(aa )) = 0 necessarily appears as the output of a trivalent vertex and thus the operator induced by such a tree would be 0. Therefore, the number m of bivalent vertices, is at least 1. By Lemma 4.3.9, h has f -degree −1, while ∂ − ∂ has f -degree d − 1. Since T has k-leaves, it has exactly k − 1 trivalent vertices. Therefore h appears in the operator at most k − 2 + m times while ∂ − ∂ appears m-times. Since m preserves the f -degree, it follows that before applying p, the operator will have f -degree m(d − 1) − (k − 2 + m). Therefore if m(d − 1) − (k − 2 + m) > 0 then the operator vanishes. In summary, in order to get a non-trivial contribution from trees with m bivalent vertices one must have (4.7) Thus, µ k can be non-trivial only when d ≤ k. Lemma 4.3.14. Any tree providing a nontrivial contribution to µ d has exactly 1 bivalent vertex. Proof. Equation (4.7) for k = d tells us that the number of bivalent vertices must be 1. The following proposition completes the proof of Theorem 4.3.1: where v i j are not necessarily distinct elements of the basis v i = ∂ ∂x i ; and globally by Proof. We consider a j = (r j , v i j ) for j = 1, . . . , d and v i j = ∂ ∂x i j . Before calculating µ d (a 1 , . . . 
, a d ) we first note that there is only one tree contributing to the summation formula ( Figure 8). Indeed, let T be any tree that contributes to the summation formula for µ d (a 1 , . . . , a d ). By Lemma 4.3.14, T has only 1 bivalent vertex. On the other hand, since T has d leaves, it has exactly d − 1 trivalent vertices. Moreover, the bivalent vertex has to be connected to one of the last two leaves since otherwise there is at least one h appearing before ∂ − ∂ and that means the f -degree of the output of the operator given by T (before applying p) has f -degree greater than or equal to 1 and thus it lies in the kernel of p. Therefore, the vertex connected to the root has valency 3. Furthermore, arguing as in Lemma 4.3.12 it follows that this last vertex is connected to a leaf. Thus, µ d T (a 1 , . . . , a d ) = p(m(i(a 1 ), P (a 2 , . . . , a d )), where P is the remaining part of the operator which, as before, is attached to the ribbon tree with the edges connecting the leaf and the root to the last vertex removed and h replacing p. In summary, we have established that the last/left-most vertex appears as in Figure 8. Moreover, we note that, without any loss of generality, we can assume i(a 1 ) = (1, r 1 , x i 1 ). This follows from the same argument as above, when we calculated the multiplication on A. This also forces P (a 2 , . . . , a d ) to have β-degree 1 (or, more precisely, all other terms in P (a 2 , . . . , a d ) will vanish when multiplied with i(a 1 )). We now argue that the one bivalent vertex cannot be connected to the penultimate leaf since, in this case, the operator P would have β-degree ≥ 2 and thus the operator induced by T would be 0. This follows inductively by tracing the β-degree of the operator induced by the tree we are considering. We thus conclude that there is only one tree contributing to the summation formula giving µ d and we can calculate its contribution using a similar calculation as in Lemma 4.3.12 which yields the desired result. The formula for the global version follows directly from this local version. 4.4. The derived category and a technical assumption. Recall that A and A ∞ Amodules are assumed to have strict restriction morphisms. We start with the following definition: Definition 4.4.1. For a sheaf of graded A ∞ algebras A, let D pe (Mod ∞,Z A) be the smallest thick triangulated subcategory of the derived category of strictly unital graded A ∞ Amodules containing all modules of the form A ⊗ O S L for graded invertible sheaves L on S. If we wish to emphasize the underlying variety, we will also denote this category by D pe (Mod ∞,Z (S, A)). Recall that for a sheaf of graded dg O S -algebras B, D pe (Mod Z B) is the full subcategory of the derived category of dg B-modules, classically generated by objects of the form B w ⊗ O S L. The following conjecture is believed to be true, however a proof is beyond the scope of this paper. Remark 4.4.3. When S = Spec k, this conjecture is known by the results in [LH03], Section 2.4.2; in which case, the category of A ∞ -modules has a model structure where the weak equivalences are quasi-isomorphisms and every object is fibrant and cofibrant. This makes considerations of derived functors more straightforward. For the general case, one would need to define the appropriate model structures and prove the derived-equivalences induced by the diagram in the proof of Lemma 2.4.2.3. in loc. cit.. 
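The statement of Conjecture 4.4.2 itself was lost in extraction. From its uses in Theorem 5.2 and Remark 5.6, and from Remark 4.4.3 (the case S = Spec k being [LH03], Section 2.4.2), it is presumably the comparison

\[ D_{\mathrm{pe}}(\operatorname{Mod}_{\mathbb{Z}}(S, \mathcal{B})) \simeq D_{\mathrm{pe}}(\operatorname{Mod}_{\infty, \mathbb{Z}}(S, \mathcal{A})) \]

whenever (A, µ^•) is a minimal A_∞-model of a sheaf of graded dg-algebras B as in Section 4.3, in particular for B = B_w and A = H^•(B_w) above. We record this reconstruction only for the reader's orientation.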
In the notation of Section 4.3, we have the following: Homological Projective Duality for d-th degree Veronese embeddings We will now apply the results of the previous two sections to construct a homological projective dual to the degree d Veronese embedding. In view of potential applications, we will do this in the relative setting. Then, if d = 2, we will recover Kuznetsov's construction for degree two Veronese embeddings [Kuz05] (when S is a point) and the relative version in [ABB11]. Let S be a smooth, connected variety and P be a locally-free coherent sheaf on S. We consider the relative degree d Veronese embedding for d > 0, g d : P S (P) → P S (S d P). Notice that g * d (O P(S d P) (1)) ∼ = O P(P) (d). Consider the Lefschetz decomposition D b (coh P S (P)) = A 0 , . . . , A i (i) where the subcategories A j are defined to be where k = rk P − d( rk P d − 1). We will first consider P S (P) as a quotient and use the results of Section 3. Let us take Q = V S (P) and consider the G = G m -action given by fiber-wise dilation. Take the character given by χ(α) = α d and the invertible sheaf M = O(χ) on Q. Taking the one-parameter subgroup λ : G m → G m given by λ(α) = α −1 , we see that we have an elementary wall crossing with S λ = 0 V S (P) , S −λ = V S (P). We get that [Q + /G] = P S (P), where d is the weight of the λ-action on M. This shows that M induces the morphism g d : P(P) → P(S d P * ). Using Proposition 3.1.1, we recover the Lefschetz decomposition with The universal degree d polynomial w is given by w := (g d × 1) * θ ∈ Γ(P S (P)) × S P S (S d P * ), O P S (P) (d) O P S (S d P * ) (1)), where θ is the tautological section in Γ(P S (S d P) × S P(S d P * ), O P(S d P) (1) O P S (S d P * ) (1)). The zero locus w in P S (P) × S P S (S d P * ) is the universal hyperplane section X of P S (P) with respect to the embedding g d . We have thus constructed a Landau-Ginzburg model which is a homological projective dual. Theorem 5.1. The gauged Landau-Ginzburg model ([V S (P) × S P S (S d P * )/G m ], w) is a weak homological projective dual to P S (P) with respect to the embedding g d and the Lefschetz decomposition constructed above. Moreover, we have: Proof. By directly applying Theorems 3.1.2 and 3.1.3 to the elementary wall crossing described above, and simplifying as described in Remark 3.1.10, we get the desired result. For the first part of the theorem, we can alternatively consider X as a degree d hypersurface fibration over P(S d P * ) and use our relative version of Orlov's theorem (Corollary 4.1.3) with S = P S (S d P * ), E = π * P and U = O P S (S d P * ) (1) to get the decomposition D b (coh X ) = D(coh[V P S (S d P * ) (π * P)/G m ], w), A 1 (1) ⊗ D b (coh P S (S d P * )), . . . , A i (i) ⊗ D b (coh P S (S d P * )) . Observing that V P S (S d P * ) (π * P) ∼ = V S (P)× S P S (S d P * ), we obtain the required semi-orthogonal decomposition. Using this result, we can state the following: Theorem 5.2. Let S be a smooth, connected variety and P be a locally-free coherent sheaf over S. Let d ≥ 3 and d ≤ rank P. 
Assuming Conjecture 4.4.2, there exists a sheaf of minimal A ∞ -algebras (A, µ) on P S (S d P * ) with A = Sym(uO P S (S d P * ) (1), u −1 O P S (S d P * ) (−1)) ⊗ Λ • P * , and • If d > 2: µ i = 0 for 2 < i < d and, in local coordinates, • If d = 2: µ i = 0 for i > 2 and (A, µ 2 ) is a sheaf of Clifford algebras with Clifford relations given, in local coordinates, by such that the non-commutative variety is a weak homological projective dual to P S (P) with respect to the embedding g d and the Lefschetz decomposition constructed above. Theorem 5.3. With the same assumptions as in Theorem 5.2 above, we have the following (i) The perfect derived category of the non-commutative variety (P S (S d P * ), A) admits a dual Lefschetz collection D pe (Mod ∞,Z (P S (S d P * ), A)) = B j (−j), . . . , B 1 (−1), B 0 (ii) Let V ⊂ S d P * be a subbundle and W the orthogonal subbundle in S d P. Assume that P S (P) × P S (S d P) P S (W) is a smooth, complete linear section, i.e. dim(P S (P) × P S (S d P) P S (W)) = dim(P S (P)) − r. Then, there exist semi-orthogonal decompositions: Proof. This follows from setting S equal to Spec k in Theorem 5.3. Remark 5.6. If d = 2 then, as we have noticed in the previous section, A is actually a sheaf of Clifford algebras, therefore by [Ric10], we get an equivalence D pe (Mod Z (P(S 2 V * ), B w )) ∼ = D pe (Mod Z (P(S d V * ), A)), without the need for Conjecture 4.4.2. Now, using Proposition 3.7 in [Kuz05], one has D pe (Mod Z (P(S 2 V * ), A)) ∼ = D b (mod(P(S d V * ), B 0 )), where B 0 is the sheaf of even Clifford algebras defined in [Kuz05]. This recovers the homological projective dual in [Kuz05]. Similarly, for a smooth complete intersection of quadrics, we get the same description as in loc. cit. using Corollary 5.5. The relative versions in [Kuz05] and [ABB11] follow similarly.
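Several displays in Section 5 were lost in extraction. Based on [Kuz05] and on Lemma 4.3.12 and Proposition 4.3.15 above, they presumably read as follows; normalizations and twists are left loose, so this is a reconstruction for orientation only. The Lefschetz pieces for the relative d-th Veronese embedding are

\[ \mathcal{A}_0 = \cdots = \mathcal{A}_{i-1} = \big\langle \mathcal{O}_{\mathbb{P}_S(\mathcal{P})}, \ldots, \mathcal{O}_{\mathbb{P}_S(\mathcal{P})}(d-1) \big\rangle \otimes D^b(\operatorname{coh} S), \qquad \mathcal{A}_i = \big\langle \mathcal{O}_{\mathbb{P}_S(\mathcal{P})}, \ldots, \mathcal{O}_{\mathbb{P}_S(\mathcal{P})}(k-1) \big\rangle \otimes D^b(\operatorname{coh} S), \]

with i = ⌈rk P/d⌉ − 1 and k = rk P − d(⌈rk P/d⌉ − 1) as in the text. The local formulas in Theorem 5.2 should be those of Section 4.3: for d = 2 the Clifford relations

\[ v_i \cdot v_j + v_j \cdot v_i = \frac{\partial^2 w}{\partial x_i\, \partial x_j}, \]

and for d ≥ 3 the single non-trivial higher product

\[ \mu^d(v_{i_1}, \ldots, v_{i_d}) = c_d\, \frac{\partial^d w}{\partial x_{i_1} \cdots \partial x_{i_d}}\, u, \]

for a combinatorial constant c_d that we have not attempted to reconstruct from the extracted text.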
2014-09-19T09:33:33.000Z
2013-06-17T00:00:00.000
{ "year": 2013, "sha1": "0a317e080034d7afe7c025ee034291f5de1ca80b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1306.3957", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "0a317e080034d7afe7c025ee034291f5de1ca80b", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
119090337
pes2o/s2orc
v3-fos-license
Three dimensional resonating valence bond liquids and their excitations We show that there are two types of RVB liquid phases present in three-dimensional quantum dimer models, corresponding to the deconfining phases of U(1) and Z_2 gauge theories in d=3+1. The former is found on the bipartite cubic lattice and is the generalization of the critical point in the square lattice quantum dimer model found originally by Rokhsar and Kivelson. The latter exists on the non-bipartite face-centred cubic lattice and generalizes the RVB phase found earlier by us on the triangular lattice. We discuss the excitation spectrum and the nature of the ordering in both cases. Both phases exhibit gapped spinons. In the U(1) case we find a collective, linearly dispersing, transverse excitation, which is the photon of the low energy Maxwell Lagrangian and we identify the ordering as quantum order in Wen's sense. In the Z_2 case all collective excitations are gapped and, as in d=2, the low energy description of this topologically ordered state is the purely topological BF action. As a byproduct of this analysis, we unearth a further gapless excitation, the pi0n, in the square lattice quantum dimer model at its critical point. I. INTRODUCTION Fractionalised phases, i.e. phases where low energy excitations exhibit fractional quantum numbers, have played an important role in condensed matter physics over the past two decades. Probably the two most salient examples are provided by the quantum Hall effect, where the charge of a quasiparticles can be a rational fraction of the fundamental electronic charge, 1 and by Anderson's suggestion of spin-charge separation being at the root of high-temperature superconductivity. 2 This latter proposal has spawned a set of theories collectively known as resonating valence bond (RVB) theories. 3,4 In its short-range incarnation, 4 RVB theory uses dimers as its basic degree of freedom, which are cartoons for a pair of adjacent electrons forming a singlet ('valence') bond. At zero doping, when all sites are singly occupied, all electrons are involved in a singlet bond with a neighbour. The Hilbert space of this theory is thus given by the classical dimer coverings of the lattice. Resonance moves between dimer coverings provide a quantum dynamics for the dimers. The original hope was that the resulting quantum dimer model would have a liquid phase, the RVB liquid, hypothesized as an alternative to Neel order in antiferromagnets by Anderson and Fazekas in the early seventies. 5 This state would exhibit fractionalized S = 1/2 spinons and as a consequence holes doped into it would undergo spin-charge separation and the resulting charged spinless quasiparticles ('holons') would Bose condense to form a superconductor. However, it turns out that the square lattice quantum dimer model appropriate for the cuprates exhibits only solid phases with the exception of one critical point. 4,6,7,8,9 These properties are now wellunderstood within the framework provided by the height representation 10 for dimer coverings. 11,12 By contrast, for dimer models on non-bipartite lattices, the present authors have demonstrated that an RVB liquid phase is possible. 13 The different outcomes on bipartite and nonbipartite lattices can be rationalised, following the ideas of Fradkin and Shenker on lattice gauge theories, 14 via the absence (presence) of a deconfined phase in U (1) (Z 2 ) gauge theories in 2+1 dimensions. 
12,15 We note that the RVB phase on non-bipartite lattices is best characterized as a topologically ordered phase 16 whose low energy theory is the purely topological BF theory. 17 This understanding of the situation in 2+1 dimensions has led us, in the present work, to investigate the behaviour of quantum dimer models in d = 3+1, specifically on the bipartite cubic lattice and the non-bipartite facecentred cubic lattice. We find that both models exhibit RVB phases with deconfined spinons, which correspond to the Coulomb phase of the U (1) gauge theory (ordinary electromagnetism) and the deconfined phase of the Z 2 gauge theory, respectively. The FCC lattice RVB liquid is topologically ordered. It has a gap to all excitations and its ground state degeneracies and topological interactions between spinons and vortex loops are encoded in the topological 3+1 dimensional BF action. By contrast, the cubic lattice RVB phase supports a gapless, transverse collective mode. This can be identified as the photon of the low energy theory, which is now a non-topological Maxwell theory. Consequently, we identify the ordering of this phase as quantum order in Wen's sense, 18 reserving the term topological order for cases where the low energy theory is purely topological. As a byproduct of this investigation, we have also revisited the theory of the critical point of the square lattice dimer model where we find, in addition to the resonons which are the analogs of our photons in d = 2, a further gapless excitation, the pi0n, which signals the incipient crystalline order. We note that our main results were announced in Ref. 19. From the foregoing it follows that a deconfined phase in a valence bond dominated regime is easier to achieve in d = 3 than in d = 2, where its existence is ruled out for bipartite lattices. This encouraging fact is, however, counterbalanced by the difficulty of stabilising a valence-bond dominated phase in comparison to a Neel phase, which becomes increasingly hard as the coordination of the lattice grows. There is also the possibility of realizing dimer models in other contexts -e.g. the Santa Barbara group has shown that multiple dimer models arise when frustrated Ising magnets are imbued with a ring-exchange dynamics. Indeed in Ref. 20 the reader can find a parallel discussion of a Colulomb phase in multidimer models on the diamond and cubic lattices. Alternatively, dimer models can arise as Bose Mott insulators, electronic Mott insulators at fractional fillings 21 and in mixed-valence systems on frustrated lattices. 22 We would be remiss if we did not note that quite generically U (1) gauge theories of Heisenberg magnets should be expected to predict a Coulomb phase in d = 3. Specifically, a treatment of three-dimensional quantum magnets in the bosonic large-N theory 23,24 at a fixed (and small) number of bosons per flavour, supplemented by 1/N corrections yields results along the lines presented here for dimer models. The dimer models themselves can be obtained, at order 1/N , in the large-N limit taken with a fixed boson number, and hence a vanishing number of bosons per flavor. 25 It is a remarkable fact that these two very different limits yield the same physics. The equally remarkable fact that they also agree in d = 2 was established previously. It appears safe to conjecture that this agreement will hold in higher dimensions as well. Finally, we note that Wen has recently urged that the physical photon be viewed as the consequence of a quantum ordered vacuum. 
The models he has considered that give rise to an emergent photon have considerable resemblance to the cubic dimer model considered in this paper. Indeed, dimers and monomers have a natural string interpretation when superposed on a reference configuration.

In the following, we first briefly summarise the known properties of the quantum dimer model which are of relevance to us. We then describe the RVB liquids, first on the simple and second on the face-centred cubic lattice. In both cases, we discuss in turn correlations, topological properties and the excitation spectrum.

II. QUANTUM DIMER MODELS: GENERALITIES

On a general lattice, allowed moves between two different hardcore dimer configurations consist of interchanging alternately occupied and empty links forming a closed loop, known as a resonance loop. If we denote the presence (absence) of a dimer by an Ising link variable, σ^x = +1 (−1), such a resonance move is generated by the kinetic operator

\[ \hat T_\bullet \;=\; \prod_{j=1}^{n_l/2} \sigma^{+}_{2j-1}\,\sigma^{-}_{2j} \;+\; \mathrm{h.c.}, \]

where the loop, denoted pictorially by •, contains n_l links (labelled consecutively around the loop), and σ^± are the raising and lowering operators with respect to σ^x. A corresponding projection operator,

\[ \hat P_\bullet \;=\; \hat T_\bullet^{\,2}, \]

determines whether dimers are arranged on a given loop so as to be able to carry out a resonance move; such loops we call flippable loops. The quantum dimer Hamiltonian proposed by Rokhsar and Kivelson considers only the shortest possible resonance loops. It contains two terms, each involving one of these operators. The first carries out resonance moves with a kinetic energy t_•, and the second exacts an energy cost v_• for a flippable loop:37

\[ H_{\mathrm{QDM}} \;=\; \sum_{\bullet} \bigl( -t_\bullet\, \hat T_\bullet \;+\; v_\bullet\, \hat P_\bullet \bigr). \]

Some features are common to the phase diagrams of quantum dimer models on most lattices. We have depicted a generic phase diagram in Fig. 1. For v/t > 1, it is favourable not to have any flippable loops; configurations satisfying this requirement can usually be found, and the resulting phase is called the staggered phase, a name borrowed from the appearance of one of the inert square lattice configurations. For v/t = −∞, by contrast, one wants to maximise the number of flippable loops. In both cases, there are no quantum fluctuations. These phases are confining, and confinement typically exists for most values of v/t, with the possibility of transitions between different confining phases.

From the point of view of deconfined, liquid38 phases, the region v/t ≲ 1 is the most interesting, because the balance between the kinetic and potential terms overly favours neither high nor low densities of resonance loops. The ground state wavefunction can then be spread moderately uniformly over most of the classical configuration space; if the ensemble of classical configurations is sufficiently disordered, this can then lead to an RVB liquid phase.

This statement can be made precise at the Rokhsar-Kivelson (RK) point, v/t = 1. Define a sector as the set of dimer coverings connected by T_•. Then the equal-amplitude superposition of dimer coverings in each sector is a ground state at the RK point. Hence equal time correlations can be computed as correlations of the uniform fugacity classical dimer model on the same lattice.4 These can often be obtained analytically (exactly, especially in d ≤ 2, or at least asymptotically) or from Monte Carlo simulations. Consequently, advances in determining classical correlations give valuable information on the quantum model. Indeed, it was the solution of the classical dimer model on the triangular lattice with its short-ranged correlations that allowed the triangular RVB liquid to be discovered.13
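Since equal-time correlations at the RK point reduce to those of the uniform-fugacity classical dimer model, they can be sampled with a purely classical Monte Carlo. The sketch below does this for the square lattice with plaquette-flip updates; the lattice size, starting configuration and measured observable are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Plaquette-flip Monte Carlo for the classical square-lattice dimer model
# at uniform fugacity (all coverings equally weighted, so every proposed
# flip of a flippable plaquette is accepted).

L = 16  # even linear size, periodic boundary conditions
rng = np.random.default_rng(0)

# h[x, y] = 1 if a dimer occupies the horizontal link (x,y)-(x+1,y);
# v[x, y] = 1 if a dimer occupies the vertical   link (x,y)-(x,y+1).
h = np.zeros((L, L), dtype=int)
v = np.zeros((L, L), dtype=int)
h[::2, :] = 1  # columnar starting configuration: a valid dimer covering

def sweep(h, v):
    """One sweep of random plaquette flips. Plaquette flips preserve the
    winding numbers, so this samples within a fixed topological sector."""
    for _ in range(L * L):
        x, y = rng.integers(L, size=2)
        xp, yp = (x + 1) % L, (y + 1) % L
        if h[x, y] and h[x, yp]:        # two horizontal dimers -> vertical
            h[x, y] = h[x, yp] = 0
            v[x, y] = v[xp, y] = 1
        elif v[x, y] and v[xp, y]:      # two vertical dimers -> horizontal
            v[x, y] = v[xp, y] = 0
            h[x, y] = h[x, yp] = 1

def cxx(h, r):
    """Connected correlation of horizontal-dimer occupations at separation r."""
    return (h * np.roll(h, -r, axis=0)).mean() - h.mean() ** 2

for _ in range(1000):
    sweep(h, v)
print([round(cxx(h, r), 4) for r in range(1, 5)])
```

The same strategy, suitably generalised to the appropriate elementary resonance loops, applies on the cubic and FCC lattices.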
Recently, Huse, Krauth and the present authors have discussed the classical dimer correlations on the cubic and FCC lattices.19 In the next section we will build on this work to establish the presence and nature of RVB liquid phases on these two lattices.

III. CUBIC LATTICE: AN RVB U(1) LIQUID

The general structure of the phase diagram on the cubic lattice is indeed as sketched in Fig. 1. For v/t > 1, the ground states are the non-flippable ('staggered') configurations, such as the ones obtained by appropriately stacking the two-dimensional staggered configurations. For v/t large and negative, there are six maximally flippable columnar configurations, again obtained from stacking planar columnar configurations. Apart from these two crystalline phases, we have been able to establish the existence of a liquid, RVB phase for v/t ≲ 1. This is a "Coulomb" phase, characterised by algebraically decaying dimer ("magnetic field") correlations, and its spectrum contains a linearly dispersing transverse collective excitation ("photon") and a gapped topological defect ("electric monopole"), in addition to Coulombically interacting spinons ("magnetic charges"). We now sketch the derivation of these results.

A. RK Point

We begin with the properties of the RK point, v/t = 1. The classical dimer model on the cubic lattice was studied in Ref. 19. It was shown there that a useful parametrization of the dimer configurations is in the language of solenoidal magnetic fields. Briefly, on the bipartite cubic lattice, one can think of a dimer as a magnetic flux B of strength 5/6 = 1 − 1/z pointing from sublattice A to sublattice B; here, z is the coordination of the lattice. Identifying an unoccupied bond with flux −1/6 = −1/z, one has ∇ · B = 0: magnetic charges are excluded. Upon local coarse-graining, analogous to what is done for models with height representations in two dimensions, long wavelength configurations acquire an entropic weight quadratic in B, and this allows computation of the long distance correlations of the classical model. These are purely dipolar,

\[ \langle \delta B_i(\mathbf 0)\, \delta B_j(\mathbf r) \rangle \;\sim\; \frac{1}{4\pi K}\, \frac{3\hat r_i \hat r_j - \delta_{ij}}{r^3}, \qquad (3.1) \]

where K can be determined via Monte Carlo simulations.19

The language of magnetic fields also allows a characterization of the sectors. For a cube with periodic boundary conditions, the flux through any surface that wraps around it is invariant under local dimer moves and is invariant under lattice translations of the surface. In particular, if we let Σ_i be planes perpendicular to the cubic unit vectors e_i, the fluxes Φ^B_i through them are the invariants that characterize a given sector of the dimer model. This flux is also known as a U(1) winding number.39 For an L³ cube these range in magnitude from 0 to L²/2. The form of the correlations given above is the average over all sectors, but also holds individually for all sectors with small flux, Φ_B = O(√L), which dominate the sum at large L. Finally, we note that the "staggered" configurations maximize Φ_B or, more accurately, the average magnetic field strength.

At the RK point, equal amplitude superpositions in all magnetic flux sectors constitute degenerate ground states. As we will see below, for v/t ≲ 1 this degeneracy is lifted and only the sectors with Φ_B ∼ o(√L) will be important in the low energy analysis, so we will not need more classical information than given in Eq. 3.1 above. For this reason it is sufficient to focus on the zero flux sector at the RK point for now.
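As a check on the flux assignment introduced above: each site touches exactly one dimer and z − 1 empty links, so the net outgoing flux at any site of sublattice A is

\[ \Bigl(1-\tfrac{1}{z}\Bigr) \;+\; (z-1)\Bigl(-\tfrac{1}{z}\Bigr) \;=\; 0 . \]

The hard-core dimer constraint is thus exactly the lattice Gauss law ∇ · B = 0; on the cubic lattice, z = 6 gives the fluxes 5/6 and −1/6 quoted above.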
As the exact ground state wavefunction is known, we can use the single mode approximation (SMA) for dimer models, pioneered by RK in their original paper,4 to get a good description of the excitation spectrum. We remind the reader that the SMA will provide a rigorous upper bound on the excited state energy at each momentum. As RK's explanation was rather compressed and their treatment incomplete in one important respect in the d = 2 case, we will provide a fuller account between this section and the appendices.

Single Mode Approximation: Let |0⟩ denote the ground state of the dimer model under consideration. As above, we define σ^x_τ(r) as the Pauli spin operator whose eigenvalues ±1 correspond to the presence and absence of a dimer on the link at location r. Here, we have added the 'direction' vector τ̂ for clarity, to distinguish between dimers pointing in the different possible directions. Next, we define the Fourier transform of the dimer density operator,

\[ \tilde\sigma^x_{\tau}(\mathbf q) \;=\; \sum_{\mathbf r} e^{i \mathbf q \cdot \mathbf r}\, \sigma^x_{\tau}(\mathbf r). \]

Acting with this operator on the ground state yields a state with nonzero momentum q,

\[ |\mathbf q, \tau\rangle \;=\; \tilde\sigma^x_{\tau}(\mathbf q)\, |0\rangle, \]

so that

\[ \langle 0 | \mathbf q, \tau \rangle \;=\; 0. \qquad (3.4) \]

Provided that

\[ \langle \mathbf q, \tau | \mathbf q, \tau \rangle \;\neq\; 0, \qquad (3.5) \]

this excitation has a variational energy of

\[ E(\mathbf q, \tau) \;\le\; \frac{\langle \mathbf q, \tau | H_{\mathrm{QDM}} | \mathbf q, \tau \rangle}{\langle \mathbf q, \tau | \mathbf q, \tau \rangle} \;=\; \frac{f(\mathbf q)}{s(\mathbf q)}, \]

where the numerator and denominator in the final line are the oscillator strength, f(q), and the structure factor, s(q). These can be evaluated as ground state expectation values. If it now so happens that the density σ̃^x_τ(q₀) is a conserved quantity, then [H_QDM, σ̃^x_τ(q₀)] = 0, and thence E(q₀, τ) = 0. From the orthogonality relation (Eq. 3.4) it then follows that this is a variational upper bound on the energy of the 'excited' state |q₀, τ⟩. Similarly, the behaviour of f(q) near q₀ can be used to determine a bound on the dispersion of the soft excitations. Further, and again only if Eq. 3.5 holds, a smooth behaviour of f(q) while s(q) diverges can also be used to infer gapless excitations. Indeed, this is the classic signature of incipient order.

RK identified a conserved quantity for the square lattice, which generalises to the cubic case as the density of dimers pointing in a given direction (τ̂ = x̂, say) at wavevector q₀ = (q_x, π, π). This follows from the fact that the quantum dynamics always creates and destroys pairs of neighbouring dimers. This implies that the oscillator strength, f(q₀), vanishes. In fact, at a wavevector q₀ + k, one finds

\[ f(\mathbf q_0 + \mathbf k) \;\propto\; (\mathbf k \times \hat\tau)^2, \]

as outlined in Appendix A. This form holds for all values of −∞ < v/t ≤ 1. This, however, does not lead to lines of zero energy excitations, because the structure factor also vanishes for all q₀ with q_x ≠ π: Eq. 3.5 is not satisfied. In particular, at the RK point, for momentum q = (π, π, π) + k, it has the form of a transverse projector,

\[ s_{\tau\tau'}(\mathbf q) \;\propto\; \delta_{\tau\tau'} - \hat k_\tau \hat k_{\tau'}, \]

as one can see by Fourier transforming Eq. 3.1. This implies that only transverse excitations are generated by σ̃^x_τ, a fact that traces back to the absence of monopoles, ∇ · B = 0. The transverse nature of such excitations has already been noted by Hastings.28 It is important to note that these are the only gapless excitations for the cubic lattice. By contrast, the square lattice also exhibits gapless excitations near (π, 0) and (0, π), where there is a divergence of, respectively, s_x̂ and s_ŷ, signalling the immediate proximity of a crystalline phase. This is consistent with the identification of the RK point as a critical point terminating a crystalline phase. These excitations, which we have christened pi0ns, are discussed further in Appendix B.
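Combining the two asymptotic forms just given makes the SMA bound explicit at the RK point: for the transverse polarisation,

\[ E(\mathbf q_0 + \mathbf k) \;\le\; \frac{f}{s} \;\sim\; \frac{(\mathbf k \times \hat\tau)^2}{(\mathbf k \times \hat\tau)^2 / k^2} \;=\; k^2 , \]

i.e. a quadratically soft transverse mode at the RK point, consistent with the dispersion ω = √ρ₄ k² obtained from the effective action of the next subsection when ρ₂ = 0.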
The absence of pi0ns in d = 3 signals that the RK point does not abut a crystalline phase but instead terminates an RVB phase. We will use this 'soft pi0n theorem' again in the analysis of the FCC lattice.

B. Effective field theory and RVB phase

The above RK point results can now be used to constrain a long wavelength field theory valid in the proximity of this point. From it, we will be able to deduce the existence and properties of the neighboring RVB phase. To this end we follow Ref. 19 and solve the constraint ∇ · B = 0 by writing B = ∇ × A, where A lives on the dual cubic lattice. As this representation comes with a local gauge invariance, we will pick the gauge ∇ · A = 0 to have a faithful representation of the dimer states.27 If we now think of the time evolution generated by the kinetic energy, this will involve flipping plaquettes. It is straightforward to see that such a flip represents an (essentially) spatially local change in A. This fact and the RK point properties allow us to write down the action

\[ S \;=\; \int d\tau\, d^3x \left[ \tfrac12 (\partial_\tau \mathbf A)^2 + \tfrac{\rho_2}{2} (\nabla \times \mathbf A)^2 + \tfrac{\rho_4}{2} \bigl( \nabla \times (\nabla \times \mathbf A) \bigr)^2 \right]. \qquad (3.9) \]

We can recognize Eq. 3.9 as the restriction to ∇ · A = 0 and A₀ = 0 of the manifestly space-time gauge invariant action

\[ S \;=\; \int d\tau\, d^3x \left[ \tfrac12 \mathbf E^2 + \tfrac{\rho_2}{2} \mathbf B^2 + \tfrac{\rho_4}{2} (\nabla \times \mathbf B)^2 \right], \qquad (3.10) \]

where E = ∂_t A − ∇A₀ is the electric field and B = ∇ × A as before. For ρ₂ = 0, this action reproduces the equal time (classical) dimer correlators at the RK point. It also yields a transverse photon with ω ∼ √ρ₄ k², in agreement with the SMA dispersion. Further, we see that only gradients of B enter the Hamiltonian, so that all flux sectors will yield degenerate ground states, as is the case in the exact solution. Finally, for ρ₂ < 0 the system will be driven to maximize the average value of |B|, which is the case microscopically for v/t > 1. All of this confirms that Eq. 3.10 is the correct long wavelength field theory, with ρ₂ changing sign exactly at the RK point. Readers familiar with this set of problems will recognize that this is in precise correspondence with the treatment of the square lattice dimer model,11,12 with the difference that in d = 2 one solves the constraint by writing B = ∇ × h in terms of the scalar height function.29 Unlike in d = 2, where the discreteness of the height field becomes relevant when ρ₂ > 0, in the present problem there are no relevant operators beyond those listed in Eq. 3.10 when ρ₂ ≥ 0. Hence for v/t < 1 the action is now that of the standard Maxwell theory and gives rise to a linearly dispersing transverse photon, ω ∼ √ρ₂ k, at long wavelengths.

Other excitations: The (gapped) spinons are represented by monomers, which lead to a local divergence ∇ · B = ±1 on sublattice A/B. Hence they carry a magnetic charge. As the Maxwell action is invariant under the duality E ↔ B, we could also recast the spinons as electrically charged particles. Either way, they interact via an inverse square force law and are therefore deconfined. At the RK point the vanishing of ρ₂ results in the force vanishing altogether.

In addition to the photon, there is a topological defect of interest in the model with dimers alone. This is the electric monopole, being the magnetic monopole that is present in lattice U(1) gauge theories in our dual labelling of the fields. To see how it is constructed, consider solving the Gauss's law constraint ∇ · E = 0 that arises in the passage from (3.10) by writing E = ∇ × P. As E lives on the dual lattice, P lives again on the direct lattice. [In quantizing the theory we take P and B to be the conjugate variables on a link.]
Following the standard construction of the monopole, one can pick a classical P such that E has the form r̂/r² at long distances from a specified point of the dual lattice, while a Dirac string of plaquettes ensures that ∇ · E = 0 holds everywhere.30 The monopole state then can be written down in the dimer representation as

\[ |M\rangle \;=\; \exp\Bigl( i \sum_{l} P_l\, \hat B_l \Bigr)\, |0\rangle , \]

where |0⟩ is again the ground state and the sum runs over links, with B̂_l the dimer flux operator conjugate to P_l. Requiring that the resonance energy cost along the string does not diverge with system size quantizes the monopole flux. The monopole generalizes the construction of the vortex at the RK point in d = 2,31 and is gapped, as befits a deconfined phase. It was also noted in that analysis that the binding of vortices to spinons could alter their statistics.32 Likewise, in d = 3 the binding of electric monopoles to bosonic spinons will lead to fermionic dyons. The monopoles also interact via an inverse square force.

C. Fractionalization and quantum order

RVB liquid phases are fractionalized in that they support S = 1/2 spinons. In cases where there are no gapless excitations in the spectrum, e.g. the RVB phase on the triangular lattice and the quantum Hall liquids, the fractionalization is of a piece with a ground state degeneracy sensitive to ground state topology and a topological interaction between various gapped excitations, a complex of properties named topological order by Wen.16 In its pristine form, topological order is unaccompanied by any symmetry breaking. Most succinctly, the low energy theory for topologically ordered systems is a topological field theory: the Chern-Simons theory for the quantum Hall states16 and the BF theory for the Z_2 RVB state.17 We will see in the next section that the FCC RVB phase is indeed topologically ordered in this sense.

However, the situation with regard to the U(1) phase discussed in this section is different. The low energy theory is the Maxwell theory, which is not topological. Concomitantly, the gapless photon makes the distinction between ground states and excited states fuzzy. While at the RK point there is an exact degeneracy between different magnetic flux sectors, elsewhere in the RVB phase this degeneracy is lifted by the B² term in the action, the coefficient ρ₂ of which no longer vanishes. The splitting thus generated is L³ × Φ_B²/L⁴ = Φ_B²/L, which vanishes, but only algebraically, in the thermodynamic limit. For Φ_B of O(1), the splitting has the same magnitude as the excitation energy of a photon; this is no coincidence if one thinks of a photon as imposing a flux modulated at the photon's wavelength, so that locally a sector with a photon present looks as if it belonged to a sector with a slightly different Φ_B. As a matter of principle, one can identify the minimum energy states in the nonzero Φ_B sectors as degenerate ground states, while reserving the term excitation for excited states in a given sector. Indeed, at this level the symmetry between E and B implies that there is another set of states with a net electric flux that also have an energy of O(1/L) and should be identified as members of the ground state multiplet. These are the analogs, in d = 3, of the height shift mode discussed by Henley for the RK point;11 in d = 2 the time derivative of h is the electric field. Nevertheless, such degeneracies would be hard to detect against the background of excitations with similar energies, and so their utility appears to be limited.
Consequently, we follow Wen in identifying the U(1) RVB liquid as an instance of "quantum order",18 wherein the masslessness of the photon is attributed to the rigidity of the low energy gauge structure.

IV. FACE-CENTRED CUBIC LATTICE: AN RVB Z2 LIQUID

The behavior of the FCC lattice quantum dimer model parallels that of the triangular lattice problem in d = 2. Like the triangular lattice, this non-bipartite lattice does not admit a representation by solenoidal fields. Instead, it exhibits topological sectors, not connected under a local dynamics, that are labelled by an Ising variable for each periodic direction on the lattice. This we call a winding parity, to distinguish it from the U(1) winding numbers Φ_B. The winding parity is defined in an analogous manner, by counting whether the number of dimers cutting the planes Σ_i is even or odd. To the right of the RK point, there exists the standard non-flippable phase (Fig. 1); a non-flippable configuration can again be obtained by appropriately stacking staggered square lattice configurations. The identification of the phase for large negative v/t remains an open problem.

A. RK Point and RVB Phase

For the RK point on the 3-torus there are 8 exactly degenerate ground states, which exhibit exactly the same equal time correlations in the thermodynamic limit. More generally, there are 2^p degenerate states for a system with p independent non-contractible loops. The classical analysis in Ref. 19 has shown that the correlations decay exponentially to zero, with an extremely short correlation length of ξ = 0.4 nearest neighbour distances.

Turning now to collective excitations within the SMA, we note that there are conserved quantities: the number of dimers in the xy planes at wavevector q₀ = (0, 0, π), and its symmetry related counterparts. Nevertheless, the structure factor vanishes as well at these wavevectors, quadratically, as required by the analyticity in momentum space that is guaranteed by the short range of the correlations in real space. To see this, consider grouping all the sites of a finite FCC lattice with periodic boundary conditions in the z-direction according to their z-coordinate. Let N_z be the number of sites with a given z-coordinate. Now, for a given dimer configuration, define the number of dimers linking a pair of sites with neighbouring z-coordinates by Δ_{z,z+1}. Then the number of dimers in the xy plane, n, at coordinate z is simply

\[ 2 n(z) \;=\; N_z \;-\; \Delta_{z,z+1} \;-\; \Delta_{z-1,z} . \]

Therefore, Fourier transforming in z (the term involving N_z vanishes for q_z ≠ 0, since N_z is independent of z),

\[ 2 \tilde n(q_z) \;=\; \tilde N(q_z) \;-\; \bigl( 1 + e^{i q_z} \bigr) \sum_z e^{i q_z z}\, \Delta_{z,z+1}, \]

so that, for any k_z = q_z − π ≠ 0,

\[ 1 + e^{i q_z} \;=\; 1 - e^{i k_z} \;\approx\; -i k_z , \]

where the last term in parentheses was expanded in k_z; at q_z = π it vanishes linearly, yielding a quadratically vanishing structure factor.

From the absence of any significant crystalline correlations or gapless excitations at the RK point (no soft pi0ns), it is safe to conclude that for v/t < 1 there is a finite RVB phase with liquid dimer correlations, an 8-fold ground state degeneracy on a 3-torus and a gap to all excitations. Indeed, the first and third pieces are mutually consistent. The vanishing of the oscillator strength at q₀ holds everywhere, while the structure factor is also zero exactly at q₀. If the correlations remain liquid, the structure factor will vanish analytically at q₀ and hence the gap will persist. Evidently, matters will be different when a solid phase is reached.

B. Topological order

The excitations of the FCC RVB phase include the ubiquitous spinons and gapped vortex (vison) loops.
The topological interaction between them is described by the 3+1 dimensional BF action, which encodes the phase factor of e^{iπn_l} for a spinon trajectory that links n_l times with the vortex loop.17 Quantization of the BF action on closed manifolds recovers the ground state degeneracy discussed above, and the tunneling of the spinons and vortex lines lifts the ground state degeneracy by amounts of O(e^{−L}) and O(e^{−L²}), respectively, for a system of linear dimension L.

V. SUMMARY

In this paper, we have established the existence of two types of RVB liquids in d = 3, which occur on the simple and face-centred cubic lattices. As in d = 2, the possible RVB phases in quantum dimer models correspond to the known deconfined phases in compact gauge theories and exhibit the same topological scaling limits. In addition to their direct physical interest, they allow attractively simple illustrations of concepts such as topological and quantum order. Remarkably, the program outlined by Rokhsar and Kivelson in their quest for the square lattice RVB liquid4 can be carried through for the simple cubic lattice in toto. The ability to perform calculations in a classical framework at the Rokhsar-Kivelson point to access properties of both ground and excited states has been invaluable for our study. Finally, it will be interesting to see if such phases can be realised, for example by destabilising Heisenberg Neel states on both lattices using frustrating exchange interactions.

APPENDIX A

Generalising the original square lattice analysis, in which resonons at Q = (π, π) were considered,4 we write q = Q + k; more generally, 2Q needs to be a reciprocal lattice vector. The dimer density operator commutes with the potential term of H_QDM, so that we only consider its commutators with the kinetic term. All that needs to be done here is to use the commutation relation

\[ [\sigma^x_l, \sigma^{\pm}_{l'}] \;=\; \pm 2\, \delta_{l l'}\, \sigma^{\pm}_{l} \]

repeatedly. This form implies that the result in an isotropic phase will always be the kinetic energy operator (and, at the RK point, the expectation value of the loop flippability) times a function of q, the details of which we are interested in. The contribution of a given square plaquette with dimers in the τ̂ directions at locations R and R′ = R + r is given, up to constant factors, by

\[ e^{2 i \mathbf Q \cdot \mathbf R}\, \Bigl[ 1 + \cos\bigl( (\mathbf Q + \mathbf k) \cdot \mathbf r \bigr) \Bigr]\, \hat T_\bullet \, . \]

Summing this expression over all plaquettes forces 2Q to be a reciprocal lattice vector, and yields the expression proportional to the kinetic energy operator. The dependence on k and Q now follows straightforwardly. For example, for the case of resonons on the square lattice, Q = (π, π), and r = ±ŷ for τ̂ = x̂. Thence, f(k) ∝ 1 − cos k_y ∼ k_y² ∼ (k × τ̂)², the original result of Rokhsar and Kivelson.4 This form also obtains for Q = (0, π), whereas near Q = (π, 0), f(k) ∝ cos²(k_y/2) is nonzero everywhere.

The same analysis carries over directly to other lattices. On the cubic lattice, there are two transverse directions, and f(k) ∼ (k × τ̂)² continues to hold near Q = (π, π, π). For the hexagonal case, where an elementary resonance move involves a triplet of dimers, one finds the resonon to occur near zero wavevector, Q = 0, with the same form of f(k) ∼ (k × τ̂)². On the triangular lattice, the same algebra yields a qualitatively different result. If we consider the triangular lattice as a square lattice decorated with diagonal bonds pointing from the bottom left to the top right corner of each square, the transverse projector form for Q = (0, π) is replaced by f(k) ∼ k_y² + (k_y + k_x)², which vanishes quadratically as k → 0 but is nonzero in all directions away from k = 0.
Finally, for the FCC lattice, where the conservation law involves two different dimer directions, the algebra is more complicated, but one again finds a vanishing f(k) as the wavevector approaches Q = (0, 0, π).

APPENDIX B

On the square lattice, the density of x̂-dimers at wavevector q₀ = (q_x, π) commutes with the Hamiltonian for any value of q_x. Thus, f(q₀) vanishes. As in the cubic lattice, this does not lead to an entire line of zero-energy excitations along the Brillouin zone edge. Rather, the resonon only exists as a transverse excitation near (π, π). To demonstrate this, we have explicitly determined the structure factor for σ̃^x_τ(q) and τ̂ = x̂. The calculation of the structure factor for two-dimensional models at the RK point is straightforwardly achieved using the Fermionic path integrals developed for the study of classical dimer models. A detailed calculation of correlations for the triangular lattice has been presented in Refs. 33,34. The result is plotted in Fig. 2. The structure factor vanishes everywhere on the line (q_x, π) except at Q = (π, π). This implies that ⟨q,τ|q,τ⟩ = 0 on this line away from (π, π), so that it is not a candidate for a gapless excitation. In fact, the form of the structure factor near Q is well described by the transverse projector

\[ s_{\hat x}(\mathbf q) \;\propto\; \frac{k_y^2}{k_x^2 + k_y^2} , \]

where q = Q + k. Together with4 f(k) ∼ 1 − cos k_y ∼ k_y² = (k × τ̂)², this implies that there is only a single gapless excitation per dimer direction, namely at Q = (π, π), which is of a transverse nature.

In Fig. 2 a peak in s_x̂(q) is visible at q₁ = (π, 0). Its scaling with system size can be determined easily, since the long-distance part of the correlations of the classical dimer model in real space is known exactly:11,35

\[ C_{\hat x \hat x}(\mathbf r) \;\sim\; (-1)^{x+y}\, \frac{y^2 - x^2}{r^4} \;+\; (-1)^{x}\, \frac{1}{r^2} . \]

This form implies that the peak height grows logarithmically with system size and, equivalently, that its height a distance k = |q − q₁| from the centre decays (isotropically) as −log k. Near q₁ = (π, 0), the oscillator strength depends on k simply as f(k) ∼ cos²(k_y/2), which is a constant to leading order. Therefore, E(q₁, x̂) = 0, and there exist further gapless excitations for each dimer direction, which, due to their location in reciprocal space, we call pi0ns. The dispersion obtained within the SMA is logarithmic, but more on that below.

We mention in passing that the same analysis carries over to the hexagonal lattice, where the resonon exists at Q = (0, 0) and the now inappropriately named pi0n at (4π/3, 0). This similarity is efficiently encoded in the language of two-dimensional height models, analysed in detail by Henley.11 Both resonon and pi0n can be connected back to slow modes in the height model at zero wavevector, the connection to the dimer modes being via the gradient and vertex operators, respectively. Indeed, from the height analysis one learns36 that the structure factor at q₁ + k decays in imaginary time as e^{−k√τ}, which confirms that there are gapless excitations near q₁ but also shows that the SMA does not do a very good job of constructing them.

Absence of gapless modes on the triangular lattice

The argument that the SMA mode on the triangular lattice is gapped at all momenta, despite the presence of a conservation law, is completely analogous to the one presented above for the FCC lattice. It again involves showing that the corresponding structure factor vanishes at least quadratically. The relevant momentum for dimers pointing in the x̂ direction is Q = (0, π). A direct computation of the structure factor using the results of Ref. 33 yields the same result. This SMA result was already noted in Ref. 13.
37 [...] these issues further and set t_• > 0 uniformly.
38 In the lattices discussed in this paper the two terms are equivalent. On more complicated lattices, such as the Fisher lattice discussed in [R. Moessner and S. L. Sondhi, cond-mat/0212363], confining phases do not break lattice symmetries and hence are indistinguishable from liquids from the viewpoint of translational order.
39 The definition provided here works for a system with even side lengths. More generally, one has to resort to the slightly more involved transition graph construction described in Ref. 4.
Impact of PReOperative Midazolam on OuTcome of Elderly patients (I-PROMOTE): study protocol for a multicentre randomised controlled trial

Introduction: Premedication of surgical patients with benzodiazepines has become questionable regarding its risk-benefit ratio and the lack of evidence. Though preoperative benzodiazepines might alleviate preoperative anxiety, a higher risk for adverse events is described, particularly for elderly patients (≥ 65 years). Several German hospitals already withhold benzodiazepine premedication from elderly patients, though evidence for this approach is lacking. The patient-centred outcome known as global postoperative patient satisfaction is recognised as a substantial quality indicator of anaesthesia care, as incorporated by the American Society of Anesthesiologists. Therefore, we aim to assess whether postoperative patient satisfaction differs after premedication with placebo compared with the preoperative administration of 3.75 mg midazolam in elderly patients.

Methods: This study is a multicentre, randomised, placebo-controlled, double-blinded, two-arm parallel, interventional trial conducted in nine German hospitals. In total, 614 patients (65-80 years of age) undergoing elective surgery with general anaesthesia will be randomised to receive either 3.75 mg midazolam or placebo. The primary outcome (global patient satisfaction) will be assessed with the validated EVAN-G questionnaire on the first postoperative day. Secondary outcomes will be assessed until the first postoperative day and then 30 days after surgery. They comprise, among other things: functional and cognitive recovery, postoperative delirium, health-related quality of life assessment, and mortality or new onset of serious cardiac or pulmonary complications, acute stroke, or acute kidney injury. Analysis will adhere to the intention-to-treat principle. The primary outcome will be analysed with the use of mixed linear models including treatment effect and study centre as factors and random effects for blocks. Exploratory adjusted and subgroup analyses of the primary and secondary outcomes with regard to gender effects, frailty, preoperative anxiety level, patient demographics, and surgery experience will also be performed.

Discussion: This is, to the best of our knowledge, the first study analysing patient satisfaction after premedication with midazolam in elderly patients. In conclusion, this study will provide high-quality data for the decision-making process regarding premedication in elderly surgical patients.

Trial registration: ClinicalTrials.gov, NCT03052660. Registered on 14 February 2017. EudraCT 2016-004555-79.

Electronic supplementary material: The online version of this article (10.1186/s13063-019-3512-3) contains supplementary material, which is available to authorized users.

Exclusion criteria (continued):
14. Expected benzodiazepine requirement after surgery
15. Expected continuous mandatory ventilation after surgery
16. Patients who explicitly request anxiolytic premedication
17. Patients with severe neurological or psychiatric disorders
18. Refusal of study participation by the patient
19. Parallel participation in interventional clinical studies within the previous 30 days

Recruitment: Patients will be recruited consecutively during the preoperative anaesthesia consultation in the clinical routine by an investigator, with the support of the attending anaesthetists. Each participating centre will recruit as many patients as possible.
The time-point of informed consent will be documented, to enable verification of the patient recruitment and randomisation sequence, in order to prevent selection bias. All screened patients (including the screening failures and enrolled patients) will be documented in a screening/enrolment log. Strategies to enhance recruitment rates will include newsletters and telephone calls on a regular basis. Furthermore, the publication policy will further motivate the participating centres, as authorship will depend on the number of enrolled and completely documented patients.

Allocation: To guarantee adequate sequence generation and allocation concealment, randomisation will be carried out computer-based by the Department of Medical Informatics, RWTH Aachen University Hospital. A randomisation stratified by study centre will be implemented. Sequences will be generated using a 1:1 ratio of the treatment arms and a permuted block randomisation (a schematic sketch of this scheme is given at the end of this section). The block sizes will be concealed from the investigators. A unique randomisation number will be assigned to each randomised patient. The allocation sequence list will be provided only to the pharmacy, directly by the biostatistician. The Department of Pharmacy, University Medical Center Johannes Gutenberg-University Mainz, Germany, will provide sealed, opaque containers containing the assigned treatment to each centre. These containers will be labelled with the ascending unique randomisation number. For emergency un-blinding, all centres will receive from the Pharmacy opaque, sealed emergency envelopes including the information about the assigned treatment. The investigators are obliged to take the next consecutive container with the ascending randomisation number at visit 1 (see below and Fig. 1). The container will be handed out to the independent nurse who is responsible for the patient (see the description of the intervention below).

Intervention: Patients meeting all inclusion criteria and none of the exclusion criteria will be randomly assigned to receive an oral premedication with either 3.75 mg midazolam or placebo. Premedication will be administered once, 30-45 minutes before the estimated surgery time-point, as recommended in the summary of product characteristics for midazolam and as usually performed in the participating sites. The investigational products are encapsulated and packed into single small, opaque, sealed and relabelled containers by the Department of Pharmacy, University Medical Center Johannes Gutenberg-University Mainz, Germany, according to the MHRA (Medicines and Healthcare products Regulatory Agency). Study investigators will have to take the next consecutive container and note the patient identification number in a prescribed space on its label. Thereafter, he/she will hand out the respective container to an independent nurse, who is responsible for the patient but not involved in the study. The principal investigator (PI) will inform the complete ward staff of the different units in the hospital about the performance of this study before initiation of the study. The responsible nurse will be informed in addition each time a patient is enrolled. The nurse will be advised to hand out the respective container to the patient face to face, as usually done in the clinical routine. The only difference is that the container contains a capsule, whereas in the clinical routine the patients would receive a tablet. A specific training for this procedure is not necessary. The patients have to take the medication with a small sip of water.
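Returning to the allocation procedure above, the centre-stratified permuted-block scheme can be sketched as follows. The block sizes, centre count, list length and seed are illustrative assumptions; the real block sizes are concealed from the investigators.

```python
import numpy as np

rng = np.random.default_rng(2017)  # illustrative seed

def randomisation_list(n, block_sizes=(4, 6)):
    """1:1 allocation to midazolam/placebo in randomly chosen permuted
    blocks; one such list is generated independently per study centre."""
    arms = []
    while len(arms) < n:
        b = int(rng.choice(block_sizes))              # even block size
        block = ["midazolam"] * (b // 2) + ["placebo"] * (b // 2)
        rng.shuffle(block)                            # permute within block
        arms.extend(block)
    return arms[:n]  # truncating the last block may leave it incomplete

# One concealed list per centre, keyed by the 3-digit centre code:
lists = {f"{c:03d}": randomisation_list(70) for c in range(1, 10)}
```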
The location of intervention intake will be either the standard care ward of the patient or the patient preparation room, depending on the standard operating procedure (SOP) of the respective participating site.

Interventions-adherence: Intervention adherence will be assessed by storage of the empty container for each patient by the respective nurse. The monitoring team will check the entered patient identification number and the randomisation number on the container and cross-check them against the enrolment sequence.

Interventions-modifications: In accordance with the requirement of our Ethics Committee, patients with apparent or verbally expressed anxiety might receive additional midazolam intravenously (i.v.) when entering the surgery area, according to the clinical routine (study- and group-independent). This midazolam will be applied carefully titrated (in 0.5 mg increments) i.v. by the attending anaesthetist under monitoring of the patient's vital data, according to the SOP of the respective department. Additional i.v. "Rescue" midazolam will be noted in the patient's file. These patients will be retained in the study and followed up to prevent missing data, according to the intention-to-treat (ITT) principle. Of note, the preoperative anxiety level, which is measured at operating room admission, will be recorded before administration of this "Rescue" midazolam.

Interventions-concomitant care: After patient inclusion, the entire ward staff will be informed, and it will be noted in the patient files that the patient should not receive any benzodiazepine in the clinical routine before surgery, unless indispensable. Other medications may be provided as usual in routine care. Anaesthetic and surgical management will be performed according to the clinical routine, without any study-specific restrictions.

Outcomes

Primary outcome measure: Global patient satisfaction will be evaluated with the self-report EVAN-G questionnaire [22] on the first postoperative day, at visit 4 (see Fig. 1). The EVAN-G is a validated questionnaire comprising 26 items within six dimensions (attention, information, privacy, pain, discomfort and waiting), which is used to assess perioperative patient satisfaction within the first 48 h after surgery.

Secondary outcome measures:
· Assessment of preoperative frailty within our patient population and adjusted subgroup analysis of the primary outcome depending on the patient's frailty. Frailty assessment will be performed according to Oresanya et al. [23]. This includes, in addition to the assessment of the medical history and laboratory values, the history of falls, the Mini-Cog test [24] and the timed "Up & Go" test [25].
· Analysis of the relationship of preoperative frailty and the other assessed postoperative outcomes.
· Assessment of the impact of premedication on the patients' functional and cognitive recovery (difference in proportion of patients). Functional ability will be assessed by the Instrumental Activities of Daily Living (IADL) scale [26] (recovery is defined as the change between baseline and day 30 after surgery). Cognitive status will be assessed by the Short Blessed Test (SBT) [27] (recovery is defined as the change between baseline and day 1 and day 30 after surgery). The SBT was chosen for the cognitive assessment, as it can also be applied by phone on postoperative day 30.
· Assessment of the impact of premedication on POD (difference in proportion of patients).
Delirium will be assessed by the Confusion Assessment Method (CAM) [28], or the CAM-ICU for patients on the intensive care unit [29]. Delirium will be assessed at baseline and on the first postoperative day.
· Assessment of the impact of premedication on the perioperative condition of well-being, pain and sleeping. These outcomes will be assessed by the Visual Analogue Scale (VAS, values 0-100, with 100 corresponding to best well-being, worst pain and best sleeping). These data will be assessed at baseline, in the operating room, 0.5-1.5 hours after surgery, and on the first postoperative day.
· Assessment of the impact of premedication on patient cooperation directly preoperatively. Patient cooperation will be rated by the attending anaesthetist (via VAS, with 100 corresponding to the best cooperation).
· Assessment of the impact of premedication on the patients' anxiety at arrival in the operating room (rated via VAS by the patient, with 100 corresponding to the strongest anxiety). A cut-off value of 72 mm will indicate high anxiety [30].
· Assessment of the difference in the proportion of patients with rescue midazolam application before surgery.
· Assessment of the difference in the proportion of patients with adverse vital data values upon arrival in the operating room, after extubation and 0.5-1.5 hours later.
· Assessment of the difference in time to extubation depending on the premedication. The attending anaesthetist will measure this time from cessation of the anaesthesia until extubation.
· Assessment of the difference between the groups regarding the change in health-related quality of life from baseline until postoperative day 30. This outcome will be measured by the EQ-5D-5L [31].
· Difference between the two groups in the proportion of the longer-term outcomes mortality or the new onset of serious cardiac or pulmonary complications, acute stroke, or acute kidney injury within 30 postoperative days. Outcomes will be defined according to the following definitions:
1. Serious cardiac complication. Cardiac arrest: the absence of cardiac rhythm or presence of pulseless electrical activity requiring the initiation of CPR, which includes chest compressions. Myocardial infarction: electrocardiography changes, new elevation in troponin, or physician diagnosis; signs of myocardial infarction in the autopsy.
2. Serious pulmonary complication. Pneumonia: clinical or radiological diagnosis. Pulmonary embolism: radiological diagnosis; signs of pneumonia or pulmonary embolism in the autopsy.
3. Acute stroke. Defined as a new focal or generalised neurological deficit of >24 h duration in motor, sensory, or coordination functions, with compatible brain imaging and confirmed by a neurologist. Transient ischaemic attack is not considered as acute stroke. Signs of stroke in the autopsy.
4. Acute kidney injury. Defined according to the AKIN classification [32] as AKI stage ≥2. This means an increase of creatinine to >2-3x baseline within the hospital stay, or urine output of less than 0.5 ml/kg per hour for more than 12 hours, or signs of acute kidney injury in the autopsy.
After hospital discharge, events will only be defined as present if they led to hospital re-admission or death.
· Adjusted subgroup analysis of the primary outcome depending on the preoperative baseline anxiety level, the patient demographics and surgery experience of the patients, and gender effects.
Baseline anxiety will be assessed preoperatively by the self-reported German version of the Amsterdam Preoperative Anxiety and Information Scale (APAIS) questionnaire [33]. Patients at or above the cut-off value of 12 will be considered anxious, as proposed by Berth et al. [33].
· Difference between the two groups in the proportion of adverse events (AEs) and serious adverse events (SAEs) according to the medical charts until postoperative day 30.
· Assessment of the proportion of patients with amnesia on the first postoperative day.
· Assessment of the impact of premedication on the hospital length of stay (LOS) and intensive care unit (ICU) LOS. Difference in the durations between the two study groups.

Participant timeline: The time schedule of enrolment, interventions, assessments, and visits for participants is presented in Fig. 1.

Baseline visit: After study-specific patient information and written informed consent, the investigator will perform a baseline visit, which includes the assessment of the patient demographics, medical history and the most recent preoperative routine laboratory values (only if obtained in the clinical routine). Furthermore, the study-specific baseline testing (anxiety, cognitive and functional assessment, health-related quality of life assessment, pain, sleeping and well-being) and frailty assessment will be performed. The patient will receive the next consecutive randomisation number.

Visit 1 (Surgery day, preoperative): Eligible and enrolled patients will receive the assigned container including the allocated treatment (relabelled concealed capsule including midazolam or placebo) 30-45 minutes before surgery.

Visit 2 (Surgery day, intraoperative): Patient cooperation and anxiety will be evaluated at patient admission into the operating room via the VAS scale. Anaesthesia will be conducted according to the clinical routine; this also includes the type of anaesthesia. Intraoperative surgery- and anaesthesia-related data will be assessed. An additional application of benzodiazepines is not desired, but is left to the discretion of the attending anaesthetist, who will be blinded to the allocated treatment. The attending anaesthetist will measure the time until extubation or removal of the airway device, respectively, after cessation of the anaesthetic agent (inhalative or intravenous). Pain and well-being will be assessed after surgery at operating room departure via the VAS scale.

Visit 3 (Surgery day, postoperative): The patient will undergo further study-specific assessments in the post-anaesthesia care unit or ICU. Postoperative analgesia will also be assessed until visit 3.

Visit 4 (First postoperative day): A follow-up visit with study-specific assessments will be performed on the ward or ICU.

Visit 5 (30th postoperative day): A follow-up visit with study-specific assessments will be performed via telephone, or as a visit on the ward if the patient is still in hospital. The hospital LOS and ICU LOS data will be collected from the hospital database.

Sample size: The sample size was calculated based on detecting a minimum difference of 5 units in the primary outcome variable, overall patient satisfaction measured with the EVAN-G. Assumptions regarding the standard deviation of the EVAN-G in the population were based on previous work [22]. Setting a type 1 error of 0.05 and a power of 0.8, and assuming the standard deviation of the EVAN-G to be 14 units, 248 patients per group are needed to detect a 5 unit difference.
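For orientation, the stated assumptions (two-sided α = 0.05, power 0.8, SD 14 units, 5 unit difference) can be plugged into the standard normal-approximation formula for two independent groups. This is a sketch, not the trial's actual calculation; the protocol's figure of 248 per group is larger, presumably reflecting additional conservatism in the underlying variance assumptions.

```python
from math import ceil
from scipy.stats import norm

alpha, power, sd, delta = 0.05, 0.80, 14.0, 5.0
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

# Textbook two-sample formula: n per group = 2 * (sd * (z_a + z_b) / delta)^2
n_per_group = ceil(2 * (sd * (z_a + z_b) / delta) ** 2)
print(n_per_group)  # 124 under these exact inputs

# Inflating the protocol's 248 per group for 10% drop-out and 10% screening
# failure (described next in the text) recovers the total enrolment:
print(ceil(2 * 248 / (0.9 * 0.9)))  # 613, i.e. ~614 patients in total
```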
Considering a drop-out rate of 10% and a screening failure rate of 10%, we decided to include 614 patients in total (3.75 mg midazolam, n = 307; placebo, n = 307).

Blinding: This study is planned in a double-blinded manner. The investigator, the intraoperative attending anaesthetist and the patient will not be aware of the treatment allocation in all cases, as the medication will be encapsulated and provided by an independent nurse.

Data collection methods/data management: First, all patient data collected during this clinical study will be entered and/or filed in the respective patient CRF. The patient's study participation must be documented appropriately in the patient CRF with study number, subject number, date of subject information and informed consent, and date of each visit. Source data should be filed according to the Good Clinical Practice (GCP) guidelines. The sponsor's data manager will be responsible for data processing, according to the sponsor's SOPs. Database lock will occur only after quality assurance procedures have been completed. Second, the investigators will transcribe all information required by the protocol into a web-based electronic data collection system, OpenClinica (https://www.openclinica.com), as the electronic case report form (eCRF). The eCRF will be developed by the data manager for the study. Detailed information on eCRF completion will be provided during the site initiation visits, by an eCRF completion manual and by provision of an e-learning tool. Access to the e-learning tool and to the eCRF will be password controlled. Plausibility checks will be performed according to a data validation plan. Inconsistencies in the data will be queried to the investigators via the electronic data collection system; answers to queries or changes of the data will be documented directly in the system. Plausibility checks will be performed to ensure the correctness and completeness of these data. By signing the CRF (eCRF/eSignature), the investigator confirms that all investigations have been completed and conducted in compliance with the clinical study protocol, and that reliable and complete data have been entered into the eCRF.

Quality control: Standardisation procedures will be implemented to ensure accurate, consistent, complete, and reliable data, including methods to ensure standardisation among sites (e.g., training, newsletters, investigator meetings, monitoring, centralised evaluations, and validation methods). To prepare the investigators and to standardise performance, training will be held during the study initiation visit for each centre before study start. Manuals for the standardised conduct of interviews will be provided to the investigators. The PI of each centre will ensure adequate qualification of, and information about the study for, all subinvestigators and the assisting study personnel. The PI will maintain a study staff authorisation log, with the responsibilities of each person listed.

Record keeping: Essential documents, which comprise, among others, study subject files, the subject identification code list and signed informed consent forms, should be archived for at least 10 years. The PI should take measures to prevent accidental or premature destruction of these documents.

Retention: After inclusion and randomisation of the patient, the study site will make every reasonable effort to follow the patient for the entire study period. We do not expect a high loss to follow-up for the primary outcome on the first postoperative day.
To enhance participant retention for the 30-day follow-up, the investigators will schedule an appointment for the telephone call and verify the correctness of the phone number before patient discharge from hospital. Appointment reminders will be set in electronic calendars. Patients may withdraw at any time from this study, in whole or in part. Investigators will have to ask the patient whether he or she is willing to continue participation in further follow-up assessments.

Statistical methods-outcomes: The intention-to-treat analysis will also include the patients who have received additional "Rescue" i.v. midazolam preoperatively, on behalf of the attending anaesthetist, during the clinical routine. The exact prespecification of the full analysis set will be performed based on a blinded data review. According to the ICH E9 guideline, patients who received no treatment can be excluded if the decision to treat or not to treat is not influenced by knowledge of the assigned treatment. All reasonable efforts will be made to evaluate the primary endpoint in all study subjects regardless of adherence to the study protocol. A per-protocol (PP) data set will be defined for secondary analyses, composed of all randomised patients who have no major protocol deviations throughout their whole study period. Safety variables will be analysed on a data set comprising all study subjects who have received study medication.

Descriptive analysis of all study data will be performed for both treatment arms. Frequencies for categorical variables and means, standard deviations and selected quantiles for quantitative variables, as well as frequencies of missing data, will be tabulated. Distributions of variables will be graphically examined using appropriate visualization tools. The primary, confirmatory analysis will be performed on the EVAN-G global index measure using a linear mixed-effects model including treatment effect, study centre and blocks, but no interaction terms. The treatment effect will be tested against a null hypothesis of no effect using an F-test, and 95% confidence intervals for the treatment effect estimate will be calculated. Secondary analyses will be performed to explore gender-specific treatment effects, and the robustness of the results of the primary analysis will be explored by repeating the analysis on the per-protocol data set and by imputation of missing primary endpoint data. The analyses of secondary outcomes will be considered exploratory and will be performed independently for each secondary outcome, without adjustment for the multiple analyses. The outcomes functional ability, cognitive recovery, POD, use of rescue midazolam, adverse vital data, presence of long-term outcomes, AE and amnesia will be analysed as dichotomous outcome variables, and the differences in proportions between the treatment groups, along with their standard errors, will be calculated. The outcomes well-being, pain and sleeping, which are measured using VAS, will be analysed using linear mixed-effect models including treatment effects and treatment-time interactions. The outcomes patient cooperation, anxiety in the operating room, length of hospital and ICU stay, and time to extubation will be analysed as continuous outcome variables, and the means in each intervention group and the differences in means will be calculated. Data analysis will be carried out using the R language for statistical computing (https://www.R-project.org/). The detailed trial statistical analysis plan will be finalised before database lock.
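The protocol specifies R for the analysis; purely as an illustration of the primary model (treatment and centre as fixed effects, randomisation block as a random effect), an equivalent sketch in Python/statsmodels could look as follows. The file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical export: one row per patient with the EVAN-G global index,
# treatment arm, study centre and randomisation block.
df = pd.read_csv("ipromote_evang.csv")

model = smf.mixedlm(
    "evang_global ~ C(treatment) + C(centre)",  # fixed effects, no interactions
    data=df,
    groups="block",                             # random intercept per block
)
fit = model.fit(reml=True)
print(fit.summary())  # treatment effect estimate with 95% CI
```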
Statistical methods-additional analyses: Exploratory adjusted and subgroup analyses of the primary and selected secondary outcomes with regard to gender effects, frailty status, the preoperative anxiety level, the patient demographics and surgery experience will be performed in addition. These analyses will be performed independently for each outcome, without adjustment for multiple analyses. The explanatory factors will be analysed as dichotomous variables.

Data monitoring: A formal Data Monitoring Committee will not be established for this study, which is performed during the clinical routine and implies minimal risks associated with the application of placebo instead of 3.75 mg midazolam. This study will be monitored regularly by a qualified monitor from the Center for Translational & Clinical Research Aachen (CTC-A), belonging to the sponsor, according to GCP guidelines and the respective SOPs. Monitoring procedures include study initiation visits and interim monitoring visits on a regular basis, according to a mutually agreed schedule. During these visits, the monitor will check for completion of the entries on the eCRF/CRF; for compliance with the clinical study protocol, GCP principles, and regulatory authority requirements; for the integrity of the source data against the eCRF/CRF entries; and for subject eligibility. Monitoring will also aim to detect any misconduct or fraud. In addition, the monitor will check whether all AEs and SAEs have been reported appropriately within the required time periods. Further details of the monitoring activities will be described in a monitoring manual of the CTC-A.

Interim analysis and stopping guideline: Interim analyses are not planned in this study. The coordinating PI may decide, together with the sponsor's representative (CTC-A), to terminate this study entirely in case of a changed risk-benefit ratio, which would indicate a premature study termination in order to protect the subjects' health. The study will be prematurely terminated for an individual subject in case of:
• Request of the patient or withdrawal of informed consent
• Patient did not meet the inclusion and/or exclusion criteria
• Patient condition which is incompatible with premedication or any study procedure

Harms: Safety assessments will consist of monitoring and recording all AEs and SAEs and the regular monitoring of intraoperative vital data by the attending anaesthetist. All AEs will be defined according to the ICH-GCP guidelines; see Additional file 3. Midazolam carries several known side effects, which could potentially jeopardise the patient. Additional harms, other than the side effects usually present in the clinical routine, are not expected in the midazolam group in this study. All possible side effects are described in the Summary of Product Characteristics for midazolam. For the placebo group, we do not expect any significant harm, as in the case of strong preoperative anxiety or agitation, additional midazolam application may occur on behalf of the attending anaesthetist at any time.

Auditing: Audits by the sponsor are not planned for this study, but a member of the sponsor's quality assurance function may arrange a visit in order to audit the performance of the study at a study site. Auditors conduct their work independently of the clinical study and its performance. Inspections by regulatory authority representatives and Institutional Ethics Committees (IECs) are possible at any time, even after the end of the study.
The investigator has to inform the sponsor immediately about any inspection. The investigator and institution will permit study-related monitoring, audits, and reviews by the IEC and/or regulatory authorities, and will allow direct access to source data and source documents for such monitoring, audits, and reviews.

Confidentiality: All subjects will be identified by a unique 7-digit patient identification number (xxx-yyyy) and randomisation number (xxx-RAND-yyyy). The first 3 digits indicate the centre and the further 4 digits the ascending patient/randomisation number. Each principal investigator will keep, in a safe place, a list which will allow the identification of the pseudonymised patients. The patients' informed consent forms, with their printed names and signatures, will be filed separately in the investigator's site file (ISF). All source data and the ISF will be protected against unauthorised access in locked cabinets with restricted access, under the responsibility of the PI of each participating centre. Patients will be informed about data protection and that data passed to other investigators or an authorised party for analysis will be passed on in a pseudonymised manner. Data analysis by the biostatistician will be performed on pseudonymised data.

Access to data

Discussion: There is no high-quality evidence that benzodiazepine premedication is advantageous for all patients, especially the elderly ones. Also, there is no high-quality evidence that withholding preoperative midazolam from all elderly patients is beneficial. A Cochrane analysis of the effect of anxiolytic premedication on time to discharge in a day-case surgery setting found similar discharge times between patients with premedication and the placebo group, though impaired psychomotor function after benzodiazepine application was described [35]. Of note, this Cochrane analysis failed to report outcomes of the efficacy of anxiolytic premedication, and the included studies were of poor quality and very heterogeneous. Therefore, a balanced judgement about the risk and benefit of premedication was hindered. Another Cochrane review showed that there is a lack of evidence for premedication effects in elderly patients [14]. The decision to administer only 3.75 mg midazolam in this study instead of 7.5 mg is justified by the recommendation in the German summary of product characteristics for midazolam in elderly patients: "Elderly patients showed a larger sedative effect, therefore they may be at increased risk of cardiorespiratory depression as well. Thus, midazolam should be used very carefully in elderly patients, and if needed, a lower dose should be considered." Administration of a reduced midazolam dose of 3.75 mg reflects the standard routine approach for elderly patients in Germany [36]. Exclusion of patients older than 80 years was based on the clinical routine of the participating centres, which generally do not administer midazolam to patients older than 80 years. Our decision to choose global postoperative patient satisfaction as the primary outcome is based on the increased importance of the assessment of the patient-reported experience of health care as an important outcome [4,6]. We acknowledge that patient satisfaction is influenced by several factors, e.g. preoperative anxiety, amnesia, pain or surgical complications. Thus, we expect that the randomised design will enable an equal distribution of the aforementioned factors. Furthermore, we are going to record the postoperative analgesia requirement, pain and amnesia, as well as any adverse events in this study.
One strength of this study is the double-blinded design. It will provide low-biased results and support a high quality of evidence [18]. A third parallel arm without any treatment was avoided, as blinding is an important part of the study design and an arm without treatment cannot be blinded at the patient level. One limitation is that we are not going to control the general anaesthesia regime (type, quantity, application time), but we think that this will provide more generalizable results. A further limitation is that the study results are not generalizable to institutions that use benzodiazepines other than midazolam, or other drugs like α2 adrenoceptor agonists, for premedication. In conclusion, iPROMOTE will provide high-quality data for the decision-making process regarding premedication with 3.75 mg midazolam in elderly patients.

Any change in the study protocol and/or informed consent form will be presented to the competent Ethics Committee and the BfArM. They have to be approved by the Ethics Committee and the BfArM before implementation (except for changes in logistics and administration or when necessary to eliminate immediate hazards). The notification of the clinical trial according to § 67 German Medical Act to the local supervising authority was performed before trial start. The authority will also be notified about any amendment and the study end. Written informed consent will be obtained from all patients prior to study participation. The patients will voluntarily confirm their willingness to participate in the study after comprehensive written and verbal information from an investigator.

Consent for publication
Not applicable

Availability of data and materials
Only the Coordinating Centre (University Hospital RWTH Aachen) will have access to the full final trial dataset. The PIs from each participating centre will have access to their own site's data sets. It is agreed in the study site agreements that an individual study site should not disclose the individual datasets prior to the main publication. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Competing interests
AK is an associate editor of the Trials journal. The authors declare that they have no other competing interests.

Funding
This clinical trial is an investigator-initiated trial from the DGAI Scientific Committee Neuroanaesthesia. This trial is supported by the Department of Anaesthesiology, Medical Faculty RWTH Aachen University, Germany (costs for regulatory affairs, monitoring and the pharmaceutical products). This work received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
2019-07-17T04:38:57.311Z
2019-03-20T00:00:00.000
{ "year": 2019, "sha1": "885177e8732b2c0dfceb1a01785a26f1cc6abe47", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s13063-019-3512-3", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "0b982dd97e505d7b0a44b9052ab8261418b34ebe", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
231733954
pes2o/s2orc
v3-fos-license
A Brief Review of High Efficiency III-V Solar Cells for Space Application

The demands for space solar cells are continuously increasing with the rapid development of space technologies and complex space missions. Space solar cells are facing more critical challenges than before: higher conversion efficiency and better radiation resistance. Being the main power supply in spacecraft, III-V multijunction solar cells are the main focus for space application nowadays due to their high efficiency and superior radiation resistance. In a multijunction solar cell structure, the key to obtaining high crystal quality and increasing cell efficiency is satisfying the lattice matching and bandgap matching conditions. New materials and new structures for high efficiency multijunction solar cells have continuously come out in recent years, with low-cost, lightweight, flexible, and high power-to-mass ratio features. In addition to efficiency and other properties, radiation resistance is another essential criterion for space solar cells; therefore, the radiation effects of solar cells and the radiation damage mechanism have both been widely studied over the last few decades. This review briefly summarizes the research progress of III-V multijunction solar cells in recent years. Different types of cell structures, research results and the radiation effects of these solar cell structures under different irradiation conditions are presented. Two main solar cell radiation damage evaluation models, the equivalent fluence method and the displacement damage dose method, are introduced.

INTRODUCTION
Space solar cells, being the most important energy supply unit, have been employed in spacecraft and satellites for over sixty years since the first satellite was launched in 1958 [1]. They have developed from the initial single-junction, low-efficiency silicon solar cells [2] to today's high-efficiency III-V compound multi-junction solar cells [3]. The main objectives of space solar cell development are directed toward improving the conversion efficiency, reducing the mass-to-power ratio and increasing the radiation hardness [4][5][6][7]. At present, the highest conversion efficiency of solar cells is 47.1%, achieved by six-junction inverted metamorphic (6J IMM) solar cells under 143 suns [8]. High-efficiency III-V triple-junction cells are also becoming the mainstream of space solar cells. The best research-grade multi-junction space solar cell efficiency so far is 35.8% for a five-junction direct-bonded solar cell and 33.7% for the monolithically grown 6J IMM multijunction solar cell [9,10]. Despite the high fabrication cost, they offer excellent performance and reliable stability for space missions [11][12][13]. GaInP/GaAs/Ge (1.82/1.42/0.67 eV) lattice-matched triple-junction cells are well established, with efficiencies of over 30%, and have fulfilled many space applications in the past two decades. However, the current mismatch between the subcells makes it difficult to improve the conversion efficiency further [14]. New current-matched or lattice-mismatched solar cell structures and different fabrication methods have been proposed to overcome this problem, such as the metamorphic (MM) growth method [15], mechanical stacking [16], wafer bonding technology [17], etc. While improving the efficiency of space solar cells, the radiation resistance should also be considered.
In-orbit solar cells suffer irradiation damage from high-energy protons and electrons in the earth's radiation belts and from cosmic rays [18,19], and consequently the photoelectric performance of the solar cells degrades. The main reason for the degradation of solar cell performance is the radiation-induced displacement damage in the solar cell lattice, resulting in a decrease in the lifetime of the photo-generated carriers [20][21][22]. Therefore, the degradation mechanism and performance of solar cells under an irradiation environment must be explored, and it is necessary to apply radiation hardening methods before the space mission starts. The degradation of electrical performance in solar cells directly affects the lifetime of space missions. Researchers have aimed at improving the radiation resistance of solar cells by adding a protective cover of a certain thickness to shield the cell from damage by certain particles [23], using a back-surface field (BSF) [24] or distributed Bragg reflector (DBR) [25], thinning the base layer of the current-limiting subcell [26], or using the p-i-n structure and different doping methods for multi-junction solar cells [27]. Experimental observations show that annealing of a multi-junction solar cell can restore certain electrical properties after irradiation by high-energy particles [28]. In recent years, various new types of multi-junction solar cells with different combinations of materials have been developed by different research groups, and the expectations for their future development differ. Solar cell conversion efficiencies are rapidly being updated, and scientists are still striving to come up with solar cells that have high conversion efficiency and possess good radiation resistance. Although several reviews are available which cover the manufacturing, efficiency, and application prospects of photovoltaic modules [29,30], the new types of high efficiency space solar cells based on III-V compound materials have not been summarized yet. This review attempts to give a brief overview of the different types of space solar cells and to emphasize the high-energy particle irradiation effects on solar cells and recent results on the most promising types, including dilute nitride, metamorphic, mechanically stacked, and wafer-bonded multijunction solar cells.

DIFFERENT TYPES OF HIGH-EFFICIENCY SOLAR CELLS
With the improvement of manufacturing processes and material deposition technology, the solar cell industry has developed tremendously. Solar cell materials have developed from single materials (single crystal Si, single-junction GaAs, CdTe, CuInGaSe, and amorphous Si:H) to compound material systems, such as III-V multi-junction solar cells, perovskite cells, dye-sensitized cells, organic cells, inorganic cells, and quantum dot cells [31][32][33]. The structure of solar cells has also evolved from homogeneous junction cells to heterojunction, Schottky junction, compound junction, and liquid junction solar cells. In terms of usage, solar cells have also developed from flat cells to concentrator cells and flexible cells [34,35]. Silicon solar cells were the first choice for spacecraft after the first solar-powered satellite was launched in 1958. The Soviet space station MIR, launched in 1986, was equipped with 10 kW of GaAs solar cells, and the power per unit area reached 180 W/m² [36].
Then, the fabrication technique of GaAs-based cells experienced changes from liquid phase epitaxy (LPE) to metal organic vapor phase epitaxy (MOVPE), from homogeneous epitaxy to heterogeneous epitaxy, and from single-junction to multi-junction structures [37][38][39]. Notably, their efficiency was continuously improved from the initial 16% to 25%, and an industrial-scale power output of over 100 kW per year has been reached [40]. Higher efficiency reduces the size and weight of the array, increases the payload of the spacecraft and results in lower costs for the entire satellite power system. Therefore, GaAs-based solar cells are widely used in space systems and continue to be used today [41][42][43]. Compared with silicon solar cells, GaAs solar cells have the following advantages [42]: (1) higher photoelectric conversion efficiency; (3) band-gap tailoring by controlling the composition and doping of the material; (4) superior radiation resistance. However, the processes involved in GaAs solar cell fabrication are complicated, and the cost is much higher than that of silicon solar cells owing to the expensive equipment and material preparation. Therefore, GaAs solar cells cannot be widely utilized in the civil market. Nevertheless, GaAs solar cells have gradually replaced silicon solar cells in the aerospace field, where higher cell efficiency and better radiation resistance are needed. The loss in the efficiency of solar cells can be divided into two parts: the unabsorbed loss and the excessive energy loss. When a photon whose energy is smaller than the bandgap interacts with the semiconductor material, the valence band electrons are not excited, and no electron-hole pair is generated to form an electrical current. However, when the photon energy is greater than the gap width, the excess energy is lost in the form of phonons or heat [44]. Fortunately, multijunction solar cells successfully solve this problem. Semiconductor materials with different band-gaps are stacked from top to bottom in order of decreasing band-gap, and the higher-energy photons are absorbed by the top large-band-gap material. The lower-energy photons pass through the upper large-band-gap material and reach the subcell with the appropriate band-gap width to generate power. Therefore, for multi-junction solar cells, finding current-matched and lattice-matched cell materials is the critical and general focus [14,45]. The following sections present a brief introduction of different types of multijunction solar cells in terms of their performance.

Lattice Matched GaInNAs Multi-Junction Solar Cells
The dilute nitride GaInNAs material system was first proposed in the 1990s [46]. Since then, diluted nitride GaInNAs materials have been widely used in heterojunction bipolar transistors (HBTs) [47] and lasers [48], where a GaInNAs HBT base layer can reduce the turn-on voltage and allow operation at a low working voltage. These features of diluted nitride GaInNAs materials are also useful in wireless communications and power amplifier applications. GaInAsN is a direct band-gap semiconductor material whose band-gap can be changed by adjusting the nitrogen and indium content while keeping its lattice constant matched to conventional substrate materials such as GaAs and Ge. These advantages bring great potential for using a 1.0 eV subcell in a high efficiency multijunction solar cell [49]. Ga1-xInxNyAs1-y is used as a sub-cell material in the GaInP/GaAs/GaInNAs/Ge four-junction solar cell by NREL [50].
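As a side illustration, the lattice matching of this quaternary to GaAs can be estimated with Vegard's law, i.e. a composition-weighted average of the binary lattice constants. The minimal Python sketch below checks the matching condition numerically; the zincblende lattice constants are typical literature values assumed here for illustration only, not figures taken from this review.

```python
# Vegard's-law estimate of the GaInNAs lattice-matching condition to GaAs.
# Illustrative zincblende lattice constants in angstroms (assumed values).
A_GAAS, A_INAS = 5.6533, 6.0584
A_GAN, A_INN = 4.50, 4.98  # cubic (zincblende) phases, assumed

def lattice_constant(x, y):
    """Linear (Vegard) interpolation over the four binaries of Ga1-xInxNyAs1-y."""
    return ((1 - x) * (1 - y) * A_GAAS + x * (1 - y) * A_INAS
            + (1 - x) * y * A_GAN + x * y * A_INN)

def matching_y(x, tol=1e-9):
    """Bisect for the nitrogen fraction y that lattice-matches GaAs at a given x."""
    lo, hi = 0.0, 0.2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # Adding N shrinks the lattice, so a > a_GaAs means y is still too small.
        if lattice_constant(x, mid) > A_GAAS:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for x in (0.05, 0.08, 0.10):
    y = matching_y(x)
    print(f"x = {x:.2f}: matching y = {y:.4f}  (y/x = {y/x:.2f})")
```

With these assumed constants the matching ratio comes out at roughly y/x ≈ 0.35, of the same order as the y ≈ 0.3x rule of thumb discussed next.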
When y ≈ 0.3x, the lattice constant of Ga1-xInxNyAs1-y matches GaAs and Ge, making it an ideal material for constructing a GaInP/GaAs/GaInNAs/Ge (1.88/1.42/1.05/0.67 eV) four-junction solar cell with band-gap matching. Figure 1A shows a representative GaInNAs device structure grown by MOVPE. The device is grown with dimethylhydrazine (DMHy) as the nitrogen source. At the same time, experimental results showed that the remaining factor of the GaInAsN cell efficiency is 0.93 and 0.89 after 1 MeV electron irradiation at fluences of 5 × 10¹⁴ and 1 × 10¹⁵ e/cm², respectively [51]. The specific degradation of the device parameters is summarized in Table 1. The results showed that this type of cell structure possesses superior radiation resistance compared to the traditional lattice-matched multijunction solar cell. However, difficulties in the epitaxial growth of diluted nitride materials have prevented its further development. It has also been found that the diffusion length of minority carriers is short and the internal quantum efficiency is low for the GaInNAs subcell, and these poor minority carrier transport characteristics result in lower current and voltage. Since the current of a multijunction solar cell is determined by the subcell producing the smallest current, the low current in the GaInNAs subcell has a significant impact on the overall cell performance [51]. The quality of GaInNAs solar cells was improved by fabricating p-i-n structure cells [52], annealing [53], doping Sb into the GaInNAs material [54], and changing the substrate epitaxial orientation [55]. Miyashita et al. demonstrated a solar cell with a p-i-n (p-GaAs/i-GaInAsN(Sb)/n-GaAs) structure and tested the cell under the AM1.5 spectrum. The short-circuit current density (Jsc) reached 21.5 mA/cm², the open-circuit voltage (Voc) reached 0.42 V, and the fill factor (FF) reached 0.71 [56]. They further optimized the Sb content and found that an Sb content of less than 1% is more beneficial for improving the GaInAsN crystal quality and device performance [57]. Han et al. found that GaAsN material grown epitaxially on (311)B substrates could not only improve the incorporation efficiency of nitrogen, but also effectively enhance the carrier lifetime of the material [58]. Although the theoretical conversion efficiency of a GaInNAs multijunction solar cell could reach 41%, the practical growth problems still need to be overcome, and this should be a focus of future development.

Mechanically Stacked Solar Cells
According to theoretical calculation, the optimum bandgap energies for the top and bottom subcells of a tandem multijunction solar cell are 1.65-1.8 eV and 1.0-1.5 eV, respectively, and the conversion efficiency of this structure reaches 32.5% under the 1-sun AM0 spectrum [59]. Typically, III-V compound multijunction solar cells are fabricated by MOVPE or molecular beam epitaxy (MBE) techniques, where lattice matching and energy matching between subcells is a critical problem. Mechanical stacking makes this feasible and allows the use of lattice- and current-mismatched combinations of semiconductor materials. By using the mechanical stacking method, III-V compound materials can be stacked together regardless of their bandgap energies and lattice constants. This technique can reduce the production cost significantly, and the substrate removal process greatly reduces the weight of the solar cells, which is useful for space solar cell applications [60][61][62].
Still, the use of multiple substrates requires substrate removal, which makes the solar cell fabrication process complicated and affects the interface quality [63]. At the same time, the complicated design requirements become an obstacle for large-scale application of mechanically stacked solar cells in space [64]. Since each subcell in a multi-junction solar cell consists of a p-n junction, if individual subcells are directly stacked together in series, a reverse p-n junction forms between the subcells and blocks the current flow. Therefore, interconnecting individually processed solar cells while preserving both electrical and optical properties, with high conductivity and low transmission loss, is still a technical challenge to consider [65]. This problem can be solved by adding a tunnel junction between the subcells [66]. A 30% conversion efficiency of III-V//Si multi-junction solar cells using smart stack technology has been reported [67]. A schematic diagram of an InGaP/AlGaAs//Si triple-junction solar cell is shown in Figure 1B. A two-terminal GaAs/Si tandem solar cell combining "smart stack," "areal current matching," and "solar concentration," with the potential to achieve an efficiency of approximately 30%, has been proposed, and an indoor experiment proved its feasibility, although the experimental result was lower than the simulation [64]. These types of mechanical stacking processes are quite flexible and allow different cells to be combined, and consequently reduce the fabrication cost and increase the cell efficiency.

Wafer Bonded Multijunction Solar Cells
In multi-junction solar cells, the lattice dislocations induced by lattice mismatch during epitaxial growth reduce the material quality and deteriorate the device performance. The wafer bonding technique, which refers to the physical integration of two different materials, overcomes this issue very well and allows the formation of a monolithic multijunction solar cell structure without electrical or optical losses, regardless of the different lattice constants of the subcells [68]. As early as 1986, Lasky et al. demonstrated surface-treated silicon wafer bonding at room temperature and obtained very good bonding strength through high-temperature annealing [69]. The freedom of material selection in semiconductor device design is greatly improved because bonding technology can realize laminated structures of materials with different thermal expansion coefficients and lattice constants, and can confine dislocations and defects to the area near the bonding interface. Figure 2A shows the structure of a GaInP/GaAs//InGaAsP/InGaAs wafer-bonded four-junction solar cell, which consists of GaInP/GaAs and InGaAsP/InGaAs dual-junction cells, whereas Figure 2B shows the degradation of the electrical properties of the InGaAsP and InGaAs subcells under 1 MeV electron irradiation. The results show that the bonding interface has no effect on the overall cell performance and radiation resistance, that the main degradation happened in the third and fourth subcells, and that the InGaAsP subcell has superior radiation resistance to the InGaAs cell owing to its In-P bonds [70]. At present, wafer bonding technology has been studied extensively and is widely used in structural design and integration in microelectronics and optoelectronics. A variety of bonding methods have been developed; however, they can be mainly divided into two categories: direct bonding and intermediate-layer bonding [71].
The direct wafer bonding process includes cleaning and activating two polished wafers and sticking them together at room temperature. Heterogeneous integration of multi-junction solar cells has to meet three stringent conditions: good mechanical strength at the bonding interface, high optical transmittance, and low resistivity. It does not work unless every condition is met. Intermediate-layer bonding technology introduces a layer with good ductility and adhesion to alleviate stress and improve the bonding interface [72]. GaInP/GaAs//GaInAsP/GaInAs four-junction wafer-bonded concentrator solar cells with an efficiency of 46% at 508 suns have been reported [73]. This demonstrates that wafer bonding is a feasible method to combine lattice-mismatched materials.

Inverted Metamorphic and Upright Metamorphic Solar Cells
Another approach to solving the current matching issue in multijunction solar cells is the metamorphic growth method, which involves applying a compositionally graded buffer (CGB) layer between the lattice-mismatched subcells [74]. The main purpose of using a CGB layer is to distribute the strain relaxation into a thick buffer layer whose lattice constant changes gradually, instead of growing a highly lattice-mismatched layer directly onto the layer beneath it [75]. According to the growth method, the metamorphic growth technique can be divided into two types: IMM [76] and upright metamorphic (UMM) [77]. In the IMM method, for example in a GaInP/GaAs/InGaAs triple-junction solar cell, a CGB layer is inserted between the GaAs and InGaAs subcells and the solar cell epilayers are grown in the inverse direction, in order to delay the strain relaxation in the CGB layer to the later stages of the growth process. However, the IMM method requires an additional step of substrate lift-off before the solar cell device fabrication processes. On the other hand, in the UMM method, the growth direction is from the bottom subcell to the top subcell, so it requires an even higher quality CGB layer to control the strain relaxation. Figures 3A,B show the general structures of IMM and UMM cells, respectively. Compared to traditional lattice-matched (LM) solar cells, metamorphic multijunction solar cells aim to achieve current matching between their subcells and are expected to have higher conversion efficiency. Experimentally, NREL has shown that IMM solar cells have a conversion efficiency of 40.8% for triple-junction solar cells at high concentration [15] and 47.1% efficiency for 6J solar cells at 143 suns concentration [8]. Furthermore, IMM cells can reduce the cell weight by removing the substrate and increase the power-to-mass ratio. IMM cells can also be utilized to develop flexible solar cells for non-flat surfaces. These high-efficiency, flexible, and lightweight properties of IMM multi-junction solar cells make them the most promising for space applications [14,78,79]. For UMM multijunction solar cells, the conversion efficiency has exceeded 31% under the one-sun AM0 spectrum, whereas the conversion efficiency of traditional lattice-matched solar cells is limited to 30% [80]. The critical point for improving UMM solar cells is improving the quality of the CGB layer to suppress the lattice dislocations and threading dislocations induced by strain relaxation. However, the fabrication process of UMM solar cells is similar to the mature fabrication technology of LM cells, and this advantage makes the utilization of UMM multi-junction solar cells possible for potential applications [77].
The reported experimental efficiencies, theoretical limits, and advantages of the solar cells discussed above in Lattice Matched GaInNAs Multi-Junction Solar Cells, Mechanically Stacked Solar Cells, and Wafer Bonded Multijunction Solar Cells are presented in Table 2. According to the Shockley-Queisser detailed balance model, the theoretical efficiency limits of single-junction, triple-junction, and four-junction solar cells are 33.5%, 56% and 62%, respectively [85]. Along with the rapid development of new materials and high quality fabrication technologies, all these solar cells could achieve higher efficiency and better performance. Furthermore, UMM and IMM solar cell efficiencies tend to surpass LM cells even more as their manufacturing technology continues to be innovated and developed, and they are expected to become the next generation of space solar cells.

STUDIES ON RADIATION EFFECTS OF SOLAR CELLS
The solar cell arrays of a spacecraft are exposed to a harsh space environment during its mission. Therefore, radiation resistance is also a critical index for evaluating the quality of space solar cells. The main reason for the performance deterioration of space solar cells is irradiation by high-energy particles, which include electrons (energies up to 10 MeV) and protons (energies up to several hundred MeV) from the earth's radiation belts, as well as solar cosmic rays (energies up to GeV) [86]. When these high-energy particles collide with the cell materials, the transferred energy causes the lattice atoms to shift from their original positions and form displacement damage; consequently, the diffusion length of minority carriers decreases and the performance of the solar cells is degraded [87]. Therefore, only solar cells with high conversion efficiency and good radiation resistance can be employed as space solar cells.

Radiation Effects of Double-Junction GaAs/Ge Solar Cells
Messenger et al. investigated the electron and proton irradiation effects of double-junction GaAs/Ge solar cells at different energies [88]. The cell performance degraded with increasing particle fluence under both electron and proton irradiation, as shown in Figure 4. Additionally, electron irradiation with greater incident energy resulted in more severe deterioration of cell performance for the same irradiation fluence. For proton irradiation, on the other hand, lower energy protons produced a bigger reduction in cell performance when the proton energy was in the range of 0.2-9.5 MeV. Proton irradiation experiments on GaInP/GaAs/Ge and GaAs/Ge solar cells with proton energies of less than 200 keV have been reported, and it was found that low-energy particle irradiation caused defects in different subcells and different regions of the tandem multi-junction solar cell [23,89]. The radiation-induced defects in the base and the emitter layers form non-radiative recombination centers and capture photo-generated electron-hole pairs before they are collected by the junction region, eventually reducing the short circuit current. On the other hand, the radiation-induced damage in the junction region mainly causes the degradation of the open circuit voltage by introducing deep energy levels into the band gap and accelerating the recombination of valence band holes and conduction band electrons [89].
The effects of 40, 100, and 170 keV proton irradiation on GaAs/Ge cells at different fluences have been discussed. The degradation of the spectral response indicated that, in the long wavelength range (720-900 nm), the largest damage is caused by 170 keV protons and the lowest by 40 keV protons. The degradation effect on the normalized maximum power (Pmax) is the largest for the 170 keV protons and lowest for the 100 keV protons, because irradiation with 170 keV protons produces the most severe damage in the junction region of the cells. The short circuit current decreases with increasing proton energy under 40-170 keV proton irradiation, and the degradation of Voc is the largest for 170 keV protons because they produce defects with deep energy levels in the space charge region, accelerating the recombination of electrons and holes, which is also the reason for the significant decline in Pmax of the solar cell [23].

Radiation Effects of Lattice Matched GaInP/GaAs/Ge Triple-Junction Solar Cells
Sharps et al. investigated the electron and proton radiation effects of GaInP/GaAs/Ge solar cells at different energies [90]. As for the GaAs/Ge double-junction solar cell, the Pmax of the solar cells declined with increasing particle fluence at a given energy. For 1-12 MeV electron irradiation, the degradation of cell performance is more severe for higher energy electrons at the same irradiation fluence, as shown in Figure 5A. The degradation of Pmax is 9 and 13% for 1 MeV electrons at fluences of 5 × 10¹⁴ and 1 × 10¹⁵ e/cm². However, for proton irradiation, the degradation of cell performance is greater for lower energy protons in the energy range of 50-200 keV, as shown in Figure 5B. The main reason for this is that less relative damage occurs for energies below 200 keV, and lower energies have more of an effect on the emitter as compared with the base region. At the same time, the current matching point of the GaInP top cell and GaAs middle cell was studied in more detail by quantum efficiency measurements. It was found that the crossover from a GaInP current-limited subcell to a GaAs current-limited subcell occurs at 2 × 10¹⁵ e/cm² for 1 MeV electrons [90]. This result indicated that advanced triple-junction solar cells with current matching at end of life (EOL) can be achieved by reducing the amount of beginning of life (BOL) current mismatch, which is helpful for designing high efficiency radiation-hardened space solar cells. Wang et al. investigated the electron irradiation effects of GaInP/GaAs/Ge solar cells with 1.0-11.5 MeV electron beams [91]. The degradation of the normalized Pmax of the GaInP/GaAs/Ge solar cell under different electron irradiation fluences is shown in Figure 6A, whereas the changes of the external quantum efficiency (EQE) under 1.8 MeV electron irradiation at different fluences are shown in Figure 6B. The degradation of the Pmax and the EQE of LM solar cells under electron irradiation increases with increasing irradiation fluence and electron energy, as also observed in the GaAs/Ge double-junction cell. Figure 6C shows that the EQE of the GaAs middle cell degrades more than that of the GaInP top cell under the same irradiation fluence, indicating that the radiation resistance of GaInP/GaAs/Ge triple-junction cells is dominated by the GaAs middle cell. A related study compared the 1 MeV electron irradiation response of two groups of triple-junction cells, one with higher efficiency at BOL and the other with higher efficiency at EOL [92].
In this study, the electrical parameters Voc, Isc (short-circuit current) and η decayed to 89.3%, 96.3%, and 84.1% of their original values for the group of cells with higher efficiency at BOL, and to 88.6%, 98.6%, and 79.5% for the group with higher efficiency at EOL, when the electron irradiation fluence reached 1 × 10¹⁵ cm⁻². The details of the degradation of the electrical parameters of these two types of cells are summarized in Table 3.

Radiation Effects of Inverted Metamorphic Triple-Junction Solar Cells
Imaizumi et al. studied the radiation response of In0.5Ga0.5P, GaAs, In0.2Ga0.8As, and In0.3Ga0.7As single-junction solar cells, whose materials are also used as component subcells of inverted metamorphic triple-junction solar cells. The results show that the photo-generated current in the InGaAs bottom subcell of InGaP/GaAs/InGaAs IMM3J cells was severely damaged under electron and proton radiation, which can be attributed to the stronger decrease of the minority-carrier diffusion length in InGaAs compared with that in the InGaP and GaAs subcells after irradiation [93]. By comparing the irradiation resistance of the two InGaAs cells (In0.2Ga0.8As and In0.3Ga0.7As), the GaAs cell and the InGaP cell, it was found that the radiation resistance of the two InGaAs cells is approximately equivalent to that of the InGaP and GaAs cells in terms of the initial material quality. However, the InGaAs cells show lower radiation resistance, especially for Isc, compared to the InGaP and GaAs cells, due to the bigger decrease of the minority-carrier diffusion length in InGaAs materials. The InGaP and the two InGaAs cells exhibited equivalent radiation resistance of Voc, but with different degradation mechanisms. Zhang et al. investigated the 1 MeV electron radiation effects on IMM GaInP/GaAs/InGaAs solar cells by electrical property, spectral response, and photoluminescence (PL) signal amplitude analysis [94]. The results show that the electrical parameters of the IMM solar cell decrease continuously with increasing electron fluence, as for traditional LM GaInP/GaAs/Ge solar cells. As shown in Figure 7, Pmax shows the largest degradation, and Voc is degraded more than Isc. This is explained by the fact that Voc is the sum of the three series sub-cell voltages, whereas Isc is the smallest of the currents produced in the three series sub-cells. The In0.3Ga0.7As bottom subcell exhibited the most severe damage in the irradiated IMM triple-junction cells, due to a much more drastic degradation of the effective minority carrier lifetime (τeff) in the In0.3Ga0.7As subcell than in the GaAs subcell. Therefore, the radiation hardness of the IMM GaInP/GaAs/InGaAs solar cell is mainly determined by the InGaAs subcell.

RADIATION DAMAGE EVALUATION OF SPACE SOLAR CELLS
To obtain better radiation-hardened performance, it is essential to explore the radiation damage mechanism of the solar cells. The interaction of high-energy charged particles in the irradiation environment with the solar cell materials includes ionization and non-ionization (displacement damage) processes [95]. The displacement damage effect is the main reason for the degradation of solar cell performance. Explaining the formation, distribution, and evolution of displacement defects after irradiation by different high-energy particles is therefore an effective way to understand this degradation.
At present, two assessment methods, based on ground irradiation simulation experiments, are available for evaluating the radiation damage of space solar cells: the equivalent fluence method and the displacement damage dose method [88]. Both methods are used to study the radiation damage effect and to reveal the degradation mechanism of solar cells. They also provide theoretical guidance and an experimental basis for scientifically predicting the on-orbit behavior of solar cells.

The Equivalent Fluence Method
The equivalent fluence method was proposed by Tada et al. from the Jet Propulsion Laboratory, California Institute of Technology [96,97]. The key point of this approach is the relative damage coefficient, which relates the radiation damage caused by charged particles of different types and energies to that of a reference particle. In the first step, the critical fluence (ϕ) of electron or proton irradiation, corresponding to the degradation of an electrical property of the solar cell to a specified level of its original value (such as 75% of Isc0, Voc0 or Pmax0), has to be determined from the experimental results [98]. Then, the relative damage coefficient (RDC) of electrons and protons of different energies, with respect to 1 MeV electrons and 10 MeV protons, is calculated. The ratio of the critical fluence for 1 MeV electrons to the critical fluence for other electron energies is taken as a measure of the RDC of electrons, and similarly the relative damage coefficients for different proton energies are normalized to the 10 MeV proton critical fluence, as shown in the following equations [88]:

$$D_e(E) = \frac{\phi_e(1\ \mathrm{MeV})}{\phi_e(E)} \qquad (1)$$

$$D_p(E) = \frac{\phi_p(10\ \mathrm{MeV})}{\phi_p(E)} \qquad (2)$$

where ϕe and ϕp are the critical fluences for electrons and protons. The next step is to use the orbital environment parameters to calculate the corresponding relative damage coefficients for omnidirectional particles on bare cells from the values measured for normally incident particles. By substituting the electron and proton environment parameters under consideration into the following integration equations, one can obtain the equivalent 1 MeV electron fluence for the mission in question:

$$\Phi_{eq,e} = \int D_e(E)\,\phi_e(E)\,dE \qquad (3)$$

$$\Phi_{eq,p} = D_{pe}\int D_p(E)\,\phi_p(E)\,dE \qquad (4)$$

where ϕe(E) and ϕp(E) are the particle fluences of electrons and protons, respectively, at energy E, De(E) and Dp(E) are the relative damage coefficients of electrons and protons, respectively, and Dpe is the proton-to-electron damage equivalency ratio, which converts a 10 MeV proton fluence to an equivalent 1 MeV electron fluence. The results of Eqs. 3, 4 are the 1 MeV electron fluences, normally incident on the solar cells, that will cause the same damage as the selected omnidirectional spectrum. The calculated relative damage coefficients for omnidirectional proton irradiation and electron irradiation of GaAs/Ge solar cells are shown in Figures 8A,B. The equivalent fluence method requires sufficient measured data to calculate the RDCs and generate a detailed degradation curve, as shown in Figure 4, which covers eight proton energies (0.05, 0.1, 0.2, 0.3, 0.5, 1, 3, and 9.5 MeV) and four electron energies (0.6, 1, 2.4, and 12 MeV). Another, relatively simple way of predicting the degradation of solar cell performance for a given electron or proton fluence is to use a semi-empirical equation [99]:

$$\frac{P_\phi}{P_0} = 1 - C\,\log\!\left(1 + \frac{\phi}{\phi_x}\right) \qquad (5)$$

where P0 and Pϕ are the output power (which can also be replaced with Isc and Voc) of the solar cell before and after irradiation with fluence ϕ, respectively, and C and ϕx are fitting parameters obtained for the same solar cell structure from a large amount of experimental data for the specific irradiation particles.
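To make the bookkeeping above concrete, here is a minimal Python sketch of the equivalent-fluence workflow (Eqs. 1, 3 and 5). All critical fluences, spectra and fitted parameters are illustrative placeholders, not data from this review or from [88]; numpy and scipy are assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit

# Eq. 1: RDCs from critical fluences (fluence at which Pmax falls to 75%).
phi_c_e = {0.6: 4e15, 1.0: 2e15, 2.4: 9e14, 12.0: 3e14}    # e/cm^2, assumed
rdc_e = {E: phi_c_e[1.0] / phi for E, phi in phi_c_e.items()}

# Eq. 3: fold the RDCs with an assumed orbital electron spectrum,
# here a crude sum over energy bins instead of the integral.
spectrum_e = {0.6: 5e14, 1.0: 2e14, 2.4: 5e13, 12.0: 1e12}  # fluence per bin
phi_eq = sum(rdc_e[E] * spectrum_e[E] for E in spectrum_e)
print(f"equivalent 1 MeV electron fluence: {phi_eq:.2e} e/cm^2")

# Eq. 5: fit C and phi_x to ground-test degradation data, then predict
# the remaining factor at the mission's equivalent fluence.
def remaining_factor(phi, C, phi_x):
    return 1.0 - C * np.log10(1.0 + phi / phi_x)

phi_data = np.array([1e13, 1e14, 5e14, 1e15, 5e15])         # placeholder data
p_data = np.array([0.99, 0.95, 0.90, 0.86, 0.78])           # normalized Pmax
(C, phi_x), _ = curve_fit(remaining_factor, phi_data, p_data, p0=(0.05, 1e13))
print(f"C = {C:.3f}, phi_x = {phi_x:.2e}, "
      f"predicted Pmax/P0 = {remaining_factor(phi_eq, C, phi_x):.3f}")
```

Proton terms fold in the same way through Eq. 4, multiplying the proton integral by the equivalency ratio Dpe before adding it to the electron contribution.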
The Displacement Damage Dose Method
The displacement damage dose method was initiated by the Naval Research Laboratory (NRL) [100]. The key point of this method is finding the non-ionizing energy loss (NIEL) values for the different materials. By using the NIEL method, the radiation fluence of the particles is converted into a displacement damage dose (DDD), and the degradation curve of the electrical parameters of the solar cells as a function of DDD can be obtained. The NIEL values of different materials for different particles at different energies can be calculated using the MULASSIS software [101] or the following equation [102]:

$$\mathrm{NIEL}(E) = n\int_{T_d}^{Q_{max}} Q\,G(Q)\left(\frac{d\sigma}{dQ}\right)_E dQ \qquad (6)$$

where n is the atomic density of the target material, Td is the threshold energy to displace an atom, Qmax is the maximum energy that can be given to a recoil nucleus by an incident particle of a given energy E, G(Q) is the energy partition function and (dσ/dQ)E is the differential interaction cross section. Figure 9A shows the calculated electron and proton NIEL for GaAs over the energy range from zero up to 200 MeV. The DDD induced by the irradiation particles can be calculated from the particle fluence by the following equation:

$$D_d(E) = \phi(E)\,\mathrm{NIEL}(E)\left[\frac{\mathrm{NIEL}(E)}{\mathrm{NIEL}(E_{ref})}\right]^{n-1} \qquad (7)$$

where Dd(E) is the DDD, ϕ(E) is the particle fluence, NIEL is the non-ionizing energy loss value of the target material, and Eref is the reference energy (10 MeV for protons and 1 MeV for electrons). When calculating Dd for proton irradiation, the value of n is 1, and for electron irradiation n is in the range of 1 to 2. Using the NIEL of the same particle at different energies, the relative damage coefficient for different energies of a charged particle, e.g. protons, can be obtained from the following equation [99]:

$$D_{x\rightarrow 10} = \frac{\mathrm{NIEL}(x\ \mathrm{MeV})}{\mathrm{NIEL}(10\ \mathrm{MeV})} \qquad (8)$$

where Dx→10 is the x MeV proton to 10 MeV proton relative damage coefficient, and NIEL is the corresponding non-ionizing damage energy at each energy. The electron-to-proton irradiation damage equivalency factor Rep can be calculated by the following equation [99]:

$$R_{ep} = \frac{D_j}{D_i} \qquad (9)$$

where Rep is the electron-to-proton irradiation equivalent damage coefficient, Dj is the actual DDD value obtained using electron irradiation and Di is the corresponding fitted DDD value. By considering Rep, the degradation curves of the electrical parameters of solar cells against the irradiation fluence can be collapsed into a single curve against Dd. For example, Figure 9B shows the replot of Figure 4 using the NIEL method. The DDD of solar cells in a mixed electron and proton environment can be calculated by Eq. 10 and applied to evaluate the degradation curve of the electrical performance of space solar cells with the DDD:

$$D_d = \int \mathrm{NIEL}_p(E)\,\phi_p(E)\,dE + R_{ep}\int \mathrm{NIEL}_e(E)\left[\frac{\mathrm{NIEL}_e(E)}{\mathrm{NIEL}_e(1\ \mathrm{MeV})}\right]^{n-1}\phi_e(E)\,dE \qquad (10)$$

Besides, predicting the degradation of solar cell performance for a given electron or proton fluence can also be conducted by using the following semi-empirical equation:

$$\frac{P_{max}}{P_0} = 1 - C\,\log\!\left(1 + \frac{D_d}{D_x}\right) \qquad (11)$$

where Pmax and P0 are the output power (which can also be Isc or Voc) of the solar cell after and before irradiation, respectively, Dd is the DDD corresponding to the given irradiation fluence, and C and Dx are fitting parameters obtained from on-ground experimental data. Eq. 11 is more versatile than Eq. 5 for different kinds of particles because it uses Dd as the variable for a given irradiation condition. Compared to the equivalent fluence method, the NIEL approach requires fewer experimental measurements to successfully predict the radiation damage of different particles [88].
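A minimal Python sketch of this dose reduction follows. The NIEL values, fluences and fit constants are assumed placeholder numbers (real NIEL curves would come from MULASSIS or tabulated calculations), so the printed figures are illustrative only.

```python
import numpy as np

# Sketch of the displacement-damage-dose reduction (Eqs. 7 and 11).
# NIEL values in MeV cm^2/g; all numbers are assumed placeholders.
NIEL_P = {0.1: 4e-2, 1.0: 8e-3, 10.0: 3.5e-3}   # protons (assumed)
NIEL_E_1MEV = 3e-5                               # 1 MeV electrons (assumed)

def ddd_proton(fluence, energy):
    """Eq. 7 with n = 1: dose = fluence x NIEL."""
    return fluence * NIEL_P[energy]

def ddd_electron(fluence, niel, n=1.7):
    """Eq. 7 for electrons, with the NIEL-scaling exponent 1 <= n <= 2."""
    return fluence * niel * (niel / NIEL_E_1MEV) ** (n - 1.0)

# Irradiations at different proton energies collapse onto one dose axis:
for E, phi in [(0.1, 1e11), (1.0, 5e11), (10.0, 1e12)]:
    print(f"{E:>5.1f} MeV p+, phi = {phi:.0e} -> Dd = {ddd_proton(phi, E):.2e} MeV/g")
print(f"electron example: Dd = {ddd_electron(1e15, 2e-5):.2e} MeV/g")

def remaining_factor(Dd, C, Dx):
    """Eq. 11: single characteristic curve versus displacement damage dose."""
    return 1.0 - C * np.log10(1.0 + Dd / Dx)

# With C and Dx fitted once to the collapsed curve, any dose maps directly
# to a predicted remaining factor:
print("Pmax/P0 at Dd = 1e10 MeV/g:", remaining_factor(1e10, C=0.08, Dx=1e9))
```

Because every irradiation condition maps onto the same dose axis, a single (C, Dx) pair fitted once can then serve for mixed-spectrum predictions of the kind expressed by Eq. 10.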
The displacement damage dose method was successfully applied to predict on-board solar cell measurements for GaAs/Ge and CIS solar cells in a 500 × 67,300 km near-equatorial orbit, as shown in Figure 10.

The Development Trends of Space Solar Cells
There is no doubt that space solar cells should move toward higher efficiency, lower cost and better radiation resistance. In this direction, many new types of technologies are trying to solve these problems. Currently, LM triple-junction solar cells are the mainstream in space applications. In theory, a solar cell with more junctions has a higher efficiency, but it is difficult to increase the number of cell junctions in real cell fabrication. The theoretical conversion efficiency limit of an N-junction solar cell (for N approaching infinity) can reach 68.2% [3]. But cell fabrication becomes more difficult as the number of junctions increases, especially beyond 5-6 junctions. The efficiencies of multi-junction solar cells from single-junction to six-junction are presented in Table 4. Besides, apart from the high cost of III-V materials (the price of GaAs is about ten times that of Si), the growth of III-V materials requires expensive equipment; hence, the production cost of multijunction solar cells is very high and they are mostly used in space applications now. Therefore, in the future, the development trends of III-V multijunction solar cells are low cost, high efficiency, high radiation resistance and simple fabrication methods.

CONCLUSION
III-V multijunction solar cells are the primary power supply for space applications due to their very high photoelectric conversion efficiency and better radiation resistance. Despite the high fabrication cost, they are widely used in different space applications. New types of space solar cells with new materials and new structures are continuously coming out, and the performance characteristics of various space solar cells have been extensively studied. Owing to the development of the balance between bandgap matching and lattice matching, the structure of solar cells is continuously optimized, more suitable new materials are discovered, and more mature manufacturing processes are utilized. The highest conversion efficiency of solar cells is constantly being refreshed. Compared to conventional silicon solar cells, the conversion efficiency of III-V multijunction solar cells is significantly improved. To date, the highest efficiency of a crystalline silicon heterojunction solar cell has reached 25.6%, and the world record of 47.1% conversion efficiency is achieved by six-junction inverted metamorphic solar cells under 143 suns. Furthermore, lattice-matched GaInP/GaAs/Ge triple-junction solar cell fabrication technology is becoming more and more mature, with large-scale mass production while keeping the conversion efficiency over 30%. The space radiation environment is the main threat to in-orbit solar cell performance and lifetime. High-energy particles, such as electrons and protons, induce displacement damage in different regions of the solar cell structure and lead to electrical and spectral performance degradation of the solar cells. The main reason for the deterioration of space solar cells is that the radiation-induced displacement damage forms non-radiative recombination centers and reduces the minority carrier lifetime, consequently resulting in a reduction of the solar cell's electrical and spectral parameters.
FIGURE 10 | The displacement damage dose method measurements for GaAs/Ge and CIS solar cells [88].

The equivalent fluence method and the displacement damage dose method are the two main approaches for evaluating solar cell radiation effects and the degradation of cell parameters. In particular, the displacement damage dose method is a more convenient and effective tool. Although the radiation effects of solar cells are widely studied, the radiation damage mechanisms of multijunction solar cells with different materials and different structures are not yet fully explored. More experimental and theoretical studies are still needed to further investigate feasible radiation hardening methods.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are properly cited from the literature and are also available from the corresponding author upon reasonable request.

AUTHOR CONTRIBUTIONS
JL, AA, and YL designed the research, conducted the literature review and wrote this manuscript. All authors contributed to the literature review and discussion of the results and edited the manuscript.
2021-02-02T18:20:25.827Z
2021-02-02T00:00:00.000
{ "year": 2021, "sha1": "7611b5abafc9afa59fb8ac91205f56bd05462868", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphy.2020.631925/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "7611b5abafc9afa59fb8ac91205f56bd05462868", "s2fieldsofstudy": [ "Engineering", "Physics", "Materials Science" ], "extfieldsofstudy": [] }
54839033
pes2o/s2orc
v3-fos-license
Anticaries Activity of Azolla pinnata and Azolla rubra

Article Information
The present study was carried out to investigate the anticaries activity of two Azolla species, viz. A. pinnata and A. rubra. The inhibitory efficacy of methanolic extracts of both Azolla species was tested against six oral isolates of Streptococcus mutans by agar well diffusion and minimum inhibitory concentration (MIC) determination. The S. mutans isolates were shown to be susceptible to the extracts. Between the two Azolla species, A. pinnata displayed a higher inhibitory effect against the oral isolates than A. rubra, as evidenced by wider inhibition zones and lower MIC values. These Azolla species can be used to treat dental caries.

Article History: Received: 27-06-2014; Revised: 12-09-2014; Accepted: 18-09-2014

INTRODUCTION
Dental caries is one of the most important infections of the oral cavity, affecting people of all age groups, and remains a major problem worldwide. Among the cariogenic flora, mutans streptococci, in particular Streptococcus mutans, are a primary cause of dental caries. S. mutans is acidogenic and aciduric and has the ability to adhere to tooth surfaces and form biofilms. If left untreated, dental caries gradually leads to tooth loss with a variety of health problems. Hence, prevention of dental caries is preferable to treatment. Conventional methods used for the prevention and treatment of dental caries include the use of antibiotics and mouth rinses. However, these strategies have some drawbacks, such as side effects, development of resistance, high cost, etc. Hence, the search for alternatives is of much interest. Plants have been used for the prevention and control of dental caries, and a number of researchers have shown the efficacy of plants against the microflora causing dental caries (Fani et al., 2007; Ambrosio et al., 2008; Gupta et al., 2012; Chaiya et al., 2013; Junaid et al., 2013; Vivek et al., 2013; Vivek et al., 2014). Azolla (Salviniaceae) is a small aquatic pteridophyte of agronomic importance worldwide. It grows fast and produces maximal biomass in a short time. It is an example of a symbiotic interaction between the eukaryotic Azolla and the prokaryotic Anabaena. Anabaena lives as an endosymbiont in the leaf cavities of Azolla and is associated with all stages of the fern's development. Azolla supplies carbon sources to Anabaena and in return receives its nitrogen requirements. Because of its ability to fix nitrogen at high rates and low cost, Azolla is used as a biofertilizer, especially in paddy fields. Besides, Azolla is used as green manure, animal feed, human food and medicine, water purifier, hydrogen fuel, biogas producer, and weed and insect controller, and it reduces ammonia volatilization after chemical nitrogen application. Azolla improves water quality by removing excess quantities of nitrates and phosphorus (Ray et al., 1979; Pabby et al., 2003; Chris et al., 2011 and Sadeghi et al., 2013). It has been experimentally shown that Azolla species exhibit plant growth-promoting (Bindhu et al., 2013), hepatoprotective (Kumar et al., 2013), antioxidant (Nayak et al., 2014), bioremediation (Zazouli et al., 2014), and antimicrobial activities (Nayak et al., 2014). The present study was conducted to determine the anticaries activity of methanol extracts of two Azolla species, viz. A. pinnata and A. rubra.

Collection and Extraction of Plant Materials
The Azolla species, viz. A. pinnata and A. rubra, were obtained from UAS, GKVK, Bangalore. The whole plant materials were dried under shade and powdered in a blender. 10 g of powdered A. pinnata and A. rubra was added to 100 ml of methanol (HiMedia, Mumbai) in separate conical flasks and left at room temperature for two days with occasional stirring.
The solvent extracts were filtered using Whatman No. 1 filter paper and the solvent was evaporated to obtain concentrated extracts (Vivek et al., 2014).

Anticaries activity of A. pinnata and A. rubra
The efficacy of the extracts in inhibiting cariogenic bacteria was tested by the agar well diffusion method against six oral isolates of S. mutans (Sm). The bacterial isolates were inoculated into sterile Brain heart infusion broth (HiMedia, Mumbai) tubes and incubated at 37 °C overnight. The broth cultures were aseptically swabbed onto sterile Brain heart infusion agar (HiMedia, Mumbai), followed by punching wells of 6 mm diameter in the inoculated plates. 100 µl of extract (20 mg/ml in 25% dimethyl sulfoxide [DMSO; HiMedia, Mumbai]), standard (streptomycin, 1 mg/ml in sterile distilled water) and DMSO (25%, in sterile distilled water) were transferred into respectively labelled wells. The plates were incubated aerobically at 37 °C for 24 hours. The zone of inhibition formed around each well was measured using a ruler (Vivek et al., 2014).

Minimal Inhibitory Concentration (MIC)
The MIC of the Azolla extracts was determined by the dilution method. Extract dilutions (ranging from 20 to 0.0 mg/ml) were tested against each clinical isolate. Two-fold dilutions of the Azolla extracts were prepared in sterile Brain heart infusion broth tubes. Broth tubes with different concentrations of extract were inoculated with the test bacteria and incubated at 37 °C for 24 hours. The MIC was determined by observing the visible growth of the isolates after incubation; the extract dilution revealing no visible growth was considered the MIC (Kosanic and Rankovic, 2010).

RESULTS
The inhibitory effect of the extracts of A. pinnata and A. rubra against the clinical isolates of S. mutans is shown in Table 1. The S. mutans isolates were susceptible to the extracts of both Azolla species. The extract of A. pinnata was more effective in inhibiting the test bacteria (zone of inhibition ranging from 2.6 to 3.4 cm) than that of A. rubra (zone of inhibition ranging from 2.3 to 3.1 cm). The inhibition caused by the reference antibiotic was higher than that caused by the extracts of the Azolla species. DMSO did not cause inhibition of any bacteria. A similar pattern of inhibition of the oral isolates by the Azolla extracts was observed in the MIC determination. The extract of A. pinnata inhibited the oral isolates at lower concentrations than that of A. rubra. The MIC ranged from 0.312 to 1.25 mg/ml for A. pinnata and from 0.625 to 2.5 mg/ml for A. rubra (Table 2).

DISCUSSION
Dental caries can be effectively controlled by mechanical removal of dental plaque by tooth brushing and flossing. However, the majority of the human population (particularly aged people) may not follow this mechanical plaque removal sufficiently. In such cases, the use of antimicrobial mouth rinses may be preferred to limit plaque-related oral infections. However, these chemicals show undesirable side effects such as tooth staining, taste alteration and the development of hypersensitivity reactions. Antibiotics are routinely used to prevent oral infections. These antibiotics also suffer from problems such as side effects and the risk of development of resistance against antibiotics in the cariogenic flora (Aneja et al., 2010; Fani and Kohanteb, 2012; Chaiya et al., 2013). Plants are routinely used for the prevention and control of dental caries and periodontal infections.
These are safer and do not cause the side effects observed with antibiotics and other synthetic chemicals. Researchers have shown the potential of plants against cariogenic bacteria and have come out with promising results (Wolinsky et al., 1996; Prashant et al., 2007; Fani et al., 2007; Gupta et al., 2012; Chaiya et al., 2013; Junaid et al., 2013; Vivek et al., 2014; Kekuda et al., 2014). In this study, methanolic extracts of A. pinnata and A. rubra were screened for their inhibitory activity against S. mutans isolates. Both species of Azolla were effective in inhibiting the clinical isolates of S. mutans. Marked inhibitory activity was observed for A. pinnata compared to A. rubra, as indicated by wider zones of inhibition and lower MIC values. It has been shown that extracts of some Azolla species possess antimicrobial activities. In one study, Angalao et al. (2012) found antimicrobial activity of A. filiculoides against fungi; however, bacteria were not inhibited by the extract. The study of Gerard (2013) showed that the methanolic extract of A. microphylla exhibits inhibitory activity against several strains of Xanthomonas. More recently, Nayak et al. (2014) observed marked antibacterial activity of a methanolic extract of A. caroliniana against multidrug-resistant pathogenic bacteria such as S. aureus, P. mirabilis, Enterococcus sp., E. aerogenes, E. coli and P. aeruginosa.

CONCLUSION
A marked anticaries activity of A. pinnata and A. rubra was observed in this study. These Azolla species can be used to control dental caries. Further studies on the purification of active components from the Azolla extracts and the determination of their inhibitory activity against cariogenic bacteria are in progress.
2019-03-22T16:18:28.345Z
2014-11-18T00:00:00.000
{ "year": 2014, "sha1": "03a83dee9e95dbc30bac70350a4628d3f496796d", "oa_license": null, "oa_url": "https://www.ajol.info/index.php/star/article/download/109831/99572", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e6b4144d05ed16f333b6b2af092c154463837f35", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
26464614
pes2o/s2orc
v3-fos-license
Hypersensitivity and nanoparticles: update and research trends

Nanotechnology holds great promise for a wide range of medical-intent applications (diagnosis, treatment and prophylaxis of various diseases). The advantages of nanoparticles are due to their size, versatility and potential for multiple simultaneous applications. However, concerns have been raised by the scientific community due to insufficient data on the toxicity of nanomaterials. One area of interest is represented by the interactions between nanoparticles and the components of the immune system. We review herein the reported data on hypersensitivity reactions. The role exerted by nanoparticles in both immunostimulation and immunosuppression in allergen-driven mechanisms was studied, as well as future trends in worldwide research.

Introduction
Increasing evidence on possible interactions between nanoparticles and the immune system has been reported lately; however, research data are still limited. Amongst the possible immune-related effects, sensitization as a result of nanoparticle exposure represents a current experimental goal for many research groups. It has been suggested that NPs may be responsible for inducing allergic sensitization (contact dermatitis). However, NPs are unlikely to act as haptens inducing specific IgE production. Rather, it is considered that they are likely to act as adjuvants and induce a specific pattern of cytokines, antibodies and cells that favors allergic sensitization to environmental allergens [1]. Importantly, stimulation of inflammatory cytokines has been demonstrated to be a key point in nanoparticle-induced immunostimulatory reactions. Different types of nanomaterials have been reported to trigger hypersensitivity reactions.

Dendrimers
Hypersensitivity was reported following occupational exposure to final or intermediate products of dendrimer synthesis. The mechanism, which may also involve reactive oxygen species used within the synthesis process, is to be further investigated [7,8].

Magnetite iron oxide nanoparticles
IgE blood concentrations were significantly increased following single-dose intra-tracheal instillation of iron oxide nanoparticles. However, the concentration of IgE in the broncho-alveolar fluid did not reveal any change following treatment [9].

Titanium dioxide (TiO2) and gold (Au) nanoparticles (NPs)
A recent study demonstrated the ability of TiO2 and Au NPs to induce a two-fold (TiO2) and three-fold (AuNPs) increase in airway hyperreactivity following inhalation, along with alterations in bronchoalveolar lavage cells, histology and total IgE [10]. Other research groups have also found that exposure to TiO2 nanoparticles in the case of a pre-existing skin barrier dysfunction/defect can exacerbate atopic dermatitis (AD) symptoms through Th2-biased immune responses. Also, TiO2 nanoparticles were demonstrated to exert a significant role in the initiation and/or evolution of skin pathologies following barrier dysfunction/defect through histamine release, even in the absence of allergen [11].

Silver nanoparticles
In a recent experiment, repeated oral administration of AgNPs at 1 mg/kg for 14 days in mice induced significantly elevated serum TGF-β and altered B cell distribution, especially for small-sized AgNPs. The repeated-dose toxicity of AgNPs (42 nm) was also investigated in mice by oral administration for 28 days. Cytokines including IL-1, IL-6, IL-4, IL-10, IL-12, and TGF-β were also increased in a dose-dependent manner by repeated oral administration. In addition, the B cell distribution in lymphocytes and IgE production were increased.
Based on these results, it is suggested that repeated oral administration of nano-sized AgNPs may cause organ toxicity, inflammation and allergic responses in mice [7]. Polystyrene nanoparticles (PS) A complex experiment investigated the effects of PS nanoparticles of different sizes on AD-like skin lesions in NC/Nga mice, which are assumed to show skin barrier defect/dysfunction, in the presence or absence of mite allergen. Male NC/Nga mice received intradermal injections of PS nanoparticles of different sizes (25, 50, or 100 nm) and/or mite allergen into their right ears. PS nanoparticles aggravated AD-like skin lesions related to mite allergen, which was concordant with the local protein levels of interleukin-4, CCL2/monocyte chemotactic protein-1, CCL3/macrophage inflammatory protein-1 alpha, and CCL4/macrophage inflammatory protein-1 beta. Moreover, PS nanoparticles reduced interferon-γ expression. Also, treatment with PS nanoparticles stimulated ear swelling and CC-chemokine expression in the absence of allergen. As an overall trend, these effects were greater with smaller PS nanoparticles than with larger ones. These results suggest that exposure to PS nanoparticles under skin barrier defect/dysfunction can exacerbate AD-like skin lesions related to mite allergen in a size-dependent manner. Suggested mechanisms involved T helper 2-biased immune responses. Furthermore, PS nanoparticles demonstrated the ability to stimulate skin inflammation via the overexpression of CC-chemokines even in the absence of allergen in atopic subjects [11]. Reducing the hypersensitivity induced by nanoparticles Once detected, allergic reactions have become a point of interest in research. Efforts to avoid and reduce hypersensitivity have been made, and several strategies have been devised. Synthesis of gene carriers aimed at enhancing and improving allergy protection has been one of the most recent strategies. Den123, a nontoxic self-assembled dendritic spheroid nanoparticle composed of biodegradable monomers, has been designed. Research showed that higher and increasing IgG2a/IgG1 ratios were induced in mice receiving plasmids in combination with Den123. Also, increased interferon-gamma release from splenocytes was detected in the presence of both Den123 and a DNA vaccine. IgE inhibition was significant [12]. Designing and optimizing animal models for testing hypersensitivity has also been an intense line of research. Complement-mediated hypersensitivity following administration of liposomal nanoparticles has been studied in various models, including pigs [13], rats [14] and dogs [15]. Detected symptoms and laboratory abnormalities included hypo/hypertension, arrhythmias, anaphylaxis, shock or even death. Dependence of symptoms on species, dosage and lipid composition has been demonstrated by various reports and is to be taken into account in animal model selection for a particular type of nanomaterial [16]. Nanoparticles as alleviating agents against hypersensitivity reactions/mechanisms Although many types of nanostructures have demonstrated hypersensitivity-inducing properties, some structures have shown anti-allergic effects. Recent studies reported a strong increase in NF-κB p65 in nuclear protein extracts of lung tissue at 72 hours after ovalbumin (OVA) inhalation, compared with the level in controls. Administration of silver NPs efficiently decreased NF-κB p65 after OVA inhalation.
Also, the detected decrease in cytosolic NF-κB p65 was likewise attenuated by silver NP exposure [17]. Betamethasone disodium phosphate (BP) encapsulated in biocompatible, biodegradable blended nanoparticles (stealth nanosteroids) induced a significant decrease in eosinophil numbers in bronchoalveolar lavage fluid. A single-dose injection containing 40 μg BP in the form of nanosteroids induced a stable anti-allergic effect for 7 days [18]. Nanoparticle technology has also been used to design an innovative nanoparticle P-selectin antagonist with potent anti-inflammatory roles in a murine model of allergic asthma. Both in vitro and in vivo studies were conducted, and a significant reduction of allergen-induced peribronchial airway inflammation and airway hyperreactivity was reported, demonstrating the efficiency of the newly designed structure [19]. Chitosan combined with mixtures of hyaluronic acid and unfractionated or low-molecular-weight heparin was used to form nanoparticles via the ionotropic gelation technique. Ex vivo experiments testing the capacity of heparin to prevent histamine release in rat mast cells indicated that the free or encapsulated drug induced a significant response suitable for treatment of allergy-driven asthma [20]. Chitosan/IFN-γ pDNA nanoparticles (CIN) have been designed and their efficiency tested. It has been demonstrated that prophylactic administration of CIN reduces sensitization to allergens and decreases allergen-induced airway hyperresponsiveness (AHR) and inflammation, while therapeutic administration of CIN reverses established allergen-induced AHR [21]. Generally recognized as safe (GRAS)-based calcium carbonate or calcium phosphate nanoparticles that contain soft base ions have demonstrated efficiency in arresting soft acid metal ions such as nickel, being therefore useful in the treatment of nickel allergy [22]. Another research group proposed a distinct design for treating allergic inflammation in asthma. Chitosan nanoparticles were mixed with Imiquimod cream. The nanoparticles contained either a green fluorescent siRNA indicator (siGLO) or small interfering RNA against natriuretic peptide receptor A (siNPRA). After topical application of the designed cream formulation, airway eosinophilia, hyperresponsiveness, pro-inflammatory cytokines and lung histopathology were measured. Results showed that transdermally applied siNPRA chitosan nanoparticles may represent a safe and efficient treatment choice for allergic asthma in humans [23]. A recent report demonstrates that topically administered Cyclosporin A-loaded solid nanoparticles (SNPs) relieved symptoms in an in vivo murine model of atopic dermatitis. Involved mechanisms include alteration of the T helper (Th) 2 cell-related cytokines interleukin (IL)-4 and IL-5. These results suggest that the designed SNPs may represent potent therapeutic agents to be applied in allergy-related skin disorders [24]. Research has shown that a nanogel containing surface-modified nanoparticles (NPSO) improved skin permeation of ketoprofen and spantide II by transporting the nanostructures across the deeper skin layers. Also, by forming a thin layer on the skin surface (occlusive effect), the designed formulation improved skin contact time and hydration of the skin. Therefore, the synthetic formulation improved the response in allergic contact dermatitis (ACD). Moreover, no interaction was detected between spantide II and ketoprofen [25]. Reformulation of an already approved drug using nanoparticles was another strategy for diminishing or eliminating hypersensitivity.
A good example is the first-generation formulation of paclitaxel in the nonionic surfactant Cremophor EL, a severely allergenic vehicle, which was successfully reformulated as Abraxane, namely albumin-bound paclitaxel nanoparticles. The latter demonstrated hypoallergenic properties [26]. Conclusion Nanoparticles represent a promising tool for an increasing number of diagnostic, therapeutic and prophylactic applications. However, all the evidence suggests a strong immunomodulating role of nanoparticulate structures. Further individual and intensive testing is needed for all physico-chemical properties of the particles. Controlling the pro- and anti-allergic properties of nanoparticles represents one of the key elements towards their safe and efficient application.
Emergence and clonal transmission of multi-drug-resistant tuberculosis among patients in Chad Background Emergence of multidrug-resistant (MDR) strains constitutes a significant public health problem worldwide. Data on the prevalence of MDR tuberculosis in Chad are unavailable to date. Methods We collected samples from consecutive TB patients nationwide in the seven major cities of Chad between 2007 and 2012 to characterize drug resistance and the population structure of circulating Mycobacterium tuberculosis complex (MTBC) strains. We tested drug sensitivity using Line Probe Assays, and phenotypic drug susceptibility testing (DST) was used for second-line drugs. We genotyped the isolates using spoligotype analysis and MIRU-VNTR. Results A total of 311 cultures were isolated from 593 patients. The MDR prevalence was 0.9% among new patients and 3.5% among retreatment patients, and no second-line drug resistance was identified. The distribution of genotypes suggests a dissemination of MDR strains in the Southern city of Moundou, bordering Cameroon and the Central African Republic. Conclusion Emerging MDR isolates pose a public health threat to Southern Chad, with risk to neighboring countries. This study informs public health practitioners, justifying the implementation of continuous surveillance with DST for all retreatment cases as well as contacts of MDR patients, in parallel with provision of adequate second-line regimens in the region. Electronic supplementary material The online version of this article (doi:10.1186/s12879-017-2671-7) contains supplementary material, which is available to authorized users. Background Tuberculosis (TB), caused by species of the Mycobacterium tuberculosis complex (MTBC), remains a major public health problem worldwide. Untreated, TB kills about half of the patients [1,2]. According to the World Health Organization (WHO), 9.6 million people developed TB in 2014, while 6 million new TB patients were reported to WHO, suggesting that worldwide 37% of new patients went undiagnosed or were not reported, likely lacking appropriate treatment [3]. In 2014, 12,305 TB cases were reported in Chad, of whom 22% died, accounting for a major proportion of morbidity and death in the country [4]. Laboratories capable of performing culture or molecular DST for TB patients are still lacking in the country, and this contributes to the spread of TB in Chad [5,6]. The report of the National Tuberculosis Program (NTP) in 2009 stipulates a high prevalence of TB of 480/100,000, although no surveys have been conducted to date [7]. The estimated prevalence of co-infection with HIV is 12%, which has increased mortality due to tuberculosis [4]. The DOTS strategy has been implemented by the NTP, and the current TB therapeutic regimen used in Chad includes 2 months of quadruple therapy with rifampicin (R), isoniazid (H), pyrazinamide (Z) and ethambutol (E), followed by 6 months with isoniazid and ethambutol (2RHZE/6HE). For retreatment patients, treatment includes 2 months of rifampicin, isoniazid, ethambutol, pyrazinamide and streptomycin, followed by 1 month of rifampicin, isoniazid, pyrazinamide and ethambutol, and finally 5 months of rifampicin, isoniazid and ethambutol (2RHZES/1RHZE/5RHE) [8]. Globally, the emergence of strains resistant to multiple antibiotics has compromised global TB management. According to WHO, about 480,000 cases of multidrug-resistant (MDR) TB were reported worldwide in 2014, and nearly 9% of MDR-TB cases were extensively drug-resistant (XDR) [3,9].
In Chad, as no such data on drug-resistant TB were available, we collected samples from seven major cities between 2007 and 2012 to measure drug susceptibility to first- and second-line drugs, and to study the population structure of circulating strains. This study demonstrates the emergence and clonal transmission of MDR-TB strains, originating from one of two major transmission clusters of TB strains in the Southern city of Moundou, close to the borders with Cameroon and the Central African Republic. A total of 593 patients suspected of having TB were included based on clinical presentation, and two sputa were collected from each patient. Sputum was preserved in cetylpyridinium chloride (CPC) (Sigma-Aldrich) and sodium chloride (Sigma-Aldrich), and transported to the laboratory in N'djamena [10], where smear microscopy was performed using the Ziehl-Neelsen (ZN) method and positive sputa were cultured on Lowenstein-Jensen (LJ) slopes with glycerol. Biochemical methods such as the catalase test, nitrate reduction, thiophene-2-carboxylic acid hydrazide (TCH) susceptibility, and the smooth appearance of colonies were used to differentiate MTBC from mycobacteria other than tuberculosis (MOTT). DNA extraction and genotyping of mycobacterial isolates DNA was extracted using the CTAB method as previously described by Van Embden et al. [11] and adjusted to a final concentration of 10 ng/μl in Tris-EDTA (Sigma-Aldrich). To assign lineages and families to mycobacterial isolates, spoligotyping [12] was performed, and binary codes were analyzed using the TB Insight online software (http://tbinsight.cs.rpi.edu/run_tb_lineage.html). For MDR isolates, 24-locus MIRU-VNTR was performed at Genoscreen (Lille, France) to confirm potential chains of transmission. A Neighbor-Joining tree was constructed using the MIRU-VNTRplus homepage (www.miruvntr-plus.org), incorporating genotypic data as well as individual resistance patterns and mutations. Study population and M. tuberculosis strains isolated Of the 593 samples collected, a total of 326 samples were positive after culture, and 311 were available for analysis. The 311 patients included 224 (72.0%) men and 87 (27.9%) women, between 12 and 70 years of age. The majority of the samples, 236 (75.9%), were isolated from new TB patients, and the remaining 75 (24.1%) from retreatment patients (Fig. 1). Resistance to antituberculosis drugs Overall, resistance to any of the two major first-line drugs (rifampicin or isoniazid) was identified in 73 patients (23.4%). Rifampicin mono-resistance was identified in 5 (1.61%) new and 12 (3.8%) retreatment patients. Isoniazid mono-resistance was observed in 14 (4.5%) new and 28 (9%) retreatment patients. MDR was identified in 3 (0.9%) new and 11 (3.5%) retreatment patients. Rifampicin resistance was caused by the rpoB gene mutations H526Y, H526D or S531L. Regarding isoniazid resistance, 7.4% was based on the S315T1 mutation in katG only, and 6.1% carried the C-15T mutation in the inhA promoter only. We did not find any association between the two types of mutations observed (Table 1). All MDR strains were tested for second-line drug resistance, and we found no XDR strains. Population structure of resistant isolates We built a phylogenetic tree including all isolates with any resistance, based on spoligotyping, drug susceptibility and resistance-conferring mutations (Fig. 3).
When further stratifying the genotypic data by geographical origin, i.e. by city of isolation, we identified two major MDR transmission clusters, in Sarh/Doba and in Moundou, respectively. Discussion In this first survey of drug resistance in Chad, we identified 23.4% resistance to first-line drugs across all patients, and we found 0.9% and 3.5% MDR-TB strains in new and retreatment patients, respectively. Up to now, no data were available in Chad regarding MDR and extensively drug-resistant (XDR) TB, although those strains pose real public health problems. We report such results here for the first time. Several strains had the same spoligotype pattern and the same resistance mutations to rifampicin and isoniazid, suggestive of two chains of transmission of MDR strains (see Fig. 3). As spoligotype analysis alone does not have sufficient resolution to identify chains of transmission, we re-typed MDR strains using high-resolution 24-locus MIRU-VNTR typing, and added the genotypic profile of the pncA gene. In our study population, 4.5% of patients were infected with MDR-TB strains, and we found several genotypically identical MDR isolates suggestive of two ongoing transmission chains of MDR strains in the town of Moundou and between Doba and Sarh. Some of those patients were new cases (3, 0.9%). These findings suggest that MDR strains are not only present in Chad, but are also being transmitted. In consequence, TB control measures should include the rapid implementation of continuous surveillance of rifampicin resistance in retreatment patients nationwide, as recommended by WHO, with second-line resistance testing when rifampicin resistance is identified, and the availability of effective therapy for resistant TB. With respect to the population structure of the MTBC, we observed that the Cameroon family (SIT61) within Lineage 4 was the most frequent. These CAM genotypes were isolated for the first time in Cameroon by Niobe-Eyangoh et al. [17]. Our study is in line with findings by Diguimbaye et al. in 2006 [18] from the Chari-Baguirmi region of Chad, which reported that 33% of isolates belonged to the CAM family. Our study identified CAM family strains in all major cities of Chad, with the highest prevalence found in N'djamena (18.9%), Moundou (10.6%) and Bongor (6.7%), which border Cameroon. As CAM isolates have been described as highly transmissible and associated with an increased risk of developing drug resistance, it is advisable to monitor the longitudinal spread of these strains in Chad. Limitations of this study include potential selection bias, as we did not apply a formal drug resistance survey design with cluster-representative sampling. Moreover, the prevalence of resistance may be underestimated due to the use of the LPA, which may have missed some rifampicin resistance (especially rpoB mutations at positions 511, 533, and/or 572), and is only 90% sensitive for isoniazid resistance. Conclusion In conclusion, the MDR strains isolated from patients in the towns of Moundou, Sarh and Doba occur in two genotypic clusters, suggesting that most resistant TB is due to ongoing transmission. Therefore, our findings suggest that priorities for TB control in Chad should include the early diagnosis and effective treatment of MDR-TB patients, with provision for rapid second-line DST and availability of treatment options for potential future XDR-TB patients, especially in the south of the country.
Overt attention in natural scenes: objects dominate features Whether overt attention in natural scenes is guided by object content or by low-level stimulus features has become a matter of intense debate. Experimental evidence seemed to indicate that once object locations in a scene are known, salience models provide little extra explanatory power. This approach has recently been criticized for using inadequate models of early salience; and indeed, state-of-the-art salience models outperform trivial object-based models that assume a uniform distribution of fixations on objects. Here we propose to use object-based models that take a preferred viewing location (PVL) close to the centre of objects into account. In experiment 1, we demonstrate that, when including this comparably subtle modification, object-based models again are at par with state-of-the-art salience models in predicting fixations in natural scenes. One possible interpretation of these results is that objects rather than early salience dominate attentional guidance. In this view, early-salience models predict fixations through the correlation of their features with object locations. To test this hypothesis directly, in two additional experiments we reduced low-level salience in image areas of high object content. For these modified stimuli, the object-based model predicted fixations significantly better than early salience. This finding held in an object-naming task (experiment 2) and a free-viewing task (experiment 3). These results provide further evidence for object-based fixation selection, and by inference object-based attentional guidance, in natural scenes. Introduction Is attention guided by objects or by the features constituting them? For simple stimuli and covert shifts of attention, evidence for object-based attention arises mainly from the attentional costs associated with switching between objects as compared to shifting attention within an object (Egly, Driver, & Rafal, 1994; Moore, Yantis, & Vaughan, 1998). Such benefits extend to search in visual scenes with 3D objects (Enns & Rensink, 1991). For more natural situations, however, the question as to when a cluster of features constitutes an "object" does not necessarily have a unique answer (Scholl, 2001), and it may depend on the context and task. In the context of visual working memory, Rensink (2000) suggested that "proto-objects" form pre-attentively and gain their objecthood ("coherence") through attention. Extending the notion of objects to include such proto-objects, attention can be guided by "objects", even if more attentionally demanding object processing has not yet been completed. While for covert attention an object-based component of attention seems rather undisputed, for the case of overt attention in natural scenes, defined as fixation selection, two seemingly conflicting views have emerged, referred to as the "salience-view" and the "object-view". The "salience-view" states that fixated locations are selected directly based on a salience map (Itti & Koch, 2000; Itti, Koch, & Niebur, 1998; Koch & Ullman, 1985) that is computed from low-level feature contrasts. The term "salience" or "early salience" in this context is used in a restrictive sense to denote feature-based effects, and is thus not equivalent to, but contained in, "bottom-up", "stimulus-driven" or "physical" salience (Awh, Belopolsky, & Theeuwes, 2012).
Put to the extreme, the salience-view assumes that these features drive attention irrespective of objecthood (Borji, Sihite, & Itti, 2013). The salience-view appears to be supported by the good prediction performance of salience-map models (Peters et al., 2005) and the fact that features included in the model (e.g., luminance contrasts) indeed correlate with fixation probability in natural scenes (Krieger et al., 2000; Reinagel & Zador, 1999). The "object-view", in turn, states that objects are the primary driver of fixations in natural scenes (Einhäuser, Spain, & Perona, 2008; Nuthmann & Henderson, 2010). As a corollary of this view, the manipulation of an object's features should leave the pattern of preferably fixated locations unaffected, as long as the impression of objecthood is preserved. The object-view is supported by two independent lines of evidence. One of them is based on the prediction of fixated locations within a scene, whereas the second derives from distributional analyses of eye fixations within objects in a scene. With regard to the former, it is important to note that the robust correlation between fixations and low-level features, which seems to argue in favour of the salience-view, does not imply causality. Indeed, when lowering local contrast to an extent that the local change obtains an object-like quality, the reduced contrast attracts fixations rather than repelling them (Einhäuser & König, 2003), arguing against a causal role of contrast. Even though this specific result can be explained in terms of second-order features (texture contrasts, Parkhurst & Niebur, 2004), objects attract fixations, and once object locations are known, early (low-level) salience provides little additional information about fixated locations (Einhäuser, Spain, & Perona, 2008). Together with the finding that object locations correlate with high values in salience maps (Elazary & Itti, 2008; Spain & Perona, 2011), it seems that salience does not drive fixations directly, but rather that salience models predict the locations of objects, which in turn attract fixations. This support for the object-view has, however, recently been challenged. In a careful analysis of earlier data, Borji, Sihite, and Itti (2013) showed that more recent models of early salience outperform the naïve object-based model of Einhäuser, Spain, and Perona (2008). This raises the question whether a slightly more realistic object-based model is again at par with early-salience models. The second line of evidence for the "object-view" arises from the analysis of fixations relative to objects. Models of early salience typically predict that fixations target regions of high contrasts (luminance-contrasts, colour-contrasts, etc.), which occur on the edges of objects with high probability. Although the density of edges in a local surround indeed is a good low-level predictor of fixations (Mannan, Ruddock, & Wooding, 1996) and even explains away effects of contrast as such (Baddeley & Tatler, 2006; Nuthmann & Einhäuser, submitted for publication), fixations do not preferentially target object edges. Rather, fixations are biased towards the centre of objects (Foulsham & Kingstone, 2013; Nuthmann & Henderson, 2010; Pajak & Nuthmann, 2013). As a consequence of this bias, fixation prediction by edge-based early-salience models improves when maps are smoothed (Borji, Sihite, & Itti, 2013), which shifts relatively more weight from the edges towards the objects' centres (Einhäuser, 2013).
Quantitatively, the distribution of fixations within an object is well described by a 2-dimensional Gaussian distribution (Nuthmann & Henderson, 2010). The distribution has a mean close to the object centre, quantifying the so-called preferred viewing location (PVL), and a standard deviation of about a third of the respective object dimension (i.e., width or height). Since a PVL close to object centre in natural-scene viewing parallels a PVL close to word centre in reading (McConkie et al., 1998; Rayner, 1979), it seems likely that the PVL is a general consequence of eye guidance optimizing fixation locations with respect to visual processing, at least when no action on the object is required: fixating the centre of an object (or word) maximizes the fraction of the object perceived with high visual acuity. A possible source for the variability in target position, as quantified by the variance or standard deviation of the PVL's Gaussian distribution, is noise in saccade programming (McConkie et al., 1998; Nuthmann & Henderson, 2010). Taken together, the existence of a pronounced PVL for objects in scenes suggests that fixation selection, and by inference attentional guidance, is object-based. Both lines of evidence for the object-view assume that object locations are known prior to deploying attention and selecting fixation locations. This does not require objects to be recognized prior to attentional deployment. Rather, a coarse parcellation of the scene into "proto-objects" could be computed pre-attentively (Rensink, 2000). If models of early salience in fact predict the location of objects or proto-objects, they could reach performance indistinguishable from object-based models, even if attention is entirely object-based. The explanatory power of low-level feature models, like Itti, Koch, and Niebur (1998) salience, would then be explained by them incidentally modelling the location of objects or proto-objects. In turn, the existence of a PVL would be a critical test as to whether proto-objects as predicted by a model indeed constitute proto-objects that can guide attention in an object-based way. An early model that computed proto-objects in natural scenes explicitly in terms of salience (Walther & Koch, 2006) failed this test and showed no PVL for proto-objects, except for the trivial case in which proto-objects overlapped with real objects and the observed weak tendency for a central PVL for these proto-objects was driven by the real objects (Nuthmann & Henderson, 2010). In a more recent approach along these lines, Russell et al. (2014) developed a proto-object model that directly implements Gestalt principles and surpasses most existing models with respect to fixation prediction. Although a direct comparison of this model with real objects is still open, Russell et al.'s approach shows how object-based salience can act through proto-objects and can thus be computed bottom-up (and possibly pre-attentively) from scene properties. In the present study, we test the object-view against the salience-view for overt attention in natural scenes. Two predictions follow from the object-view hypothesis. (I) A model of fixation locations that has full knowledge of object locations in a scene and adequately models the distribution of fixations within objects ("PVL-model") does not leave any additional explanatory power for early salience. That is, salience-based models cannot outperform object-based models.
(II) Early-salience models that reach the level of object-based models do so because they predict object (or proto-object) locations rather than guiding attention per se. Under the object-view hypothesis, any manipulation of low-level features that affects neither the perceived objecthood nor the location of the objects in the scene will decrement the performance of the early-salience model more dramatically than that of the object-based model. Here we test these predictions directly: using the object maps from Einhäuser, Spain, and Perona (2008) and a canonical PVL distribution from Nuthmann and Henderson (2010), we predict fixated locations for the images of the Einhäuser, Spain, and Perona (2008) stimulus set (S. Shore, uncommon places; Shore, Tillman, & Schmidt-Wulffen, 2004). In a first experiment, prediction (I) is tested on an independent dataset of fixations from 24 new observers who viewed the same Shore, Tillman, and Schmidt-Wulffen (2004) images. We compare an object-based model that incorporates the within-object PVL (PVL map) to the prediction of the Adaptive Whitening Salience model (AWS; Garcia-Diaz et al., 2012a, 2012b), which is the best-performing model identified in the study by Borji, Sihite, and Itti (2013). In a second experiment, prediction (II) is tested by reducing saturation and contrast of the objects and testing how the PVL map and AWS predict fixations of 8 new observers viewing these modified stimuli. In experiment 3, we repeat experiment 2 with a free-viewing task to rule out that object-based instructions biased the results in experiment 2. Materials and methods Stimuli for all experiments were based on 72 images from the Steven Shore "Uncommon places" collection (Shore, Tillman, & Schmidt-Wulffen, 2004; Fig. 1A), which constitute a subset of the 93 images used in Einhäuser, Spain, and Perona (2008) and correspond to the subset used in an earlier study ('t Hart et al., 2013). Our object-based modelling used the annotation data from the original study by Einhäuser, Spain, and Perona (2008), while all fixation data were obtained from an independent set of 40 new observers (24 in experiment 1, 8 each in experiments 2 and 3; see Sections 2.2-2.4). Models All object-based models were computed based on the keywords provided by the 8 observers of Einhäuser, Spain, and Perona (2008) and the object outlines created in the context of this study. A list of all objects is available at http://www.staff.uni-marburg.de/~einhaeus/download/ObjectRecall.csv. From the outlines, bounding boxes were computed as the minimal rectangle that fully encompassed an object. In case an object had more than one disjoint part or more than one instantiation within the scene, separate bounding boxes were defined for each part and/or instantiation (Fig. 1B). Hereafter, both cases will be referred to as object "parts" for simplicity of notation. In total, the 72 images used for the present study contained 785 annotated objects consisting of a total of 2450 parts. Original object map (OOM) To test whether we could replicate the finding by Borji, Sihite, and Itti (2013) that AWS outperformed the trivial object map representation proposed in Einhäuser, Spain, and Perona (2008) on our new fixation dataset, we used the maps as defined there: the interior of each object named by any observer received a value of 1, the exterior a value of 0, and the resulting object maps were added per image. That is, each pixel provides a count of how many objects cover this pixel (Fig. 1C).
Here and for the following models, we ignored how many observers had named an object in the original study. This "unweighted" use of the maps had the rationale that the original data serve to provide a more or less exhaustive representation of objects in the scene, rather than providing their "representativeness" for the scene. Using "weighted" maps, however, yielded qualitatively similar results. Normalized original object map (nOOM) For comparison with the other models described in Sections 2.1.3 and 2.1.4, we normalized each object of the original object map to unit integral by dividing its contribution to the OOM by the number of pixels covered by it. This resulted in a normalized original object map (nOOM, Fig. 1D). Uniform object map (UNI) To provide a baseline for the PVL-based maps as described below (Section 2.1.4), we modelled the distribution of fixations within each bounding box as uniform. To compute a uniform object map for each image, for each object we assigned each pixel within its bounding box the value 1/A, where A denotes the area in pixels (i.e., bounding box width w times its height h). If an object o consists of P_o parts, the contribution of each part was in addition multiplied by 1/P_o. The maps obtained for each object were then added to obtain the map for the scene (Fig. 1E). By definition, the sum over all pixels of an object is 1, irrespective of the number of its parts; and each object makes the same contribution to the map, irrespective of its size or number of parts. PVL-based object maps (PVL) To model fixations within an object adequately, we started with the observation that fixations can be described by a 2-dimensional Gaussian distribution (Nuthmann & Henderson, 2010). We modelled the fixation distribution for each object as a Gaussian centred at the bounding box centre, with a vertical standard deviation of 34% of the bounding box height and a horizontal standard deviation of 29% of the bounding box width, using the numbers provided in Table 2 of Nuthmann and Henderson (2010). Following the procedure for the uniform maps, the Gaussians for each object were normalized to unit integral or, if there were P_o parts per object, the Gaussian for each part was normalized to integral 1/P_o. Maps of the objects within each image were then added (Fig. 1F). As with the uniform maps, each object makes the same contribution to the map, irrespective of its size or number of parts. Formal description of the models Formally, each object part contributes a Gaussian g(x, y) ∝ exp(−(x − x_0)²/(2σ_x²) − (y − y_0)²/(2σ_y²)), normalized to unit integral, where (x_0, y_0) is the bounding box centre. Except for the analysis of Section 3.1.5, the standard deviations followed the Nuthmann and Henderson (2010) data: σ_x = 0.29w and σ_y = 0.34h, with w and h denoting bounding box width and height for the respective object part. We then summed as above to obtain the PVL-based object map. As an early-salience model we used the model that Borji, Sihite, and Itti (2013) identified to achieve the best performance on our earlier data: adaptive whitening salience (AWS; Garcia-Diaz et al., 2012a, 2012b). We applied the Matlab implementation as provided by the authors at http://persoal.citius.usc.es/xose.vidal/research/aws/AWSmodel.html using a scaling factor of 1.0 to the unmodified version of each image in its pixel intensity representation (Fig. 1G). Except for the scaling factor, which has a default value of 0.5 to reduce computation time for large images, default parameters as set in the authors' implementation were used.
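For concreteness, the construction of the object-based maps described above can be sketched in a few lines of Python (this is our own illustrative code, not the authors' implementation; function and variable names are hypothetical, and letting each Gaussian extend over the full image rather than clipping it at the bounding box is an assumption):

```python
import numpy as np

def pvl_map(shape, objects, sx=0.29, sy=0.34):
    """Build a PVL-based object map: one 2D Gaussian per bounding box,
    centred on the box, with sd proportional to the box dimensions;
    each object is normalized to unit integral, and all objects are summed.
    `objects` is a list of objects, each a list of (x, y, w, h) part boxes."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    m = np.zeros(shape)
    for parts in objects:
        for (x, y, w, h) in parts:
            x0, y0 = x + w / 2.0, y + h / 2.0  # bounding-box centre
            g = np.exp(-(xx - x0) ** 2 / (2 * (sx * w) ** 2)
                       - (yy - y0) ** 2 / (2 * (sy * h) ** 2))
            m += g / (g.sum() * len(parts))    # integral 1/P_o per part
    return m
```

The UNI map is obtained analogously by replacing the Gaussian with a constant value of 1/A inside the bounding box.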
The effect of decreasing the scaling factor is explored in Appendix B. Combined maps To test whether adding an early-salience model to the object-based model improved fixation prediction, we combined normalized versions of the PVL and the AWS map. For each image, we computed a set of combined maps as COM_α(x, y) = α · AWS(x, y)/Σ_{x,y} AWS(x, y) + (1 − α) · PVL(x, y)/Σ_{x,y} PVL(x, y). In this equation, α parameterizes the weight given to the early-salience model, with α = 0 corresponding to the pure PVL map and α = 1 to the pure AWS map. For comparison, we also tested a multiplicative interaction between the maps, i.e., the pointwise product of the normalized maps, (AWS(x, y)/Σ_{x,y} AWS(x, y)) · (PVL(x, y)/Σ_{x,y} PVL(x, y)). To test the models described herein, we recorded a new eye-tracking dataset using 24 observers (mean age: 24.6 years; 13 female, 11 male). Images were used in 3 different conditions: in their original colour (Fig. 1A) and in two colour-modified versions. Each observer viewed each of the 72 images once, 24 stimuli in each condition (24 unmodified, 24 with clockwise colour rotation and 24 with counter-clockwise colour rotation). Each condition of each image was in turn viewed by 8 observers. For the present study, only the unmodified images were analyzed; for completeness, the details of the colour modification and the main analysis for the colour-modified stimuli are given in Appendix A. Stimuli were presented centrally at a resolution of 1024 × 768 pixels on a grey background (18 cd/m²) using a 19-inch EIZO FlexScan F77S CRT monitor running at 1152 × 864 pixel resolution and 100 Hz refresh rate, located at a distance of 73 cm from the observer. Eye position was recorded at 1000 Hz with an Eyelink-1000 (SR Research, Ottawa, Canada) infrared eye-tracking device, and for fixation detection the Eyelink's built-in software with default settings (saccade thresholds of 35 deg/s for velocity and 9500 deg/s² for acceleration) was used. Observers started a trial by fixating centrally, before the image was presented for 3 s. The initial central (0th) fixation was excluded from analysis. After each presentation, observers were asked to rate the aesthetics of the preceding image and then to provide five keywords describing the scene. Neither keywords nor ratings were analyzed for the present purposes. All participants gave written informed consent and all procedures were in accordance with the Declaration of Helsinki and approved by the local ethics committee (Ethikkommission FB04, Philipps-University Marburg). Experiment 2 - modified stimuli, original task In natural scenes, low-level salience and object presence tend to be correlated, which presents one possible explanation for the good performance of salience models. To dissociate low-level salience from object presence, in experiment 2 we used a modified version of each stimulus. Specifically, we calculated the median value of the PVL map, and all pixels in the stimulus that exceeded this median (i.e., the half of the image with the largest PVL map values) were desaturated (transformed to greyscale) and halved in luminance contrast, while keeping the mean luminance unchanged (Fig. 1H). For these stimuli, any salience model that is based on low-level features such as colour-contrasts or luminance-contrast will therefore predict fixations in the unmodified (normal saturation, high luminance contrast) area or at the boundaries between modified and unmodified regions (Fig. 1I). The PVL map remains unchanged by the experimental manipulation (all objects remain visible).
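A minimal Python sketch of this stimulus manipulation (again our own illustration, not the authors' code; using the RGB channel average as a luminance proxy and images normalized to [0, 1] are assumptions):

```python
import numpy as np

def modify_stimulus(img, pvl):
    """Desaturate and halve luminance contrast in the image half with the
    largest PVL values, preserving the mean luminance of that region.
    `img`: RGB float array in [0, 1]; `pvl`: map with the same height/width."""
    out = img.astype(float)
    mask = pvl > np.median(pvl)            # half of the pixels, by PVL value
    lum = img.mean(axis=-1)                # channel average as luminance proxy
    mean_lum = lum[mask].mean()
    # greyscale, with contrast halved around the (preserved) mean luminance
    modified = mean_lum + 0.5 * (lum - mean_lum)
    out[mask] = modified[mask, np.newaxis]
    return out
```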
Therefore, the salience model and the PVL-based object model now differ in their predictions with regard to fixation selection in scenes. Eight new observers participated in experiment 2 (mean age: 26.5 years; 4 male, 4 female). Other than using the modified stimuli, the experimental methods were identical to experiment 1. Experiment 3 - modified stimuli, free viewing Experiment 3 was identical to experiment 2, with the exception that observers were not asked to provide keywords after each stimulus; instead, the next trial started with a central fixation on a blank screen after each stimulus presentation. Observers received no specific instructions except that they were free to look wherever they liked as soon as the stimulus appeared. Eight new observers participated in experiment 3 (mean age: 25.3 years; 6 female, 2 male). Data analysis To quantify how well a given map (AWS, OOM, nOOM, UNI, PVL) predicted fixation locations irrespective of spatial biases, we used a measure from signal-detection theory (SDT). For a given image i, we pooled the fixations of all observers and measured the values of the map at these locations. This defined the "positive set" for this image. We then pooled the fixations from all other images and measured the values at these locations for the map of image i. This defined the "negative set" for image i. This negative set includes all biases that are not specific to image i, and thus presents a conservative baseline. In some analyses, we restricted the dataset (e.g., to one colour condition, or to the nth fixation). In these cases, restrictions were applied to positive and negative set alike. To quantify how well the negative set could be discriminated from the positive set, we computed the receiver operating characteristic (ROC) and used the area under the ROC curve (AUC) as a measure of prediction quality. Importantly, this measure is invariant under any strictly monotonic scaling of the maps, such that, except for combining maps, no map-wise normalization scheme needed to be employed to make the different maps comparable. AUCs were obtained independently for each of the 72 images. Since AUCs were obtained by pooling over observers, all statistical analysis in the main text (ANOVAs, t-tests) was done "by item". This by-item analysis allows for a robust computation of AUCs pooled across observers; for completeness, however, we also report "by-subject" analyses (Appendix C). In addition to parametric tests (ANOVAs, t-tests), for the main comparisons we also performed a sign test. The sign test is a non-parametric test that makes no assumptions about the distribution of AUCs across images. It tests whether the sign of an effect (i.e., whether model "A" or model "B" is better for a given image) is consistent across images, but ignores the size of the effect (i.e., by how much model "A" is better than model "B"). To analyze the effect of salience on object selection (Section 3.1.6), we used a generalized linear mixed model (GLMM). For the GLMMs we report z-values, that is, the ratio of regression coefficients to their standard errors (z = b/SE). Predictors were centred and scaled. For the GLMM analysis we used the R system for statistical computing (version 3.1; R Core Team, 2014) with the glmer programme of the lme4 package (version 1.1-7; Bates et al., 2014), with the bobyqa optimizer. Data processing and all other analyses were performed using Matlab (Mathworks, Natick, MA, USA).
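To make the evaluation procedure concrete, the following Python sketch (our own code, not the authors' Matlab; scikit-learn's roc_auc_score stands in for their ROC computation) shows the image-wise AUC measure and the additive map combination analyzed in the Results:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def image_auc(map_i, fix_pos, fix_neg):
    """AUC for one image: map values at fixations on image i (positive set)
    vs. map-i values at fixations pooled from all other images (negative set).
    fix_pos / fix_neg are integer arrays of (x, y) pixel coordinates."""
    pos = map_i[fix_pos[:, 1], fix_pos[:, 0]]
    neg = map_i[fix_neg[:, 1], fix_neg[:, 0]]
    labels = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
    return roc_auc_score(labels, np.r_[pos, neg])

def best_alpha(aws, pvl, fix_pos, fix_neg, step=0.01):
    """Sweep the additive combination COM_alpha over alpha in [0, 1]."""
    aws, pvl = aws / aws.sum(), pvl / pvl.sum()
    alphas = np.arange(0.0, 1.0 + step, step)
    aucs = [image_auc(a * aws + (1 - a) * pvl, fix_pos, fix_neg)
            for a in alphas]
    return alphas[int(np.argmax(aucs))], max(aucs)
```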
Results Experiment 1 3.1.1. Object maps that consider the PVL are at par with the best early-salience model Using all data from the fixation dataset, we test whether the prediction of fixated locations depended on the map used (AWS, OOM, nOOM, UNI, PVL). We find a significant effect of map type (F(4, 284) = 37.0, p < 0.001, rm-ANOVA) on prediction. Post-hoc tests show that there are significant differences between all pairs of maps (all ts(71) > 3.7, all ps < 0.001) except between AWS and PVL (t(71) = 1.29, p = 0.20) and between AWS and UNI (t(71) = 1.54, p = 0.13). This confirms that AWS significantly outperforms naïve object-based maps. However, once the within-object PVL is taken into account, object-based maps are at par with state-of-the-art early-salience maps; numerically, they even show a slightly better performance, though this is not statistically significant for the by-item analysis (Fig. 2A). PVL is a necessary factor for the prediction of fixations Already the UNI maps achieve better performance than the OOMs and reach performance indistinguishable from AWS. This raises the question whether the bulk of the benefit compared to the naïve object model arises from using bounding boxes rather than object outlines, or from the normalization by object area. In other words, is there any true benefit of modelling the PVL within an object in detail? To address this question, we compare UNI maps to PVL maps image by image. We find that in 59/72 images, the maps that take the PVL into account outperform the UNI maps, with the reverse being true in only 13/72 cases (Fig. 2B). This fraction of images is significant (p < 0.001, sign-test). Similarly, the PVL map outperforms the nOOM map (57/72, p = 0.003) and the OOM map (63/72, p < 0.001) for the vast majority of images. This shows that the benefit of considering the within-object PVL is robust across the vast majority of images. Early salience provides little extra explanatory power, once object locations are known Provided the locations of objects are known, how much extra information does early salience add with regard to fixated locations? To address this question, we combine the PVL and AWS maps additively. We screen all possible relative weights (α) of AWS relative to PVL in steps of 1%. When enforcing the same α for all images, as would be required for a model generalizing to unknown images, we find that even at the best combination (α = 52%), AWS adds only 2.2 percentage points to the PVL performance alone (69.4% as compared to 67.2%; Fig. 3). Even when allowing the weight to adapt for each image separately, the maximum AUC reaches 71.2%, such that the maximum possible gain by adding AWS to PVL is less than 4%. A multiplicative interaction (i.e., PVL "gating" AWS) is in the same range as the additive models (69.1% AUC). Object salience and early salience are similar from the first free fixation on In order to test whether the relative contributions of objects and early salience vary during prolonged viewing, we measured the fixation prediction of AWS and PVL separated by fixation number (Fig. 4). Even using a liberal criterion (uncorrected paired t-tests at each fixation number), we do not find any difference between PVL and AWS for any fixation number (all ps > 0.17; for fixations 0-9: all ts(71) < 1.38; for fixation 10, only 69 images contributed data: t(68) = 0.82). Neither do we find any clear main effect of fixation number (excluding fixation 0 and fixation 10) on the AUC for either PVL (F(8, 568) = 1.73, p = 0.09) or AWS (F(8, 568) = 1.69, p = 0.10).
Thus, there is little evidence that fixation prediction by either early salience or objects changes over the course of a trial, and there is no evidence of any difference between PVL and AWS at any point in the trial. Optimal PVL parameters generalize across datasets For the analysis so far, we used the parameters of Nuthmann and Henderson (2010) to model the phenomenon of a PVL within objects. These were obtained on an entirely different stimulus set, with observers performing distinct tasks (memory, search and preference judgements) and with a different setup. This raises the question whether the average PVL generalizes across datasets. To test this, we modelled the PVL by 2-dimensional Gaussians with horizontal standard deviations ranging from 0.10 to 0.60 of bounding box width and vertical standard deviations with the same fraction of bounding box height, varied in 0.01 steps (i.e., we set σ_x = bw and σ_y = bh, with w and h denoting bounding box width and height, and varied b). We find that prediction indeed reaches a maximum around b = 0.31 (Fig. 5A), in line with the values found in Nuthmann and Henderson (2010) and used throughout this paper. Interestingly, even the optimum value (67.15% AUC at a standard deviation of 33% of bounding box dimensions) is very close to, but slightly below, the 67.19% found for the anisotropic Nuthmann & Henderson values. To test whether the result improves further for anisotropic (relative to the bounding box) PVL distributions, we varied σ_x and σ_y independently (σ_x = b_x · w and σ_y = b_y · h) in 0.01 steps. The resulting optimum (Fig. 5B, circle) indicates an optimal PVL that is slightly anisotropic relative to the object's bounding box. This optimal AUC value is only 0.01 percentage points larger than the result for the original parameters (0.29, 0.34). Since these values were obtained on a different dataset and experimental setup, it is tempting to speculate that the fraction of about 1/3 of bounding box dimensions, possibly with a slight anisotropy, for the PVL might reflect a universal constant for object viewing. Prioritisation of objects by low-level salience Is the prioritisation as to which object is selected, once the objects are given, biased by early salience? First, we test whether the probability that an object is fixated at all is related to its salience. To avoid obvious confounds with object area, we quantify an object's low-level salience by the maximum value of the normalized AWS map within the object surface ("peak AWS"). To keep all measures well defined, we restrict the analysis to those 366 objects that consist of only one part. In addition to peak AWS, we consider two object properties which could potentially confound peak AWS effects: object size (the number of pixels constituting the object) and object eccentricity (the distance of the object's centre of mass from the image's centre). For each observer, we allocate a "1" to a fixated object and a "0" to a non-fixated object, yielding a binary matrix with 366 × 8 entries (number of objects × number of observers who viewed the respective image in unmodified colour). We use a GLMM to determine the impact of the object properties on the thus defined fixation probability (cf. Nuthmann & Einhäuser, submitted for publication). The model includes the three object properties as fixed effects. With regard to the random effects structure, the model includes random intercepts for subjects and items as well as by-item random slopes for all three fixed effects.
We find a significant effect of peak AWS (z = 4.55, p < 0.001) above and beyond the effects of object size (z = 5.09, p < 0.001) and eccentricity (z = −4.70, p < 0.001). This indicates that, among all objects, the objects with higher low-level salience are preferentially selected. The analysis so far asks whether an object is fixated at all in the course of a 3 s presentation. For an infinite presentation duration, it seems likely that all objects would eventually be fixated; in turn, the salience of an object may be especially predictive for fixations early in the trial. To quantify this, we modify the analysis such that, rather than assigning a 1 to an object that is fixated at any time in the trial, we assign a 1 only to objects that are fixated at or before a given fixation number n. Computing the same GLMM for this definition and for each n, we find a significant effect of peak AWS for each n (all z > 3.3, all p < 0.001). Z-values tend to increase with fixation number (i.e., with increasing n; Table 1). The effects of object size and eccentricity are also significant for all n, with the effect of eccentricity declining over fixation number (Table 1). These results offer a role for early salience that complements object-based fixation selection: attention is guided to objects, but among all the objects in a scene, those with higher early salience may be preferentially selected. Experiments 2 and 3 - modified natural stimuli The PVL map performs as well as AWS, but it does not outperform the salience-based model. While this result already invalidates a key argument put forward against object-based fixation selection, namely that AWS was better than the naïve object-based approach (Borji, Sihite, & Itti, 2013), experiment 1 alone does not show a superiority of object-based models. In experiments 2 and 3, we dissociate the effect of objects from the effect of the features that constitute objects by manipulating low-level features at object locations. Modification de-correlates AWS and PVL maps The object-view states that early-salience models predict fixation selection through correlations of their features with object locations. To test this, we measure the correlation between AWS and PVL map values. To obtain sufficiently independent samples, we sample values from both maps on a central 11 × 8 grid of pixels 100 pixels apart (i.e., at (13, 35), (13, 135), and so on); for the original images, the two maps are positively correlated. The aim of the experimental manipulation in experiments 2 and 3 is to disentangle AWS from PVL predictions by reducing this correlation. We therefore generated new stimuli by halving contrast and removing saturation from the half of the image in which PVL was highest (Fig. 1H). This manipulation was effective in that the correlation between PVL and AWS for the modified stimuli is now negative (r(6334) = −0.06, p < 0.001) and, when individual images are considered, smaller than for the original image in 71/72 cases. On modified images, PVL outperforms AWS Using the fixation data of experiment 2 and computing AWS on the modified stimuli, the PVL map now significantly outperforms AWS with respect to fixation prediction (AUC: 63.0 ± 1.3% vs. 56.6 ± 1.1% (mean ± s.e.m.); Fig. 6A; t(71) = 3.41, p = 0.001). On the level of individual images, the prediction of PVL is better for 51/72 images, a significant fraction (p < 0.001, sign-test).
In the free-viewing task of experiment 3, the prediction by the PVL map remains virtually unchanged (AUC: 63.1 ± 1.3%) and is significantly better than the AWS performance (AUC: 58.1 ± 1.2%), both on average (Fig. 6B; t(71) = 2.43, p = 0.02) and for individual images (46/72, p = 0.02, sign-test). Both experiments show that when the predictions of an object-based model and a salience-based model are dissociated by experimentally manipulating the correlation between early salience and objecthood, object-based models outperform early salience. The result of experiment 3, in which observers had no specific instruction, furthermore rules out that the precedence of object-based fixation selection over low-level salience is a mere consequence of an object-related task. Dependence on fixation number As for experiment 1, we analyzed the time course of the PVL and AWS predictions. In experiment 2 (Fig. 7A), with the exception of the initial (0th) fixation, prediction is above chance for all fixations and both maps (all ps < 0.007, all ts > 2.8). Excluding the initial (0th) fixation and including all fixation numbers for which data from all images are available (1st through 9th), we find no effect of fixation number on AUC, neither for AWS (F(8, 568) = 0.53, p = 0.83) nor for PVL (F(8, 568) = 1.37, p = 0.31). In experiment 3, fixation durations were longer than in the other two experiments (270.6 ± 2.0 ms vs. 244.6 ± 1.8 ms and 243.3 ± 1.5 ms, excluding the initial fixation), such that from the 9th fixation on, data for some images are missing, and we only analyze fixations 1 through 8 further. For those fixations, AUCs are significantly different from chance for both maps (all ps < 0.001, all ts > 4.0). Again, we find no main effect of fixation number for AWS (F(7, 497) = 0.45, p = 0.87). However, we find a main effect of fixation number for the PVL map (F(7, 497) = 4.79, p < 0.001). Surprisingly, however, the prediction is best for the early fixations (Fig. 7B). The PVL model performs significantly better than AWS only for the 2nd and 3rd fixation (t(71) = 3.30, p = 0.002 in both cases), while for the other fixations performance is indistinguishable from AWS (ts < 1.9; ps > 0.06). Hence, especially early fixations, though not the first one, are guided by objects rather than by low-level salience if no object-related task is to be performed. At no time point during viewing is a fixation guided primarily by low-level salience. AWS as object model The object-view explains the performance of early-salience models by the correlation of their features with objects in natural scenes. Hence, if an experimental manipulation dissociates objects from their natural low-level features, like in our experiments 2 and 3, the prediction performance of early-salience models should drop. Notably, we can derive an additional prediction from the object-view hypothesis: if the early-salience model is computed on the original (i.e., unmodified) stimulus, it predicts object locations. These object locations remain unaffected by the experimental manipulation. Consequently, the early-salience model computed on the unmodified image should still predict fixations on the modified image. We tested this hypothesis and found that AWS applied to the original image indeed predicts fixations on the modified image in experiment 2 better than AWS applied to the modified image itself (AUC: 66.0 ± 1.2%; t(71) = 7.41, p < 0.001). The same holds for experiment 3 (AUC: 66.8 ± 1.2%; t(71) = 6.15, p < 0.001).
This shows that the AWS model incidentally captures attention-guiding properties of natural scenes that still predict fixations when their correlation to the low-level features captured by low-level salience is removed.

Table 1. GLMM results: z-values for the fixed effects peak AWS, object size and object eccentricity on the probability of fixating labelled objects in scenes. Each column, labelled with fixation number n, reports data for a model that considers objects that are fixated at or before a given fixation number n.

Discussion

In this study we show that an object-based model that adequately models fixation distributions within objects (i.e., the preferred viewing location, PVL) performs on par with the best available model of early salience (AWS). The prediction by the object-based model is robust to small variations of the PVL's standard deviation and not substantially improved by any combination with the AWS model. Notably, when low-level features are manipulated while keeping objecthood intact, the object-based model outperforms the early-salience model. Together, these findings provide further support for the object-view of fixation selection: objects guide fixations, and the prediction by early salience is mediated through its correlation with object locations.

If attention is indeed object-based, the question arises up to which level of detail object processing has to be performed prior to fixation selection and how such information can be extracted from the visual stimulus. The degree of object-knowledge required prior to attentional deployment has frequently been associated with "proto-objects" (Rensink, 2000). In the context of salience maps, Walther and Koch (2006) define proto-objects by extending the peaks of the Itti, Koch, and Niebur (1998) saliency map into locally similar regions. In this conception, proto-objects are a function of saliency (Russell et al., 2014). In principle, proto-objects can guide attention in two ways. First, proto-objects can be a proxy for real objects that is computed from stimulus properties. In this case, proto-objects, just like low-level salience, predict fixations through their correlation with object locations. Alternatively, proto-objects could constitute a "higher-level" feature that is causal in driving attention. Yu, Samaras, and Zelinsky (2014) provide indirect evidence for the latter view by showing that proto-objects are a proxy for clutter, and clutter is a possible higher-level feature for attention guidance (Nuthmann & Einhäuser, submitted for publication). In the present study, we show, however, that PVL-based maps outperform other object maps. For proto-objects that do not exhibit a PVL it therefore seems unlikely that they predict fixations better than real objects. An analysis testing proto-objects as defined by Walther and Koch (2006) showed that there was little evidence for a PVL for human fixations within these proto-objects (Nuthmann & Henderson, 2010). Importantly, there was no evidence for a PVL when only saliency proto-objects that did not spatially overlap with annotated real objects were analyzed. Therefore, proto-objects of that sort are not a suitable candidate for the unit of fixation selection in real-world scenes. In addition, AWS generates some notion of objecthood and can be used to extract proto-objects from a scene (Garcia-Diaz, 2011), presumably since the whitening aids figure-ground segmentation (see Russell et al., 2014, for a detailed discussion of this issue).
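For concreteness, the PVL-based object map discussed throughout can be sketched as a sum of per-object Gaussians, each centred at an object's preferred viewing location (assumed here to be the object centre), with a width scaling with the object's extent and each normalized to unit integral so that large background objects do not dominate. The bounding-box geometry, the proportionality constant and the axis-aligned Gaussian are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def pvl_map(shape, objects, width_frac=0.25):
    """Object map with a preferred-viewing-location (PVL) Gaussian per object.

    objects: list of (cx, cy, w, h) bounding boxes in pixels.
    Each object's Gaussian is normalized to unit integral before summing.
    """
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    out = np.zeros(shape)
    for cx, cy, w, h in objects:
        sx, sy = width_frac * w, width_frac * h      # width scales with object size
        g = np.exp(-((xx - cx) ** 2 / (2 * sx ** 2) +
                     (yy - cy) ** 2 / (2 * sy ** 2)))
        out += g / g.sum()                           # unit integral per object
    return out

# Example: a small foreground object and a large background object;
# the background contributes only weakly wherever the foreground peaks.
m = pvl_map((600, 800), [(400, 300, 80, 60), (400, 450, 700, 200)])
```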
Again, as shown by experiments 2 and 3, the features of AWS are dominated by object-based selection (PVL-based object maps), indicating that the implicit "proto-objects" of AWS do not match real objects with respect to fixation prediction. It is conceivable that the phenomenon of a PVL indeed constitutes an important property that distinguishes proto-objects from real objects, at least with respect to fixation selection. Consequently, whether proto-objects, whose computation is stimulus-driven but not based exclusively on low-level features (Russell et al., 2014; Yu, Samaras, & Zelinsky, 2014), exhibit a PVL is an interesting question for future research.

Attention is likely to act in parallel with object processing rather than being a mere "pre-processing" step. There is a high structural similarity between salience-map models and hierarchical models of object recognition. Already the archetypes of such models, Koch and Ullman's (1985) salience map and Fukushima's (1980) Neocognitron, shared the notion of cascading linear filters and non-linear processing stages in analogy to simple and complex cells of primary visual cortex (Hubel & Wiesel, 1962). The computational implementation of the salience map (Itti, Koch, & Niebur, 1998) and the extension of the Neocognitron idea into a multistage hierarchical model (HMAX; Riesenhuber & Poggio, 1999) allowed both models to extend their realm to complex, natural scenes. Given the similarity between the salience map and HMAX, it is not surprising that more recent descendants of salience-map models, such as Itti and Baldi's (2005) "surprise", model human object recognition to a similar extent as HMAX itself (Serre, Oliva, & Poggio, 2007), and that in turn HMAX is a decisive ingredient in a state-of-the-art model of attentional guidance in categorical search tasks (Zelinsky et al., 2013). This modelling perspective, together with its roots in cortical physiology, argues that attentional selection and object recognition are not separated, sequential processes; rather, object processing and attention are tightly interwoven.

A challenge for both model testing and experimental research is that an object is not necessarily a static entity, but rather a perceptual and hierarchical construct that can change depending on the task and mindset of the observer. In the present study, we took a pragmatic approach, using all the keywords provided by at least one of the 8 observers in our original study (Einhäuser, Spain, & Perona, 2008). These ranged from large background objects ("sky", "grass", "road", "table") over mid-level objects ("car", "house", "woman", "cantaloupe") to objects that are part of other objects ("roof", "window", "purse", "door"). Treating all of these objects equally, as done here, makes several simplifying assumptions. First, it assumes that the parameters of the PVL are independent of object size. Second, it puts more weight on objects that consist of multiple parts, provided that parts and object are named. Third, objects that are disjoint by occlusion are treated as separate objects. Fourth, it does not respect any hierarchy of parts, objects or scene. With regard to object size, Pajak and Nuthmann (2013) reported wider distributions of within-object fixation locations (i.e., larger variance) for smaller objects.
Since, especially for very small and very large objects, the details of the presentation and measurement conditions may also have an effect on the exact distribution, we refrained from modelling this size dependence explicitly here. Since the PVL results are rather robust against the exact choice of the width of the Gaussian distribution, it is unlikely that the effects would be substantial, and, if anything, they should improve fixation prediction by the PVL maps further. Putting more weight on objects with multiple named parts seems reasonable, at least as long as no clear hierarchy between parts and objects is established and both are likely to follow similar geometric rules to gain objecthood. By normalizing each object to unit integral, very large background objects in any case have a comparably small contribution, except in regions where no other (foreground) objects are present. In an extreme case, where the background object spans virtually the whole scene, the PVL for the background resolves to a model of the central fixation bias (Tatler, 2007), which in this view corresponds to a PVL at scene level. Indeed, the central bias is fit well by an anisotropic Gaussian for a variety of datasets (Clarke & Tatler, 2014). Note, however, that the present analysis is unaffected by generic biases through its choice of baseline. Since the disjoining of objects by occlusions is rare in the present data set, this is more a technical issue than a conceptual one. Whether, from the perspective of fixation selection, occlusions are processed prior to attentional deployment (e.g., by means of estimating coarse scene layout prior to any object processing, cf. Hoiem, Efros, & Hebert, 2007; Schyns & Oliva, 2004) remains, however, an interesting question for further research in databases with substantial occurrences of such occlusions. Finally, the issue concerning the relation between parts and objects has frequently been addressed in parallel in computational and human vision. Dating back to the work of Biederman (1987), human object recognition is thought to respect a hierarchy of parts. On the computational side, mid-level features seem ideal for object recognition (Ullman, Vidal-Naquet, & Sali, 2002), and many algorithms model objects as constellations (Weber, Welling, & Perona, 2000) or compositions (Ommer & Buhmann, 2010) of generic parts. The interplay between objects and parts is paralleled on the superordinate levels of scene and object: humans can estimate scene layout extremely rapidly and prior to object content (Schyns & Oliva, 2004), and scene layout estimation aids subsequent computational object recognition (Hoiem, Efros, & Hebert, 2007). For human vision, this provides support for a "reverse hierarchy" (Hochstein & Ahissar, 2002) of coarse-to-fine processing after an initial quick feed-forward sweep (Bar, 2009). Transferring these results to the question of attentional guidance and fixation selection in natural scenes might provide grounds for some reconciliation between a pure "salience-view" and a pure "object-view". It is well conceivable that several scales and several categorical levels (scene, object, proto-objects, parts, features) contribute to attentional guidance. Indeed, recent evidence shows that the intended level of processing (superordinate, subordinate) biases fixation strategies (Malcolm, Nuthmann, & Schyns, 2014).
The appropriate hierarchical level might then be dynamically adapted and, for sufficiently realistic scenarios, be controlled by task demands and behavioural goals. The present data show, however, that for a default condition of comparatively natural viewing conditions, object-based attention supersedes early salience.

[Figure caption (scaling-factor analysis, cf. Fig. B.2): (A) AWS maps of the example stimulus (Fig. 1A) at different scaling factors of the AWS model (given above each panel); scaling factor 1 (no scaling) is depicted in Fig. 1G. (B) AWS map of the example modified stimulus (Fig. 1H); scaling factors as in panel (A); scaling factor 1 is depicted in Fig. 1I. (C) AUC for scaling factors 1/64, 1/32, …, 1 for the three experiments. The rightmost data point in each panel (factor 1) corresponds to the AWS data of the main text (Fig. 2A, Fig. 6A and Fig. 6B for experiments 1, 2 and 3, respectively).]

Appendix A. Colour conditions in experiment 1

Experiment 1 used stimuli in 3 different colour conditions: in their original colour (Fig. 1A) and in two colour-modified versions (Fig. A.1A and B). For the colour modification, images were transformed to DKL colour space (Derrington, Krauskopf, & Lennie, 1984) and each pixel was "rotated" by 90° (either clockwise or counter-clockwise) around the luminance (L + M) axis. This manipulation (Frey, König, & Einhäuser, 2007) changes the hue of each pixel, but keeps saturation (or rather chroma) and luminance unchanged. Effectively, the manipulation swaps the S − (L + M) axis (roughly: "blue-yellow") with the L − M axis (roughly: "red-green") and therefore keeps the sum over these two "colour contrasts", as used in most models of early salience, intact. That is, while the global appearance of the stimuli changes dramatically, for most commonly used salience models (including AWS) the effect of the modification is by definition negligible or absent. As the modification affected neither the AWS maps nor the PVL maps, we only used data from the unmodified stimuli for the present study. For completeness, we repeated the main analysis for the modified stimuli. As expected, the results are qualitatively very similar (Fig. A.1C) to the original colour data (Fig. 2A). This indicates that modifications to hue, at least if saturation and luminance are preserved, have little effect on the selection of fixated locations.

Appendix B. Other models and AWS parameter

For the main analysis, AWS was chosen as reference, since Borji, Sihite, and Itti (2013) had identified it as the best performing low-level salience model on the Einhäuser, Spain, and Perona (2008) data. Our data, especially the result that the AWS model applied to the original image predicts fixations better than the model applied to the actual modified stimulus (Section 3.2.4), however, cast doubt on the characterization of AWS as an "early" salience model. We therefore tested a series of other models (Fig. B.1A and B): graph-based visual saliency (GBVS; Harel, Koch, & Perona, 2007), saliency maps following the Itti, Koch, and Niebur (1998) model in the latest (as of September 2014) implementation available at http://ilab.usc.edu ("ITTI"), the "SUN" model (Zhang et al., 2008) and the "AIM" model (Bruce & Tsotsos, 2009). Since optimizing the model parameters for our dataset is not within the scope of the present paper, we used the default parameter settings as suggested by the respective authors throughout.
With the exception of the AWS model for experiment 1 (t(71) = 1.29, p = 0.20) and the AIM model for experiment 3 (t(71) = 1.03, p = 0.31), all models perform significantly worse than the PVL map (Fig. B.1C; all other ts > 4.0, ps < 0.001). However, even in experiment 3 and similar to AWS (Fig. 7), the AIM model still performs significantly worse than PVL for the 2nd and 3rd fixation (Fig. B.1D, right; t(71) = 2.30, p = 0.02 and t(71) = 2.06, p = 0.04, respectively). This indicates that our results are not specific to the AWS model, and further supports the view that early in the trial fixation selection is object-based even in the free-viewing task of experiment 3.

The implementation of the AWS model has one parameter, the factor by which the input image is scaled (Fig. B.2). For the unmodified images of experiment 1 (Fig. B.2A), a reduction by a factor of 0.5 does not change the prediction (t(71) = 0.14, p = 0.89); if anything, it yields a tiny improvement. With further reduction of the scaling factor, prediction performance monotonically decreases, but remains above chance (ts(71) > 6.2, p < 0.001) for all tested scales down to 1/32 (Fig. B.2C). At a factor of 1/64, prediction is indistinguishable from chance (t(71) = 0.05, p = 0.96). In experiments 2 and 3, a similar picture emerges: prediction performance decreases monotonically with decreasing scaling factor and becomes indistinguishable from chance at factors of 1/16 (exp. 2) or 1/32 (exp. 3; Fig. B.2C).

Appendix C. Alternative analyses by subject

For the main analysis, we first pool fixations within an image across all observers and then perform a "by-item" analysis, with the images being the items. Pooling over observers allows us to obtain a robust estimate of AUCs for each image. This is especially critical for those analyses that separate data by fixation number, as without pooling over observers the "positive set" for the AUC would contain only a single data point. For the main analysis, which aggregates over fixations, we could alternatively compute the AUC individually for each observer, then average over images and finally perform the statistical analysis over these means across observers. For completeness, we tested this "by-subject" analysis for the comparison between AWS and PVL for all three experiments as well as for all the object models for experiment 1 (Fig. C.1). The pattern of data looks similar to the main analysis (Figs. 2 and 7). For experiment 1, there is a significant effect of object model (F(4, 92) = 19.4, p < 0.001, rmANOVA). Unlike in the main analysis, all pairwise comparisons, including the one between AWS and PVL, show significant differences (all t(23) > 3.3, all ps < 0.003): PVL performs better than any other model, followed by AWS, UNI, nOOM and OOM (Fig. C.1). Similarly, the difference between PVL and AWS for experiments 2 and 3 is significant (exp. 2: t(7) = 8.96, p < 0.001; exp. 3: t(7) = 4.00, p = 0.0[…]), with PVL outperforming AWS. This analysis not only supports our conclusions of object-based salience outperforming AWS, but also shows this effect already for experiment 1.
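As a footnote to Appendix A, the 90° hue rotation can be sketched as a rotation of the two chromatic coordinates of an opponent colour space while the luminance coordinate is left untouched. The snippet below illustrates this on an image already expressed in a DKL-like representation (luminance plus two chromatic channels); the display-dependent RGB-to-DKL conversion is omitted, so the array layout and channel order are assumptions for illustration only.

```python
import numpy as np

def rotate_hue_90(dkl_img, clockwise=True):
    """Rotate chromatic coordinates by 90 degrees around the luminance axis.

    dkl_img: H x W x 3 array with channels (luminance, L-M, S-(L+M)).
    Luminance and chroma (vector length of the two chromatic channels)
    are preserved; only hue changes, effectively swapping the two axes.
    """
    out = dkl_img.copy()
    lm, s = dkl_img[..., 1], dkl_img[..., 2]
    theta = -np.pi / 2 if clockwise else np.pi / 2
    out[..., 1] = np.cos(theta) * lm - np.sin(theta) * s
    out[..., 2] = np.sin(theta) * lm + np.cos(theta) * s
    return out

# Chroma before and after is identical, so the summed colour contrast
# used by most early-salience models is kept intact.
img = np.random.rand(4, 4, 3)
rot = rotate_hue_90(img)
assert np.allclose(np.hypot(img[..., 1], img[..., 2]),
                   np.hypot(rot[..., 1], rot[..., 2]))
```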
Double-edged effect of sodium citrate in Nile tilapia (Oreochromis niloticus): Promoting lipid and protein deposition vs. causing hyperglycemia and insulin resistance

Citrate is an essential substrate for energy metabolism that plays critical roles in regulating glucose and lipid metabolic homeostasis. However, the action of citrate in regulating nutrient metabolism in fish remains poorly understood. Here, we investigated the effects of dietary sodium citrate on growth performance and systemic energy metabolism in juvenile Nile tilapia (Oreochromis niloticus). A total of 270 Nile tilapia (2.81 ± 0.01 g) were randomly divided into three groups (3 replicates per group, 30 fish per replicate) and fed a control diet (35% protein and 6% lipid), or the control diet supplemented with 2% or 4% sodium citrate, respectively, for 8 weeks. The results showed that sodium citrate had no effect on growth performance (P > 0.05). The whole-body crude protein, serum triglyceride and hepatic glycogen contents were significantly increased in the 4% sodium citrate group (P < 0.05), but not in the 2% sodium citrate group (P > 0.05). The 4% sodium citrate treatment significantly increased the serum glucose and insulin levels at the end of the feeding trial and also in the glucose tolerance test (P < 0.05). The 4% sodium citrate significantly enhanced the hepatic phosphofructokinase activity and inhibited the expression of pyruvate dehydrogenase kinase isozyme 2 and phospho-pyruvate dehydrogenase E1 component subunit alpha proteins (P < 0.05). Additionally, the 4% sodium citrate significantly increased hepatic triglyceride and acetyl-CoA levels, while the expression of carnitine palmitoyl transferase 1a protein was significantly down-regulated by the 4% sodium citrate (P < 0.05). Besides, the 4% sodium citrate induced crude protein deposition in muscle by activating mTOR signaling and inhibiting AMPK signaling (P < 0.05). Furthermore, the 4% sodium citrate significantly suppressed serum aspartate aminotransferase and alanine aminotransferase activities, along with lowered expression of pro-inflammatory genes such as nfκb, tnfα and il8 (P < 0.05). Although the 4% sodium citrate significantly increased phospho-nuclear factor-κB p65 protein expression (P < 0.05), no significant tissue damage or inflammation occurred. Taken together, dietary supplementation of sodium citrate could exhibit a double-edged effect in Nile tilapia, with the positive aspect of promoting nutrient deposition and the negative aspect of causing hyperglycemia and insulin resistance.

Introduction

Citrate is an important metabolic intermediate of the tricarboxylic acid (TCA) cycle in mitochondria, with two main sources: (1) in mitochondria, citrate can be produced by citrate synthase (CS), which catalyzes the condensation of oxaloacetate and acetyl coenzyme A (AcCoA); (2) it is carried from the circulatory system into the cytoplasm via a sodium-dependent transport carrier (named SLC13A5) on the plasma membrane. Citrate is metabolized by two routes: (1) to generate α-ketoglutarate for the TCA cycle in the mitochondria by isocitrate dehydrogenase (IDH); or (2) to produce AcCoA in the cytoplasm by ATP-citrate lyase (ACLY). The latter step is the main source of cytoplasmic AcCoA for de novo lipogenesis (DNL), cholesterol synthesis and protein acetylation modifications (Infantino et al., 2019; Mosaoa et al., 2021).
In addition to its well-known role as a metabolic intermediate, studies in mammals have also revealed that citrate in the cytoplasm is involved in several biological processes, such as glycolysis (Marinho-Carvalho et al., 2009; Sola-Penna et al., 2010), gluconeogenesis (Wang and Dong, 2019), inflammation, cancer, insulin secretion and histone acetylation (Iacobazzi and Infantino, 2014), suggesting that citrate can be involved in metabolic pathways in various manners. In mammals, Branco et al. (2021) found that adding 40 mg/kg citrate to the normal diet resulted in glucose intolerance and insulin resistance in mice, along with liver lipid deposition and inflammatory damage. However, the addition of 1% citrate to the diet of mice alleviated a variety of metabolic disorders caused by a high-fat diet, such as glucose intolerance, abnormal hepatic lipid deposition and inflammation (Fan et al., 2021). Similarly, the inclusion of citrate in a high-fat diet activated AMP-activated protein kinase (AMPK) signaling and inhibited mechanistic target of rapamycin (mTOR) signaling, resulting in a low-energy metabolic state that increased lifespan and egg production in Drosophila (Fan et al., 2021). Other studies in Drosophila have shown that mutating the cytosolic citrate transporter (Indy/SLC13A5) improved metabolic capacity and extended lifespan (Schwarz et al., 2015; Wang et al., 2009). These results suggest that the supplementation of exogenous citrate is important for healthy metabolism in organisms, which is of increasing interest in mammals. Yet, few studies have been conducted on the effects of citric acid/sodium citrate in fish. Studies have shown that dietary citric acid supplementation significantly improved growth performance and intestinal digestibility, and enhanced the absorption and deposition of metal ions and phosphorus in fish (Hosseini, 2012a, 2012b; Li et al., 2009; Sarker et al., 2007). However, the results of studies on different fish species are contradictory. A study on juvenile yellowfin seabream (Acanthopagrus latus) found that the addition of 0.5% sodium citrate improved growth performance and promoted protein and lipid deposition (Sotoudeh et al., 2020). A similar growth-promoting effect appeared in goldfish (Carassius auratus) with a dietary dose of 1% to 2% sodium citrate (Azari et al., 2021). Nevertheless, a study on hybrid tilapia (Oreochromis sp.) found that the supplementation of 2% to 4% sodium citrate reduced growth performance and nutrient deposition, along with liver and intestine damage (Romano et al., 2016), suggesting that sodium citrate could be toxic in some fish species like tilapia. It is noteworthy that only a few studies have been designed to examine the effect of dietary citrate supplementation on the metabolic processing of nutrients in fish, which largely hinders the understanding of sodium citrate function and its possible application in aquaculture. Therefore, more mechanistic investigations of the metabolic regulation by sodium citrate of nutrient utilization, and especially of its potential toxic effects in fish, are necessary. In the present study, we chose Nile tilapia, a globally farmed fish, as a fish model to investigate the precise effects of dietary sodium citrate on systemic nutrient metabolism. In order to avoid possible disturbance from dietary ingredients, we used gelatin and casein to prepare purified diets.
During an 8-week feeding trial, juvenile Nile tilapia were fed a control diet (Control; protein approximately 35%, lipid approximately 6%) or the control diet supplemented with 2% or 4% sodium citrate, respectively. After the trial, the growth performance, biochemical indicators and gene/protein expressions involved in energy metabolism were systematically analyzed. Our study could provide more in-depth information for understanding the roles of citrate in the regulation of nutrient metabolism in fish, and is helpful for the precise use of citrate in aquafeeds as a potential feed additive.

Animal ethics

All the experimental procedures were performed under the Guidance of the Care and Use of Laboratory Animals in China and approved by the Committee on the Ethics of Animal Experiments of East China Normal University (Approval ID: F20200101). All animal experiments complied with the ARRIVE guidelines.

Fish, diets, and sampling

Juvenile Nile tilapia were purchased from Huihai Aquaculture Company (Guangzhou, Guangdong, China). These fish were acclimated in 50 L tanks for 2 weeks before the start of the experiment and fed a commercial diet (≥35% total protein and ≥5% total lipids; Tongwei Co., Chengdu, China). Three isonitrogenous and isolipidic experimental diets (35.56% total protein and 6% total lipid) containing 0% (control group), 2% (2% sodium citrate group) and 4% sodium citrate (Sinopharm Chemical Reagent Co., Ltd, Shanghai, China) (4% sodium citrate group) were used, respectively. These sodium citrate doses were chosen according to the literature, in which 2% to 4% sodium citrate could change growth or metabolic processes in goldfish or tilapia (Azari et al., 2021; Romano et al., 2016). Details of the experimental diets are shown in Table 1. Gelatin and casein were used as protein sources, and soybean oil and corn starch were used as the lipid source and carbohydrate source, respectively. After acclimation, 270 Nile tilapia of similar mean weight (2.81 ± 0.01 g) were selected and randomly assigned to 9 tanks (0.3 m × 0.3 m × 0.3 m, 30 fish per tank, 3 tanks per diet). During the feeding period, the fish were hand-fed twice daily (08:30 and 17:00) at a feeding rate of 4% body weight. The amount of feed intake was recorded to determine the feed conversion ratio (n = 3). All fish were weighed every two weeks and the daily feeding mass was adjusted accordingly. During the feeding period, water temperature was 27 to 29 °C, dissolved oxygen was 5.6 to 6.6 mg/L, pH was 7.4 to 7.9, and ammonia nitrogen was less than 0.02 mg/L. After the 8-week feeding trial, all fish were fasted overnight, then counted and weighed to determine the weight gain rate (3 tanks per treatment, n = 3). Then, six fish per group were anesthetized with MS-222 (20 mg/L) and sacrificed to collect tissue samples. Blood samples (pooled sample per tank, n = 3) were drawn from the tail vein of the fish using a 1-mL sterile syringe (Klmedical, China), and the serum was separated after centrifugation at 1,200 × g at 4 °C for 15 min. The obtained serum samples were stored at −80 °C until analysis. The individual fish weight, body length, carcass, viscera, mesenteric fat and liver weights were measured to calculate the condition factor, carcass ratio, viscerosomatic index, hepatosomatic index and mesenteric fat index (n = 3 for all indexes). Afterwards, liver and muscle samples were collected, immediately frozen in liquid nitrogen and stored at −80 °C for further analysis (pooled sample per tank, n = 3).
Six fish per group were further collected for whole-body composition analysis (pooled sample per tank, n = 3).

Glucose tolerance test

Twenty-four Nile tilapia from each group were starved overnight, anesthetized with MS-222 (20 mg/L), and then injected intraperitoneally with D-glucose (500 mg/kg BW, 20% in 0.85% NaCl) (Sigma, USA). Blood was collected at 0, 0.5, 1 and 3 h after injection (six fish per time point); the serum was separated immediately and frozen until further analysis.

Biochemical parameters analyses

Experimental feeds and whole-body samples were dried at 105 °C to constant weight and then used for the determination of crude protein and crude lipid (Horwitz, 2010). Crude protein contents of feed, whole-body and muscle samples were analyzed using a semi-automatic Kjeldahl system (FOSS, Sweden). Crude lipid contents of feed and whole-body samples were determined by the chloroform/methanol (2:1, vol/vol) method (Folch et al., 1957).

Hepatic AcCoA concentration measurements

Determination of hepatic AcCoA concentration was conducted according to an existing method (Li et al., 2020). To minimize the possibility of baseline drift, an isocratic mobile phase at constant room temperature (25 °C) was used in this study. The mobile phase consisted of 100 mmol/L monosodium phosphate and 75 mmol/L sodium acetate. The pH was adjusted to 4.6 using concentrated phosphoric acid. Acetonitrile (ACN) was added to the prepared mobile phase at a ratio of 6 (ACN) to 94 (mobile phase) (vol/vol). The freshly prepared mobile phase was filtered and degassed before high-performance liquid chromatography (HPLC) analysis. The wavelength for UV detection was set to 259 nm. The column was kept at room temperature with a flow rate of 0.5 mL/min and an injection volume of 30 μL. Under these conditions, acetyl-CoA eluted after 9.0 min.

2.6. Total RNA extraction, cDNA synthesis, and quantitative real-time PCR (qRT-PCR) analysis

Total RNA was extracted from the liver of Nile tilapia using Trizol reagent (Megen, China), and 1% denaturing agarose gel electrophoresis was used to verify the integrity of the total RNA. The quality and quantity of total RNA were assessed by a NanoDrop 2000 spectrophotometer (Thermo, United States). RNA with a 260/280 ratio of 1.9 to 2.0 was used to synthesize cDNA using the HiScript III RT SuperMix for qPCR reagent kit (R323-01, Vazyme, China) according to the manufacturer's instructions. Details of the primer sequences used in the present study are provided in Table 2. The qRT-PCR reaction was performed in a 20-μL mixture containing 8 μL diluted cDNA (with nuclease-free water), 10 μL 2× ChamQ Universal SYBR qPCR Master Mix (Q711-03, Vazyme, China) and 2 μL of each gene-specific forward and reverse primer. The reaction conditions were as follows: 95 °C, 30 s for pre-denaturation; 40 cycles of 95 °C, 10 s followed by 60 °C, 30 s for the amplification reaction; and then 95 °C, 15 s, 60 °C, 60 s, 95 °C, 15 s for the melt curve. Melting curves of the amplified products were generated at the end of each qRT-PCR run to ensure the specificity of the assays. The qRT-PCR efficiency was between 98% and 102% and the correlation coefficient was over 0.97 for each gene. Elongation factor 1 alpha (ef1a) and β-actin were used as the reference genes. Three replicate qRT-PCR analyses were performed per group. The relative expression levels of the target genes were calculated following the 2^−ΔΔCT method (Livak and Schmittgen, 2001).
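The 2^−ΔΔCT method cited above normalizes the target gene to a reference gene within each sample and then to the control group. A minimal sketch of that calculation, assuming mean Ct values are already extracted from the qPCR runs (the numbers here are made up for illustration):

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change by the 2^-ddCt method (Livak & Schmittgen, 2001)."""
    d_ct_sample = ct_target - ct_ref              # normalize to reference gene (e.g., ef1a)
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # same normalization for the control group
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values: target gene in treated vs. control fish
print(relative_expression(ct_target=22.1, ct_ref=18.0,
                          ct_target_ctrl=25.0, ct_ref_ctrl=18.2))  # ~6.5-fold up
```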
Western blotting

The liver and muscle homogenates after lysis (RIPA lysis buffer, Beyotime Biotechnology, China) were used for Western blot (WB) analysis. Briefly, 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis was used to separate the proteins (P2012, New Cell & Molecular Biotech Co., Ltd, Jiangsu, China), and the proteins were then transferred to a nitrocellulose membrane. Membranes were incubated for 20 min in a quick blocking buffer (Beyotime Biotechnology, China). Then, membranes were washed briefly in a mixture of tris-buffered saline and polysorbate 20, and incubated overnight at 4 °C with primary antibodies (Table 3). Afterward, the blots were washed to remove excessive primary antibody binding and finally incubated for 1 h with a horseradish peroxidase-conjugated secondary antibody. The WB images were obtained using an Odyssey CLx Imager (Licor, USA).

Calculations and statistical analysis

Weight gain ratio, condition factor, feed conversion ratio, carcass ratio, viscerosomatic index, hepatosomatic index and survival rate were calculated using the following formulas:

Weight gain ratio (%) = 100 × (final mean body weight − initial mean body weight)/initial mean body weight

Condition factor (g/cm³) = final body weight/body length³

[Table 2. Primer nucleotide sequences of the genes used for qRT-PCR (columns: Gene, Primer sequences (5′-3′), Accession no.).]

Before conducting the analysis of sample differences, we checked the normal distribution using the D'Agostino & Pearson normality test; the criterion for passing the normality test was a P-value > 0.05. The samples passed the normality check, and one-way ANOVA and independent t-tests were then conducted. Tukey's multiple comparisons were performed in the case of a significant overall difference between the experimental groups and the control (P < 0.05). Independent-samples t tests were performed to evaluate the significance of differences in measured variables between treatments. Values are means ± SEM (one culture tank as one treatment, n = 3 for all indexes); difference from the control group: *P < 0.05, **P < 0.01, ***P < 0.001 (independent-samples t test). All data were analyzed using GraphPad Prism 8.0 software (GraphPad Software Inc., San Diego, CA, USA).

Dietary sodium citrate had no effects on growth performance, but induced crude protein and hepatic glycogen deposition

At the end of the 8-week feeding trial, there were no significant differences in final body weight, weight gain ratio, feed intake, survival rate, condition factor, viscerosomatic index and carcass ratio of Nile tilapia among treatments (Table 4). Dietary supplementation of sodium citrate decreased the hepatosomatic index (HSI), and the 2% sodium citrate group had a significantly lower HSI value than the control group (Table 4). Additionally, the 4% sodium citrate group had a significantly increased mesenteric fat index, while there was no significant difference between the 2% sodium citrate group and the control group (Table 4). Compared to the control group, the supplementation of 4% sodium citrate also significantly increased the whole-body crude protein content, serum triglyceride and liver glycogen content of tilapia (Table 4). To further investigate the mechanism of the altered metabolic homeostasis by sodium citrate, only the 4% sodium citrate group and the control group were selected for the following assays.
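The body indices reported in Table 4 follow the formulas listed in the Calculations section above. A short sketch implementing them is given below; the definitions beyond weight gain ratio and condition factor use standard aquaculture formulas (percent ratios of tissue weight or feed to body weight), which are assumptions here rather than quotations from the paper.

```python
def growth_indices(w0, w1, length_cm, feed_g, liver_g, viscera_g, n0, n1):
    """Common growth-performance indices for a feeding trial.

    Formulas other than weight gain ratio and condition factor follow
    standard definitions and are assumptions, not quoted from the paper.
    """
    return {
        "weight_gain_ratio_pct": 100 * (w1 - w0) / w0,
        "condition_factor": w1 / length_cm ** 3,        # g/cm^3
        "feed_conversion_ratio": feed_g / (w1 - w0),    # feed intake / weight gain
        "hepatosomatic_index_pct": 100 * liver_g / w1,
        "viscerosomatic_index_pct": 100 * viscera_g / w1,
        "survival_rate_pct": 100 * n1 / n0,
    }

# Illustrative per-fish values (grams and centimetres)
print(growth_indices(w0=2.81, w1=25.0, length_cm=10.5, feed_g=30.0,
                     liver_g=0.6, viscera_g=2.4, n0=30, n1=29))
```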
Dietary sodium citrate activated glycolysis and gluconeogenesis while causing insulin resistance

The 4% sodium citrate-treated fish showed remarkably higher levels of serum glucose (Fig. 1A); however, the serum insulin level was also significantly elevated by 4% sodium citrate treatment compared with the control group (Fig. 1B). Besides, the hepatic pyruvate contents of the 4% sodium citrate-treated fish were significantly higher than those of the control (Fig. 1C), while 4% sodium citrate treatment reduced the hepatic lactate content (Fig. 1D). Moreover, 4% sodium citrate supplementation markedly enhanced hepatic PFK activity compared to the control group (Fig. 1E). The glucose tolerance test showed that serum glucose in the 4% sodium citrate group was higher than that in the control group at all time points after glucose injection (Fig. 1F). The glucose tolerance test also showed that serum insulin in the 4% sodium citrate group was significantly higher than that in the control group at all time points after glucose injection (Fig. 1G). The gene and protein expression of glucose metabolism showed significant differences between the control and 4% sodium citrate groups (Fig. 1H, I and J). The expression of the genes involved in glucose transport, including glucose transporter 2 (glut2) and 4 (glut4), was not changed between groups (Fig. 1I). Moreover, the expressions of the glycolysis-related genes, including glucokinase (gk) (+7.5 times), pyruvate dehydrogenase E1 component subunit alpha (pdhe1a1) (+1.6 times), pyruvate dehydrogenase kinase 2 (pdk2) (+6 times) and pdk4 (+1.8 times), were remarkably increased in the 4% sodium citrate group (Fig. 1H). Meanwhile, the 4% sodium citrate group had lower protein expression levels of p-Pdhe1a and Pdk2, which are key regulators that suppress the conversion of pyruvate to AcCoA, than the control group (Fig. 1J). In addition, compared with the control group, the 4% sodium citrate group had higher expressions of glycogen synthase (gs) (+6 times) and glucose 6-phosphatase (g6pase) (+2.5 times), which are glycogen synthesis- and gluconeogenesis-related genes, respectively, whereas the expressions of fructose 1,6-bisphosphatase (fbpase) and phosphoenolpyruvate carboxykinase (pepck) were not changed (Fig. 1H). The protein expression level of G6pc was also significantly increased in the 4% sodium citrate group (Fig. 1J). Furthermore, 4% sodium citrate treatment significantly increased the expressions of TCA cycle-related genes, including citrate synthase (cs) (+1.5 times) and isocitrate dehydrogenase (idh) (+1.5 times), while citrate carrier subunit b (cicb) was not affected (Fig. 1I). These results suggest that the ingestion of 4% sodium citrate led to systemic insulin resistance in Nile tilapia, although both hepatic glycolysis and gluconeogenesis were enhanced.

Dietary sodium citrate triggered lipid accumulation through inhibition of lipid catabolism

Sodium citrate treatment significantly altered the pattern of hepatic lipid metabolism of tilapia (Fig. 2). The concentrations of triglyceride in the liver and cholesterol in serum were significantly increased in the 4% sodium citrate-treated fish (Fig. 2A and B), while there was no significant difference in hepatic cholesterol contents between groups (Fig. 2C). The hepatic AcCoA level, a substrate for DNL, in the 4% sodium citrate group was significantly higher than that of the control (Fig. 2D).
Interestingly, the expressions of the genes involved in lipogenesis, including sterol regulatory element-binding protein 1 (srebp1) (−0.5 times), peroxisome proliferator-activated receptor gamma (pparg), fatty acid synthase (fasn) (−5 times) and acly (−5 times), and in fatty acid β-oxidation, including carnitine palmitoyltransferase Ia (cpt1a) (−0.5 times), were notably decreased in the liver of the 4% sodium citrate group (Fig. 2E and F), whereas the expressions of ppara and the genes involved in lipolysis, including adipose triglyceride lipase (atgl) and hormone-sensitive lipase (hsl), were not changed between treatments (Fig. 2F). Correspondingly, the 4% sodium citrate group had significantly lower protein expression levels of Acly and Cpt1a, while the total-acetyl-CoA carboxylase/phospho-acetyl-CoA carboxylase (t-Acc/p-Acc) and mSrebp1 levels were not affected by 4% sodium citrate treatment (Fig. 2G). These results suggest that the 4% dietary sodium citrate probably induced hepatic lipid accumulation by the inhibition of lipid oxidative catabolism.

Dietary sodium citrate improved protein deposition in muscle

Sodium citrate treatment also significantly affected the protein metabolism of Nile tilapia (Fig. 3). Supplementation of 4% sodium citrate prominently increased the crude protein content in the muscle compared with the control group (Fig. 3A). The expression of a gene related to muscle proteolytic metabolism in Nile tilapia, glutamate dehydrogenase 1 (gdh1), was down-regulated after sodium citrate ingestion, while glutaminase (gls2) and branched-chain alpha-keto acid dehydrogenase (bckdha) expressions were not affected (Fig. 3B). The gdh1 gene expression, which promotes glutamate catabolism into the TCA cycle, was significantly decreased compared to the control group (Fig. 3B). Meanwhile, the expression of glutamine synthetase (glns), a protein synthesis-related gene, showed an approximately 4-fold increase in the 4% sodium citrate group compared with the control group. The expression of asparagine synthetase (asns), another protein synthesis-related gene, decreased by nearly half after 4% sodium citrate treatment (Fig. 3B). The protein expressions of t-mTor and t-S6 in the muscle of the 4% sodium citrate group were significantly higher than those of the control, although the protein expressions of p-mTor and p-S6 showed no difference between the two groups (Fig. 3C). Accordingly, the protein expressions of t-AMPK and p-AMPK were significantly decreased, by approximately 50%, in the muscle of the 4% sodium citrate-treated fish compared with the control (Fig. 3C). These results suggest that the 4% sodium citrate treatment could inhibit protein catabolism and lead to protein deposition by activating the mTOR signaling pathway in muscle.

Dietary sodium citrate changed liver inflammatory status

The effect of dietary supplementation of 4% sodium citrate on liver health was also examined (Fig. 4). Serum AST and ALT activities were significantly reduced by 4% sodium citrate treatment compared to the control group (Fig. 4A and B). The histological results showed that the dietary supplementation of sodium citrate resulted in more pronounced vacuolization in the liver cells than in the control group, suggesting more lipid deposition in the liver (Fig. 4C). In addition, no significant signs of tissue damage or inflammation were observed (Fig. 4C).
The expression of the anti-inflammatory-related gene transforming growth factor beta (tgfb) did not differ significantly between the two groups, while the expression of interleukin 10 (il10) in the 4% sodium citrate group was significantly down-regulated (Fig. 4D). On the other hand, the expressions of liver pro-inflammatory-related genes, such as nuclear factor kappa B (nfkb), tumor necrosis factor alpha (tnfa) and il8, were significantly lower in the 4% sodium citrate group than in the control group (Fig. 4E). However, there was a near twofold elevation in il1b mRNA expression in the 4% sodium citrate group compared to the control (Fig. 4E). Given the divergence in pro-inflammatory factor expressions, we further examined the protein expression of p65, an important downstream signaling element of the NF-κB pathway, and found that the 4% sodium citrate treatment significantly activated the phosphorylation of p65 in the liver (Fig. 4F). These results indicate that although dietary supplementation with 4% sodium citrate caused a mild inflammatory response in the liver of Nile tilapia, no significant liver injury was found.

Discussion

[…] in liver damage in tilapia. The present study revealed no significant differences in growth performance and feed utilization of Nile tilapia between the control group and the sodium citrate groups. Romano et al. (2016) and the present study suggest that supplementing sodium citrate to Nile tilapia does not act as a growth promoter, and that the growth-promoting effect of sodium citrate in farmed fish is highly species-specific. In addition, previous studies on citrate have focused on the deposition of metal elements and trace elements in fish, and there is a lack of research on nutrient metabolism (Azari et al., 2021; Hosseini, 2012a, 2012b; Romano et al., 2016; Sarker et al., 2007; Sotoudeh et al., 2020). Therefore, it is important to investigate how sodium citrate affects nutrient metabolism in fish to assess whether sodium citrate can be used as a potential feed additive for aquaculture.

The liver plays an important role in the removal of citrate (Li et al., 2016) by breaking it down into AcCoA as the substrate for lipid synthesis (Mosaoa et al., 2021). In the present study, we found that 4% sodium citrate treatment significantly increased the AcCoA and triglyceride contents in the liver of Nile tilapia. Further analysis found that cpt1a gene and Cpt1a protein expression, the enzyme that normally exerts flux control over mitochondrial fatty acid β-oxidation (Li et al., 2020), were significantly decreased when lipid was accumulated. Malonyl-CoA inhibits the oxidative decomposition of fatty acids through its allosteric inhibition of Cpt1 (Saggerson, 2008). Although malonyl-CoA levels were not examined in the present experiment, we found no decrease in the protein expression of t/p-Acc, which catalyzes the generation of malonyl coenzyme A from AcCoA, in the liver of Nile tilapia after 4% sodium citrate treatment.

[Figure note: Difference from group control: *P < 0.05, **P < 0.01, ***P < 0.001 (independent-samples t test). fasn = fatty acid synthase; m-Srebp1/srebp1 = (mature) sterol regulatory element-binding protein 1; ppara/pparg = peroxisome proliferator-activated receptor α/γ; Acly/acly = ATP-citrate lyase; hsl = hormone-sensitive lipase; atgl = adipose triglyceride lipase; Cpt1a/cpt1a = carnitine palmitoyltransferase 1a; t/p-Acc = total/phospho-acetyl-CoA carboxylase 1.]
These results indicate that 4% sodium citrate treatment could reduce mitochondrial fatty acid β-oxidation in the liver of Nile tilapia. Meanwhile, we also found decreased expression of lipogenesis-related genes such as srebp1, acly, pparg and fasn in the 4% sodium citrate group, along with decreased protein expression of Acly and mSrebp1. Numerous studies have shown that reduced expression of triglyceride synthesis-related genes may be caused by a feedback mechanism of lipid over-accumulation in the liver (Kim et al., 2004; Kreeft et al., 2005; Yan et al., 2015). Therefore, these results illustrate that the sodium citrate-induced lipid accumulation in the liver of Nile tilapia was mainly caused by the inhibition of lipid catabolism rather than the upregulation of lipogenesis (Fig. 5).

In the present study, the whole-body and muscle crude protein contents of Nile tilapia in the 4% sodium citrate group were significantly increased. Dietary sodium citrate supplementation was found to improve the metabolic status and extend lifespan in Drosophila fed a high-fat diet. These effects depend on activated AMPK and inhibited mTOR signaling pathways, suggesting a direct or indirect regulatory relationship between citrate and both pathways (Fan et al., 2021), which has not yet been confirmed in fish. Studies in fish demonstrated that activation of mTor and its downstream target protein S6 promotes protein deposition (Li et al., 2020). In the present experiment, we found that 4% sodium citrate treatment significantly down-regulated the expression of the proteolysis-related gene gdh1 and up-regulated the expression of the protein synthesis gene glns. As expected, we also observed that Nile tilapia fed 4% sodium citrate had significantly higher protein expressions of t-mTor and t-S6 and significantly lower protein expressions of t-Ampk and p-Ampk. The above results suggest that sodium citrate treatment could change the overall muscle protein metabolism by inhibiting catabolism and promoting deposition (Fig. 5).

It has been reported that sodium citrate treatment led to a significant decrease in the whole-body crude lipid content (approximately 0.4%) and whole-body crude protein content (approximately 5%) of hybrid tilapia (Romano et al., 2016). In contrast, we found an increasing trend in whole-body crude lipid content, and significantly higher whole-body and muscle crude protein contents, in Nile tilapia treated with 4% sodium citrate. We hypothesize that this difference could be due to the high percentage of soybean meal (54.87%) in the feed of the hybrid tilapia (Romano et al., 2016), because dietary soybean meal above 22.5% has been reported to cause liver damage and reduced growth in hybrid tilapia (Lin and Luo, 2011). This suggests that the effects of dietary sodium citrate on fish physiological processes could largely depend on the dietary components.

[Figure note: Values are means ± SEM. Difference from group control: *P < 0.05, **P < 0.01, ***P < 0.001 (independent-samples t test). gls2 = glutaminase 2; bckdha = branched-chain keto acid dehydrogenase E1 subunit alpha; gdh1 = glutamate dehydrogenase 1; glns = glutamine synthetase; asns = asparagine synthetase; t/p-S6 = total/phospho-ribosomal protein S6; t/p-mTor = total/phospho-mammalian target of rapamycin; t/p-Ampk = total/phospho-AMP-activated protein kinase.]
Citrate induced hepatic gluconeogenesis and insulin resistance but enhanced pyruvate oxidation for energy supply

Intracellular citrate is an important regulator of energy production, as citrate inhibits and/or induces important strategic enzymes located at the entrance and/or exit of glycolysis, the TCA cycle, gluconeogenesis and fatty acid synthesis (Iacobazzi and Infantino, 2014). In the present study, the serum glucose level was significantly increased in Nile tilapia after 4% sodium citrate treatment, while no significant changes were found in the mRNA expression of glucose transport-related genes (glut2 and glut4). This indicates that 4% sodium citrate treatment did not affect glucose uptake in the liver of Nile tilapia. We next observed that the 4% sodium citrate-treated group of Nile tilapia showed an approximately 7-fold increment in liver gk mRNA expression, as well as a significant enhancement in PFK activity; PFK is considered the rate-limiting enzyme of glycolysis and is critical in determining glycolytic flux (Mor et al., 2011). In addition, glucose tolerance experiments revealed that serum glucose and insulin levels were higher in the 4% sodium citrate group than in the control group after glucose injection. These results suggest that sodium citrate treatment in the basal metabolic state led to glucose intolerance and insulin resistance in Nile tilapia. On the other hand, citrate is a potent allosteric activator of Fbpase, an enzyme that converts fructose-1,6-bisphosphate to fructose 6-phosphate in gluconeogenesis (Wang and Dong, 2019). Although there was no significant change in the mRNA expression of liver fbpase and pepck of Nile tilapia in this experiment, we found a nearly 2.5-fold increase in g6pase mRNA expression, a key enzyme that controls gluconeogenic flux (Wang and Dong, 2019), after 4% sodium citrate treatment. Accordingly, there was also a significant increase in the protein expression of G6pc in the 4% sodium citrate group.

[Fig. 4. Effects of sodium citrate treatment on the liver health of Nile tilapia. (A) Serum AST activity, n = 3; (B) serum ALT activity, n = 3; (C) H&E staining of the liver; (D and E) expression of genes related to inflammation, n = 3; (F) expression of p-NF-κB p65 (Ser536) protein, n = 3. Values are means ± SEM. Difference from group control: *P < 0.05, **P < 0.01, ***P < 0.001 (independent-samples t test). AST = aspartate aminotransferase; ALT = alanine aminotransferase; nfkb = nuclear factor kappa B; tnfa = tumor necrosis factor alpha; il1b = interleukin 1 beta; il8 = interleukin 8; tgfb = transforming growth factor beta; il10 = interleukin 10.]

Meanwhile, liver pyruvate was significantly accumulated, as were the glycogen content and gs mRNA expression (approximately 6-fold increase), while the lactate content was notably decreased by 4% sodium citrate treatment. These results suggest that 4% sodium citrate treatment prominently enhanced gluconeogenesis and hepatic glucose synthesis in the liver of Nile tilapia. In the present study, the protein expressions of p-Pdhe1a and Pdk2, which inactivate Pdh activity, were significantly down-regulated. In addition, the mRNA expressions of the rate-limiting enzymes of the TCA cycle, cs and idh, were significantly increased. These data collectively suggest that the supplementation of 4% sodium citrate to Nile tilapia activated the Pdh complex and enhanced the flux of hepatic pyruvate, after oxidation to AcCoA, into the TCA cycle, indicating an enhanced TCA cycle.
Citrate treatment caused inflammation in the liver

The development of inflammation has been regarded as one of the main causes of systemic insulin resistance and severe lipid accumulation in the liver (Cao et al., 2020). Although the reduced serum AST and ALT activities in the 4% sodium citrate group indicate that the liver was not significantly damaged during the feeding trial, 4% sodium citrate treatment resulted in a significant decrease in the expression of the pro-inflammatory-related genes tnfa and il8 in the liver of Nile tilapia, while the gene expression of il1b was elevated. Recently, a close relationship between citrate and inflammation has been reported (Branco et al., 2021; Liu et al., 2022; Liu et al., 2022). Under normal conditions, dietary supplementation of citrate leads to polarization of mouse liver macrophages toward the M1 type and promotes inflammation (Branco et al., 2021). On the other hand, blocking the citrate shuttle can alleviate inflammation by reshaping cellular metabolic patterns (Liu et al., 2022) and blocking inflammatory signaling pathways (Liu et al., 2022). Meanwhile, the significant activation of hepatic p65 phosphorylation in the sodium citrate-treated group reinforced the conviction that 4% sodium citrate treatment could induce liver inflammation in Nile tilapia. However, compared to the control group, no significant tissue damage or cellular infiltration was found in the liver sections of the 4% sodium citrate group. These results imply that 4% dietary sodium citrate supplementation could induce inflammation signals in the liver of Nile tilapia, but did not lead to liver injury.

Conclusion

The results of this study showed that sodium citrate (4%) promoted lipid and protein deposition in Nile tilapia by inhibiting their catabolism but had no effect on growth performance. Sodium citrate treatment was able to enhance both hepatic glycolysis and the entry of pyruvate into the TCA cycle for catabolic energy supply. Although no significant liver injury was found in the sodium citrate-fed fish, dietary sodium citrate could induce hepatic gluconeogenesis and systemic insulin resistance and activate inflammatory signaling pathways. The present study suggests that, at least in tilapia, dietary sodium citrate has a double-edged effect in fish, with the positive aspect of promoting nutrient deposition and the negative aspect of causing hyperglycemia and insulin resistance (Fig. 5). Our study brings novel information for understanding the roles of citrate in nutrient metabolism in fish and is helpful for the precise application of citrate in aquafeed formulation.

Author contributions

Zhen-Yu Du, Yuan Luo, and Jun-Xian Wang: conceived the study and designed the experiments. Jun-Xian Wang: carried out the experiments, analyzed the data, and wrote the manuscript. Fang Qiao, Li-Qiao Chen and Mei-Ling Zhang: contributed reagents/materials/analysis tools. Zhen-Yu Du, Yuan Luo, and Jun-Xian Wang: wrote and revised the manuscript. All authors read and approved the final manuscript.

Availability of data and material

The datasets and materials used in this study are available from the corresponding author on reasonable request.

Declaration of competing interest

We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and there is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the content of this paper.
MONITORING AND ENERGY MANAGEMENT APPROACH FOR A FUEL CELL HYBRID ELECTRIC VEHICLE

Recently, the reduction of fuel consumption has become a global challenge, driving significant investment in the automotive sector to optimize and control the parameters involved in the partial or total electrification of vehicles. The energy management system thus remains a key axis of progress for the development of fuel cell hybrid electric vehicles. The fuzzy controller has been widely adopted for energy monitoring, but the determination of its parameters is still challenging. In this work, this problem is investigated through a secondary development of a fuzzy energy monitoring system based on the Advisor platform and particle swarm optimization. The latter is used to determine, for different driving conditions, the best parameters that increase the fuel economy and reduce the battery energy use. As a result, five tuned fuzzy energy monitoring system models with five sets of parameters are obtained. Evaluation results confirm the effectiveness of this strategy; they also show slight differences among the models in terms of fuel economy, battery state of charge variations, and overall system efficiency. However, the fuzzy energy monitoring system tuned under multiple conditions is the only one that can guarantee the minimum of state of charge variations, no matter the driving conditions.

INTRODUCTION

Despite the predominance of the conventional car in the transportation industry, the number of eco-friendly cars manufactured has recently become more significant. The fuel cell hybrid electric vehicle (FCHEV) is one of the best candidates to lead the future automotive market. Besides the zero CO2 emission of FCHEVs, they have many advantages that make them more competitive. Even though they are still in need of improvement, they are more efficient than conventional vehicles, they have a short refueling time, and they have a large driving range [16]. The FCHEV requires a fuel cell as the main power source and an additional storage source. It also requires an energy management system (EMS) that plays a crucial role in controlling the power flow between sources in order to ensure the vehicle's performance; the EMS has a major impact on improving the fuel economy [13]. Numerous studies focusing on these strategies for fuel cell hybrid systems have been reported in the energy storage literature. Recently, in 2020, Cao Y. et al. in [2] realized a hybrid PV/fuel cell system optimized by metaheuristic techniques for the electric supply of a wind turbine, and Radaideh M. I. et al. in [14] optimized the design of hybrid fuel cell systems under operating constraints for energy production and cooling purposes. Also, in 2019, Wang Y. et al. in [22] proposed an energy management system for a hybrid fuel cell/battery vehicle, integrating the degradation of the fuel cell and the battery into the maintenance aspect of this type of system. Likewise, Guo J. et al. in [5] proposed a modern method of real-time energy management for a hybrid electric vehicle; in the same direction, Mehta R. et al. in [10] realized an intelligent system for the energy management of electric vehicles, Wu X. et al. in [24] applied an optimal control strategy for the energy management of electric vehicles, and Bhatti A. R. et al. in [1] carried out a techno-economic study based on inference rules for the energy management of electric vehicles at constant price using a photovoltaic network system.
Regarding the studies of energy management systems, they can be classified into two main categories: rule-based and optimization-based. The EMS based on fuzzy logic control (FLC) is one of the most popular methods and has attracted many researchers [7, 9, 11, 25-26]. It has many advantages, being simple, robust, and flexible when the model is nonlinear. However, this strategy cannot be efficient if the driving profile is not known in advance; therefore, it cannot deliver optimal results in practice. In addition, most research results are based on simulations, and implementation on hardware is still lacking [18]. Li Q. et al. [9] implemented an FLC model in the Advisor platform. They proposed two structures, FC/BAT and FC/BAT/SC, in order to increase the fuel economy and the mileage, considering driving modes to design the fuzzy parameters. Results show better performance for these two proposed structures than the power tracking controller (PTC_ADV) embedded in the Advisor software. Hemi et al. [6] propose three configurations: FC/BAT, FC/SC and FC/BAT/SC. These models are implemented in a modified Simulink model proposed by Tremblay et al. [21]. They are evaluated on real-time driving cycles, and simulation results show that the proposed models satisfy the power requirement. The last configuration enables fast charging and discharging, which improves the battery lifespan. Zhang G. et al. [25] propose a fuzzy EMS for an FC/BAT hybrid locomotive based on the Advisor software. They use the Advisor auto-optimization method to optimize the size of the FC and the acid battery in order to improve their efficiencies and the fuel economy, and they use a fuzzy model to control the power flow between these sources. A comparison of the developed models with the power following strategy (PFS) under a real operating subway driving cycle shows an improvement of fuel economy and dynamic priorities, and a large increase of FC efficiency. However, the battery's efficiency, as well as that of other components, decreases in comparison with PFS. In these previous studies, the selection of the fuzzy parameters receives little attention. Yet, due to the high number of fuzzy parameters, selecting the best parameters of the controller by trial and error is very problematic [11]. Much research focuses on this subject. Chun-Yan L. et al. [4] propose an optimized fuzzy EMS for an FC/BAT hybrid vehicle, where the fuzzy EMS parameters are optimized by the DIRECT algorithm to maximize the efficiency of the fuel cell under three profiles. Results show that these parameters are close, and the authors conclude that the proposed controller can be adopted under real driving conditions. Caux S. et al. [3] present a fuzzy-based EMS whose parameters are tuned by a genetic algorithm (GA) to maximize the energy economy; the obtained results are better than those of dynamic programming (DP), which is considered a reference. In order to validate these results for real driving profiles, the authors apply the optimized parameters obtained for each driving profile to another profile. They conclude that the given results are to some extent acceptable, and they attribute this to the similarity of the harmonic content of the power profiles. Odeim F. et al. [12] propose two real-time EMSs: a PI controller based on Pontryagin's minimum principle (PMP) with three parameters, and a fuzzy controller.
They use GA to optimize its ten parameters to minimize the fuel consumption while maintaining the state of charge (SOC) deviation; six driving cycles are engaged in the optimization process to improve robustness under real-time conditions. Results show that the PI strategy outperforms the fuzzy strategy. Ravey A. et al. [15] apply GA to define the set of fuzzy parameters so as to reduce the fuel consumption and keep the SOC of the lead-acid batteries at the end of the cycle equal to the initial SOC; the optimization is processed under one driving cycle. Simulation results are close to the DP strategy results; however, they are lower when this strategy is applied in a real FCHEV. Kandidayeni M. et al. [7] propose an FLC for an FCHEV. They use particle swarm optimization (PSO) to find the optimal fuel cell and battery sizes, and PSO is used again to define the optimal fuzzy parameters over different traffic patterns to reduce the fuel consumption. The optimization results are compared with GA optimization results, and this optimal controller keeps the fuel cell working at high efficiency. At a second stage, in order to be applicable to different conditions, the fuzzy controller is tuned over the TEH-CAR driving cycle to attain the optimal fuel consumption. This study shows good results in terms of fuel consumption, although the SOC results are not shown in that work. Zhang R. et al. [26] develop an EMS for an FC/SC vehicle based on a fuzzy system, where the fuzzy parameters are tuned by GA to minimize the fuel consumption and the fuel cell current fluctuations. The NEDC cycle, as it contains both highway and urban profiles, is used in the optimization process. This bi-objective problem is converted to a single objective by using the weighted sum of objectives (WSO) method. Results over many driving profiles show a reduction of the current fluctuations and the fuel consumption. In this work, an energy management and monitoring strategy is presented, based on a first-order Sugeno fuzzy system whose membership functions and weights are selected by PSO. The main objectives of this development are to tune the fuzzy EMS under different conditions to maximize the fuel economy in miles per gallon (MPG) and keep the SOC variation at the end of the trip in the lowest range, while preserving the vehicle performance. The fuzzy EMS model is developed and implemented in the Advisor simulation software. Four driving cycles are engaged individually and jointly in the optimization process; therefore, five tuned fuzzy EMSs are obtained and evaluated under the considered driving cycles. In this paper, the second section first gives an overview of the proposed approach, followed by a description of the FCHEV model in Advisor. The third part is dedicated to the development of the fuzzy EMS and the optimization process. The results are then discussed, and a conclusion with future work is presented in the last section. ENERGY MANAGEMENT AND MONITORING SYSTEM The electric vehicle is emerging as a strategic environmental solution to tackle one of the biggest energy challenges. However, it requires an energy storage system in order to improve the environmental performance of urban areas.
The state of charge of the battery (SOC) depends on its state of aging and is given by the following equation [1]: $SOC(t) = SOC_0 - \frac{1}{C_{nom}} \int_{t_0}^{t} i(\tau)\,d\tau$, where $SOC_0$ is the initial state charged to 100%, $i$ is the current flowing through the battery, and $C_{nom}$ is the nominal battery capacity. The SOC is expressed as a percentage, and it is referenced to 100% when the charging current has not changed for two hours ($t_0$), for charging at constant voltage and constant temperature. In a standardized way, it most often corresponds to the ratio between the residual capacity and the current nominal battery capacity. Also, the state of health of the battery (SOH) makes it possible to define its aging; it is an indicator that quantifies the reduction in performance due to the degradation of its capacity or the increase in its internal resistance, and it can be determined with the following formula: $SOH = \frac{C_{max}}{C_{nom}} \times 100\%$, where $C_{max}$ is the currently available maximum capacity. System description As previously mentioned, this research focuses on the selection of the fuzzy inference variables, on how they are impacted by the driving conditions, and on how they impact the fuel consumption and the battery energy use. Fig. 1 presents an overview of the proposed methodology: within the Advisor environment, we develop and implement a fuzzy energy management system to split the power between the fuel cell and the battery of the FCHEV. Advisor provides a powerful tool for fast design and analysis of different powertrains [23]. The optimization tool PSO is introduced to interact with the FCHEV model. The PSO is responsible for monitoring the fuzzy variables for different driving conditions; the objective function is designed to give the optimal fuel economy while limiting the battery state of charge variation and ensuring the vehicle's dynamic performance [17,19]. Recently, the main axis of progress for the development of electric vehicles has been the development of on-board energy storage systems, providing solutions for improving the autonomy, mass, and lifespan of this type of system. Over the driving cycle used by the vehicle, the objective amounts to minimizing the discharge of the battery relative to its maximum state of charge, weighted by the associated energy cost. However, electric vehicles are equipped with several electronic components and are exposed to direct electrical risks, even in the event of a breakdown. A good monitoring strategy, based on the control of their energy variables, is necessary to eliminate or minimize breakdowns and the presence of failures. The implementation of such objectives requires detailed studies in order to define, optimize, and control the parameters involved. In this work, the best parameters for each case are saved in Advisor for assessment. Fuel cell battery model The FCHEV, shown in Fig. 2, is mainly composed of the traction motor, the fuel cell (PEM), the battery pack, the DC/DC power converter, and the EMS [23]. Fig. 2. FCHEV configuration in Advisor. The traction motor provides the propulsion power of the vehicle; the size of the motor should be selected by calculating the maximum power required from the equation $P_{req} = \frac{v}{\eta_t}\left(m g f_r \cos\alpha + m g \sin\alpha + \frac{1}{2}\rho C_d A_f v^2 + \delta m \frac{dv}{dt}\right)$, where $P_{req}$, $v$, $\eta_t$, $m$, $g$, $f_r$, $\alpha$, $\rho$, $C_d$, $A_f$, and $\delta$ are respectively the required power, the vehicle speed, the transmission efficiency, the mass of the vehicle, the acceleration due to gravity, the rolling resistance coefficient, the slope angle of the road, the density of air, the aerodynamic drag coefficient, the cross-sectional area of the vehicle, and the rotational inertia factor.
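To make the two reconstructed formulas above concrete, here is a minimal Python sketch (ours, not from the paper) that evaluates the required traction power along a toy speed profile and tracks the battery SOC by coulomb counting; all parameter values are illustrative assumptions rather than the Advisor vehicle data.

```python
import numpy as np

# Illustrative vehicle parameters (assumed, not the paper's Advisor data)
m, g, f_r = 1380.0, 9.81, 0.009        # mass [kg], gravity [m/s^2], rolling coeff.
rho, cd, area = 1.2, 0.335, 2.0        # air density [kg/m^3], drag coeff., frontal area [m^2]
eta_t, delta, alpha = 0.92, 1.04, 0.0  # transmission eff., inertia factor, road slope [rad]

def required_power(v, dv_dt):
    """Traction power [W] from the longitudinal-dynamics equation."""
    force = (m * g * f_r * np.cos(alpha)       # rolling resistance
             + m * g * np.sin(alpha)           # grade resistance
             + 0.5 * rho * cd * area * v ** 2  # aerodynamic drag
             + delta * m * dv_dt)              # acceleration term (rotational inertia included)
    return force * v / eta_t

def soc_coulomb_counting(current, dt, soc0=1.0, c_nom_ah=26.0):
    """SOC(t) = SOC0 - (1/C_nom) * integral of i(t) dt (discharge current positive)."""
    charge_ah = np.cumsum(current) * dt / 3600.0  # Ah drawn from the battery
    return soc0 - charge_ah / c_nom_ah

# Toy speed profile: accelerate to 15 m/s over 30 s, then cruise
t = np.arange(0.0, 60.0, 1.0)
v = np.clip(0.5 * t, 0.0, 15.0)
p_req = required_power(v, np.gradient(v, t))
print(f"peak required power: {p_req.max() / 1000:.1f} kW")

i_batt = np.full_like(t, 40.0)                    # assumed constant 40 A discharge
soc = soc_coulomb_counting(i_batt, dt=1.0)
print(f"SOC after {t[-1]:.0f} s: {soc[-1] * 100:.1f} %")
```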
The energy consumed is obtained by integrating the power of the vehicle with respect to the travel time ($t_{cy}$): $E = \int_{0}^{t_{cy}} P_{req}(t)\,dt$. The power and energy consumption of an FCHEV depend on the driving cycle, but also on the characteristics of the vehicle in terms of weight, volume, and aerodynamic drag coefficient [12]. The fuel cell is the principal source, providing most of the power required by the traction motor; the battery, as an additional source, assists the fuel cell to improve its performance and stores the braking energy, which increases the system efficiency. The specifications of the FCHEV in Advisor used in this paper are summarized in Table 1 [20]. It should be noted that the sizing of the components has not been considered in this study. Fuzzy logic model Fuzzy logic has been commonly used as a controller in energy management systems. It has many advantages: it is based on If-Then rules and membership functions that encode human expertise, it does not require an exact model of the system, and it has an inherent robustness [19]. The proposed EMS model is a first-order Sugeno fuzzy system with two inputs and one output. The input variables are the required propulsion power $P_{req}$ and the battery SOC, and the output is the fuel cell power $P_{FC}$, computed as the weighted average of the rule consequents, $P_{FC} = \sum_{i=1}^{N} w_i f_i / \sum_{i=1}^{N} w_i$, where $N$ is the number of rules. The model is set up to be compatible with Advisor blocks, and it can be embedded into the hybrid vehicle model as shown in Fig. 3. To make the model work properly and load from the setup screen, the following steps are necessary: Step 1: create the proposed model with Simulink blocks and match the inputs and the outputs to those of the existing control models. Step 2: create a fuzzy system with the interpreted Simulink Matlab function block. Step 3: unlock the Advisor control library to log the new subsystem. Step 4: create a new file in the Advisor control folder, including the name of the new controller and all powertrain control parameters (gearbox, clutch, hybrid, and engine controls). Step 5: modify the existing Advisor function by adding the new controller to the block choice list; this function is responsible for setting all the configurable subsystems to their proper choices for the current block diagram [20]. Optimization of the model by particle swarm As previously stated, determining the fuzzy parameters by trial and error so as to improve the EMS performance is problematic; using optimization tools is more effective [18]. Many researchers have introduced such tools to optimize the fuzzy parameters, notably the genetic algorithm (GA) and PSO. Here, PSO is chosen to tune the fuzzy parameters, as it is extremely simple and powerful [8]. Problem formulation The average fuel economy, given in MPG, is chosen as the objective function to be maximized; it is defined as the total distance traveled in miles divided by the fuel consumed in gallons. To satisfy the vehicle performance requirements, PSO must evaluate the objective function together with constraints. PSO requires the prior setting of several monitoring parameters depending on the problem considered; hence, the performance of PSO has a strong correlation with the adjustment of these parameters. The mathematical model of PSO is simple: at every iteration, the position and the velocity of every particle are updated according to a simple mechanism. The particle moves toward a new position by using all these vectors.
The new position is obtained by applying the updated velocity. The mathematical model of the motion of particles in PSO can be described as follows: $v_i(t+1) = w\,v_i(t) + c_1 r_1 \left(p_i(t) - x_i(t)\right) + c_2 r_2 \left(g(t) - x_i(t)\right)$ and $x_i(t+1) = x_i(t) + v_i(t+1)$, where $i$ is the index of the particle, $v_i(t)$ is the velocity of particle $i$ at time $t$, $x_i(t)$ is the position of particle $i$ at time $t$, $w$ is the inertia weight, $c_1$ and $c_2$ are the acceleration coefficients, $r_1$ and $r_2$ are random numbers in $[0,1]$, $p_i(t)$ is the individual best candidate solution for particle $i$ at time $t$, and $g(t)$ is the swarm's global best candidate solution at time $t$. In this work, the behavioral influence of each parameter is analyzed using PSO in order to determine the optimal set of parameters. This requires a means to measure the quality of each candidate solution while seeking the optimal value. The fitness function takes the form $F(x) = f(x) + \sum_i \lambda_i \max(0, g_i(x))$, where $f(x)$ is the objective function (taken as negative so that maximizing MPG becomes a minimization) and $g_i(x)$ is a group of nonlinear inequality constraints evaluated by the penalty technique. Two constraints are considered: the difference between the state of charge at the start and at the end of the cycle, $|SOC_{start} - SOC_{end}|$, has to be at most 0.005, and the difference between the available speed and the required speed must be at most 0.62 m/h. The SOC is determined from the battery current as $SOC(t) = SOC(t_0) - \frac{1}{C_{rated}} \int (I + I_{loss})\,dt$, where $I$ is the battery current, $I_{loss}$ is the consumed loss current, and $C_{rated}$ is the rated capacity. Optimization under multiple driving cycles As mentioned earlier, optimal parameters selected under one driving profile may not be optimal under another one, or may even degrade the vehicle performance in the worst cases. In this experiment, we attempt to select the best parameters that can be applied to many driving conditions. To achieve this goal, the objective function is the sum of the objective functions over the $n$ driving cycles, and the fitness function becomes $F_{multi}(x) = \sum_{k=1}^{n} F_k(x)$. Fuzzy inference variables selection To find the best fuzzy parameters that maximize the fitness function, 32 parameters are encoded in each particle of the optimization process, representing the fuzzy membership functions and the weights: (P1 to P4), (P5 to P12), and (P13 to P17) represent the input SOC, the input $P_{req}$, and the output $P_{FC}$, respectively, with a trapezoidal distribution of MFs. The weights are coded into 15 parameters (P18 to P32). In each iteration, each particle represents a potential solution and is evaluated on a test drive cycle with the objective function. Driving cycles selection It is well known that the driving condition is one of the factors that has a significant impact on the vehicle performance. Four standard driving cycles are selected in this study for optimization and evaluation of the model; they describe different conditions, speeds, and accelerations. The UDDS describes driving in the city; it has low speeds and long stop periods. The NEDC consists of four repeated urban sequences and one extra-urban cycle. The HWFET has high speeds and no stop periods throughout the cycle. The WLTP, which recently replaced the NEDC for approval testing of light-duty vehicles in Europe, includes urban, extra-urban, and highway driving sequences. Curves, histograms, and some statistical features are shown in Figs. 4-5 and Table 3, respectively. These driving cycles are commonly used for official fuel economy tests. The first three are available in the Advisor library; hence, the WLTP class 3a is selected and added to the Advisor library as well.
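To make the PSO mechanics above concrete, the following Python sketch implements the reconstructed update equations with a penalty-based fitness. It is only a schematic stand-in: in the paper each particle is evaluated by an Advisor drive-cycle simulation, which is replaced here by a hypothetical placeholder function `evaluate_mpg_and_constraints`; all coefficient values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_dims, n_iters = 30, 32, 100  # 32 dims: MF breakpoints + rule weights
w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration coefficients

def evaluate_mpg_and_constraints(params):
    """Hypothetical placeholder: a real implementation would run the Advisor
    drive-cycle simulation with the fuzzy EMS parametrized by `params` and
    return (mpg, [constraint values])."""
    mpg = -np.sum((params - 0.5) ** 2)        # dummy surrogate objective
    g = [abs(params[0] - params[1]) - 0.005]  # e.g. |SOC_start - SOC_end| <= 0.005
    return mpg, g

def fitness(params, penalty=1e3):
    mpg, g = evaluate_mpg_and_constraints(params)
    # Minimize f(x) = -MPG plus penalties for violated inequality constraints
    return -mpg + penalty * sum(max(0.0, gi) for gi in g)

x = rng.random((n_particles, n_dims))  # positions in [0, 1]
v = np.zeros_like(x)
p_best, p_val = x.copy(), np.array([fitness(p) for p in x])
g_best = p_best[p_val.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = np.clip(x + v, 0.0, 1.0)
    vals = np.array([fitness(p) for p in x])
    improved = vals < p_val
    p_best[improved], p_val[improved] = x[improved], vals[improved]
    g_best = p_best[p_val.argmin()].copy()

print("best fitness:", p_val.min())
```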
Optimization results The proposed algorithm is implemented and executed in Matlab. The results of the fitness function for each driving cycle and for multiple driving cycles are given in Fig. 6. Table 4 summarizes the optimal variables of the inputs, the output, and the weights for the five tuned fuzzy EMSs obtained after optimization. Figs. 7 and 8 respectively show the tuned inputs and the control surfaces of the five obtained models. Results and discussions The obtained models are tested in the Advisor software over the same driving cycles with the same initial conditions. For comparison, the FCHEV with the embedded PTC_ADV EMS, which is based on a thermostat strategy, is also simulated. Figs. 9-12 respectively present the power distribution over the UDDS, NEDC, HWFET, and WLTP cycles for all models. As shown in these figures, at the beginning the battery completely provides the power to the motor because of the slow response of the fuel cell. The fuel cell power response time for all the tuned fuzzy EMSs (FUZ.TUN) is faster than for the PTC.ADV; this is due to the fact that the required power is not accounted for in the thermostatic strategy. The battery also assists the fuel cell during the high power demand at each acceleration. The battery absorbs energy from the fuel cell during low power demand, and it also absorbs the negative energy from the motor during the braking phases. Table 5 summarizes the fuel economy (MPG), the standard deviation of the SOC (SD SOC), the ΔSOC, and the overall efficiency of the whole system for the different models. The overall efficiency is calculated by the following formula [20]: $\eta_{overall} = \frac{E_{Aero} + E_{Rolling}}{E_{Fuel\_in} + E_{ess\_stored}}$, where $E_{Aero}$ is the loss due to aerodynamic drag on the vehicle in kilojoules, $E_{Rolling}$ is the total energy required for the vehicle to overcome the rolling resistance, $E_{Fuel\_in}$ is the total energy delivered by the fuel cell over the drive cycle, and $E_{ess\_stored}$ is the useful energy leaving the batteries over the drive cycle. To avoid the effect of the initial SOC on the fuel economy calculation, Advisor provides a tool to correct the fuel economy within a tolerance that can be specified by the user; the tolerance is chosen here to be identical to the SOC constraint. CONCLUSION In this work, an EMS based on a first-order Sugeno fuzzy system for an FCHEV was developed, and PSO was used for the monitoring and optimization of the fuzzy parameters under different conditions in order to find the best sets that achieve the best fuel economy while maintaining the battery SOC. This study compared the results issued from optimization under a particular condition with the results issued from optimization under multiple conditions. Results showed that the fuel economy improved in comparison with the PTC.ADV under all conditions, as did the overall efficiency in most conditions. They also showed that tuning the fuzzy EMS under one condition cannot guarantee the same performance in terms of battery SOC when it is tested under another condition. However, if it is tuned under multiple conditions, it can achieve a good fuel economy with smooth SOC variations and a low ΔSOC, which are beneficial for extending the battery life. In future work, a multi-objective particle swarm optimization (MOPSO) will be applied to the proposed model with consideration of the sources' sizing. As this methodology was successful for the considered driving conditions, more driving profiles will be involved in the optimization process.
Moreover, a real implementation to validate the fuzzy EMS controller will be performed. Benali TIFOUR is a PhD student at the University of Djelfa. His research interests are energy management systems, power systems, fuzzy logic, optimization, and embedded systems design. Moussa BOUKHNIFER is an Associate Professor HDR (SMIEEE) at Lorraine University (France). His main research interests are focused on energy management, diagnosis, and FTC control with applications to electrical and autonomous systems. Ahmed HAFAIFA is a Professor at the University of Djelfa. His research interests include modelling and control of industrial systems, diagnosis and reliability engineering, fault detection and isolation in industrial processes, and intelligent systems based on fuzzy logic and neural networks. Camel TANOUGAST is a Professor at the University of Lorraine, France. His interests include reconfigurable systems and NoCs, design and implementation of real-time processing architectures, computer vision, image processing, cryptography, and Digital Television Broadcast.
Optical Properties Comparison of Carbon Nanodots Synthesized from Kangkung (Ipomoea aquatica) with Deep Frying and Roasting Techniques Carbon nanodots (Cdots) have many unique properties, such as luminescence, that can be utilized in various fields. The purposes of this study are to synthesize Cdots from kangkung (Ipomoea aquatica) through frying and roasting techniques and to compare the optical properties of the Cdots using UV-Vis, PL, and FTIR. Three stages of the synthesizing process of Cdots were carried out, i.e., preparation of kangkung powder, synthesis by frying or roasting, and characterization. I. INTRODUCTION One of the popular technologies being developed in the 21st century by several countries around the world is nanotechnology. Nanotechnology is the engineering or creation of materials, functional structures, and devices on nanometer scales [1], i.e., 1 nm to 100 nm. This trend is believed to be in line with the increasing awareness of society with regard to environmentally friendly technology, namely the higher expectation of society for eco-friendly or green product commodities, to ensure that products are good for humans and the environment [2]. One product produced through nanotechnology is carbon nanodots (Cdots) [3]. Cdots are made of carbon elements that are widely available and generally non-toxic in nature [4]. Some characteristics of Cdots are a size of 1 nm to 10 nm, an amorphous structure, and a spherical shape [5,6]. The discovery of Cdots has become a popular topic, and they have been widely studied around the world due to their unique size and benefits [7]. Cdots have the potential to replace semiconductor quantum dots and can also be used in a variety of applications such as biomedical imaging, analytic detection, full-color displays, and light-emitting devices [8,9]. One prominent application of Cdots is in biology, because Cdots can be made of organic materials that are easy to find and available in nature [10]. At the moment, Cdots based on environmentally friendly technology are still under development, especially in Indonesia [11]. As Indonesia has diverse organic materials from plants and animals, it is believed to have huge potential as a source of carbon for Cdots. In previous studies, carbon sources that have been used for synthesizing Cdots include candle soot [12], tobacco leaf [13], orange peel [14], banana juice [15], and so forth. Despite the fact that several carbon sources have already been used to produce Cdots, as mentioned above, there is still a need to synthesize Cdots from other carbon sources to enrich the possibilities of producing Cdots. In this study, kangkung plants (Ipomoea aquatica) are used as the alternative carbon source of Cdots. Kangkung plants have roots, stems, fruits, flowers, and seeds. Kangkung plants are popular in tropical countries like Indonesia as green vegetables because they have a chlorophyll structure [16]. In addition, kangkung plants are easy to obtain, and the utilization of this plant in the area of technology needs to be explored. Furthermore, kangkung plants have never been utilized as an alternative carbon source for Cdots material. Innovation in the synthesis of Cdots is expected to produce Cdots materials with superior, efficient, and satisfying properties [17]. The innovation comes mainly from the methods used in producing the Cdots. Different synthesis methods can produce different amounts of carbon, oxygen, nitrogen, and other functional properties of the Cdots. In addition, they cause differences in the carbogenic core and surface structures of the Cdots [18].
To date, the simplest methods used in the synthesis of Cdots are hydrothermal and microwave methods [1]. These methods can produce different characteristics of the Cdots. Therefore, it is interesting to study new methods for the synthesis of Cdots using frying [19] and roasting. These methods are simple, effective, and inexpensive, and can produce Cdots on a large scale; they do not require sophisticated equipment or hazardous chemicals during the synthesizing process. Hence, the objectives of this study are to give information about preparing, synthesizing, and characterizing Cdots made of kangkung plants through frying and roasting techniques. The Cdots are characterized using a UV-Vis spectrophotometer, photoluminescence (PL), and Fourier Transform Infrared spectroscopy (FTIR) to compare the optical properties of the Cdots obtained by the frying and roasting techniques. II. METHOD Preparation of kangkung powder The preparation of kangkung powder was started by cleaning the plants and separating the parts of the plants into roots, stems, and leaves. Then, each part of the kangkung was heated in an oven at 250 °C for 2 hours and mashed using a mortar into powder. The kangkung powder consists of stem powder, root powder, and leaf powder. Synthesis of Cdots by frying technique The synthesis of Cdots by the frying technique was done by frying 15 g of stem powder in 120 ml of cooking oil for 5 minutes at 88 °C. Then, the sample was filtered using filter paper number 40 to separate the solution from the remaining powder. 5 ml of the solution was mixed with 30 ml of n-hexane and stirred until homogeneous. The same procedures were conducted for the leaf and root powders. Synthesis of Cdots by roasting technique The synthesis of Cdots by the roasting technique was done by roasting 15 g of stem powder for 5 minutes. The roasted stem powder was then mixed with 120 ml of distilled water, stirred until homogeneous, and filtered to separate the solution from the remaining powder. The same procedures were then conducted for the leaf and root powders. UV-Vis spectrophotometer The UV-Vis spectrophotometer was used to determine the wavelength of the Cdots at the maximum absorbance peak. The range of wavelengths was selected from 200 nm to 800 nm. This characterization was done by preparing 5 ml of each sample solution. n-hexane and distilled water were used as blank solutions in this characterization. Photoluminescence (PL) The PL characterization was used to determine the emission of the Cdots. The result of this characterization shows the wavelength of the emission at maximum intensity. This characterization was done by preparing 5 ml of each sample solution; no blank solution is needed. Fourier Transform Infrared (FTIR) FTIR was used to determine the functional groups contained in the Cdots solutions. The result of this characterization is a graph of % transmittance vs. wave number; the wave number indicates the functional groups contained in the sample. This characterization was done by testing the Cdots solution as the sample.
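As a simple illustration of the UV-Vis processing step described above, the sketch below locates the wavelength of the maximum absorbance peak in a spectrum. The spectrum here is a synthetic stand-in (a Gaussian near 295 nm, chosen only for illustration); real input would be the measured 200-800 nm absorbance arrays.

```python
import numpy as np

# Synthetic stand-in for a measured spectrum: a Gaussian peak near 295 nm
wavelength = np.arange(200.0, 801.0, 1.0)  # nm, the scanned range
absorbance = 0.8 * np.exp(-((wavelength - 295.0) / 18.0) ** 2) + 0.02

def peak_wavelength(wl, ab):
    """Return (wavelength, absorbance) at the maximum absorbance."""
    i = np.argmax(ab)
    return wl[i], ab[i]

wl_max, ab_max = peak_wavelength(wavelength, absorbance)
print(f"absorbance peak at {wl_max:.0f} nm (A = {ab_max:.2f})")
```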
III. RESULTS AND DISCUSSION Cdots have been synthesized from kangkung with two different techniques, i.e., roasting and frying. The Cdots solutions obtained by the frying and roasting techniques are shown in Figures 1 and 2, respectively. Based on Figure 1, the Cdots solution from the stem of kangkung obtained by the frying technique is turbid. This indicates that more Cdots are formed from the stem of kangkung than from the other parts, which give clear solutions [1]. Moreover, the Cdots solutions obtained by the frying technique are generally clearer than the Cdots solutions made by the roasting technique. It can be observed that the Cdots solutions obtained by the roasting technique are transparent brown solutions (Figure 2). Therefore, it may be deduced that different techniques for producing Cdots produce solutions of different colors. This may be affected by the amount of Cdots contained in the samples; it also indicates that the roasting technique produces the most Cdots. In this study, the Cdots solutions are characterized using UV-Vis. The UV-Vis results show the absorbance patterns of the Cdots obtained by the frying technique, presented in Figure 3. The Cdots samples from the leaf, stem, and root each have one absorbance peak, in the range of 293-296 nm. This is in accordance with studies reporting that Cdots generally show optical absorption at UV wavelengths with a tail extending into the visible region [15,20]. Furthermore, the peaks indicate the core of the Cdots [21]. Based on Figure 3, the stem of kangkung provides the highest absorbance peak. The higher the absorbance value, the more Cdots are produced, indicating that the highest production of Cdots is from the stem of the kangkung plants. The results of the UV-Vis test for the Cdots obtained by the roasting technique are shown in Figure 4. The Cdots samples from the leaf, stem, and root each have one absorbance peak, in the range of 262-282 nm. This is in accordance with the study reporting the absorbance peak of Cdots at 260-360 nm [18]. As in the previous result, the samples have one absorbance peak that reveals the core of the Cdots [21]. Moreover, the root of kangkung has the highest absorbance peak, which shows that the Cdots produced via the roasting technique mostly come from the root of the kangkung plants. The frying and roasting techniques produce Cdots solutions with one absorbance peak in different ranges of wavelength: the absorbance peaks of the Cdots via frying and roasting are obtained at 293-296 nm and 262-282 nm, respectively, so the Cdots obtained by the frying technique have a longer wavelength at the absorbance peak than the Cdots obtained by the roasting technique. The absorbance peak is attributed to the π→π* electronic transitions, or excitation of the core of the Cdots [15,21]. The next characterization uses PL to determine the emission of the Cdots. The PL detects the electronic transitions from the excited to the ground states and shows them on a graph of intensity vs.
emission wavelength. The PL result for the Cdots obtained by the frying technique is presented in Figure 5. The samples of Cdots obtained by the frying technique have two intensity peaks, in the ranges of 674-677 nm and 500-510 nm, which means that the Cdots solutions emit red and green colors, respectively [15,22]. According to Li et al. [22], the red emission indicates porphyrin structures contained in the sample, which correspond to the surface state of the Cdots, and the green emission indicates the core of the Cdots. Based on Figure 5, the Cdots made of the stem of kangkung have the highest intensity peak of emission, showing that the electronic transitions from the excited to the ground states are mostly produced by the stem of kangkung [15]. This is in accordance with the results of the UV-Vis tests. The results of the PL characterization for the samples of Cdots obtained by the roasting technique are presented in Figure 6. The Cdots have an intensity peak in the range of 511-519 nm, so these Cdots emit a green color [15,21,23,24]. Based on Figure 6, the highest intensity peak of emission is obtained by the Cdots from the leaf of the kangkung plants, which indicates that the particles of Cdots from the leaf mostly undergo the transition from the excited to the ground states [15]. The frying and roasting techniques produce different color emissions: the Cdots produced by the frying technique emit red and green colors, while the Cdots produced by the roasting technique only emit a green color. Different synthesis methods produce Cdots with different characteristics; the difference in procedure brings forth different Cdots properties, one of which is the luminescence color. The final characterization uses FTIR to determine the functional groups contained in the Cdots. The FTIR result shows the relation between transmittance and wave number. Every functional group has a different wave number, which depends on the vibration and absorption of infrared energy. The results of the FTIR tests for the Cdots obtained by the frying technique from the leaf, stem, and root are presented in Figure 7. Based on Figure 7, the functional groups that can be identified in the samples are C=C, C=O, C-H, and O-H [15,25]. The presence of C=C indicates the core of the Cdots [1,26]. The C=O bonds in the FTIR results reveal the surface state of the Cdots material [26]. The C=O functional groups confirm the red luminescence exhibited by the Cdots obtained by the frying technique [27][28][29], and the presence of C=C also indicates the core of the Cdots [1,26]. In addition, there is also another functional group around 2300 cm-1, which belongs to CN. The Cdots solutions from kangkung are successfully synthesized through the frying and roasting techniques, containing the C=C bond that forms the core of the Cdots. Moreover, the frying technique produces a higher percentage of transmittance of the C=C bonds than the roasting technique, i.e., 75% and 20%, respectively. This means that the absorbance of the Cdots obtained with the frying technique is lower than that of the Cdots obtained with the roasting technique (since A = 2 - log10(%T), transmittances of 75% and 20% correspond to absorbances of about 0.12 and 0.70, respectively). Based on the discussion, the frying and roasting techniques can produce Cdots with different optical properties. Both of these methods are simple, inexpensive, and non-toxic, and can produce Cdots massively. Therefore, this study contributes an alternative preparation procedure for Cdots, which can impact the way Cdots are massively produced. IV. CONCLUSION Cdots have been synthesized from kangkung plants with the frying and roasting techniques.
The Cdots have been characterized using UV-Vis, PL, and FTIR. The Cdots samples obtained from the frying and roasting techniques have different optical properties. The frying technique produces Cdots with a longer wavelength at the absorbance peak in the UV-Vis test compared to the roasting technique. Moreover, the frying and roasting techniques produce different luminescence colors, namely red and green, respectively. The FTIR characterization shows the presence of C=C and C=O, which are the core and surface state of the Cdots obtained by the frying technique, while the samples obtained by the roasting technique show only the core of the Cdots. Further studies can be conducted by providing additional characterizations, such as TEM or DLS, which are important to further verify the existence of the Cdots material in the samples. Figure 4. The characterization results of the Cdots by roasting technique using UV-Vis. Figure 5. The characterization results of the Cdots by frying technique using PL. Figure 6. The characterization results of the Cdots by roasting technique using PL. Figure 7. The characterization results of the Cdots by frying technique using FTIR. Figure 8. The characterization results of the Cdots by roasting technique using FTIR.
Integrated Framework for Bus Timetabling and Scheduling in Multi-Line Operation Mode: Bus service is of great significance to urban residents. With the convergence of bus lines and the formation of bus hubs, a multi-line operation mode, which can realize the centralized management of vehicles, is applied to daily bus service planning. To solve the bus service planning problem systematically in the multi-line operation mode, we propose an integrated framework for bus timetabling (TT) and vehicle scheduling (VS), which are the two fundamental processes of bus service planning. Firstly, the determination processes of TT and VS are correlated by constructing the multiple vehicles' trip-link chains with departure time information to facilitate simultaneous optimization. Secondly, a multi-objective optimization model is constructed, which considers higher service quality and lower operating costs as objectives. Logic and operational rules are also considered as constraints to ensure the implementation of the solutions. Thirdly, we propose and implement a heuristic solution algorithm based on neighborhood search to achieve high-performance solutions. Finally, we validate the efficiency and effectiveness of our framework under an actual bus operation scenario in Chongqing, China. The Pareto frontier solutions are provided to bus operators as alternative operation schemes. Introduction With urban development, travel demand has grown considerably. The ground bus system is considered a feasible solution to provide a green and efficient travel service. For a transit operating enterprise, the fundamental problem is how to provide efficient and feasible services. Bus operation modes include the single-line operation mode and the multi-line operation mode. The single-line operation mode means that each vehicle only serves a particular line and is not allowed to be assigned to other lines. The single-line operation mode is widely used in daily operations because it is easy to manage, but it uses transportation resources inefficiently [1]. Unlike the single-line mode, the multi-line operation mode plans multiple lines sharing the same depot in a unified manner to realize the sharing of timetables and vehicles. Figure 1 shows a typical multi-line scenario where all three lines have a joint depot. Vehicles obey centralized management in the joint depot and can be flexibly assigned to all the lines. Based on this characteristic, some scholars have studied the vehicle scheduling problem in the multi-line mode. Zhao et al. [1] demonstrated through experiments that vehicle resources are saved due to the improvement of vehicle turnover efficiency in the multi-line operation mode. Recently, Petit et al. [2] proposed a dynamic bus substitution strategy in the multi-line operation mode to enhance system reliability. Thus, the existing resources show a significant benefit from centralized management in the multi-line operation mode. Timetabling in the multi-line operation mode is also a research hotspot. Fouilhoux et al. [3] proposed the Synchronization Bus Timetabling Problem (SBTP), which favors passenger transfers and avoids the congestion of buses at common stops. Seman et al. [4] proposed a headway control strategy for bus transit corridors served by multiple lines. Although bus operating systems are integrated, there are few studies that consider bus timetabling and vehicle scheduling together, especially in the multi-line operation mode.
Therefore, we review the methods of bus operation planning, including the research in single-line mode. Public transit resource management involves two fundamental processes: timetabling (TT) and vehicle scheduling (VS). TT defines the departure times of trips at all depots, and VS determines the trips-vehicles assignment to cover all the planned trips [5,6]. Traditionally, TT and VS are solved independently, as they have different decision objects [1,2,7,8]. However, systemically, TT and VS are interdependent because the output of TT becomes the input for VS [6]. Indeed, a slight change in the TT output could render an initial VS output infeasible or create options for a less costly VS output [9]. Therefore, sequential optimization possibly produces suboptimal solutions [10]. Compared with solving these problems sequentially, integrated optimization often achieves superior performance [11]. Thus, this study is devoted to proposing an optimization framework for integrated TT and VS problems in the multi-line operation mode (M-TT/VS). It should be noted that bus route design [12] and driver shifts [13] are part of the bus operation planning processes, but they are beyond the scope of this study; they will be included in further research. Some recent research focuses on methods for integrated TT and VS problems to construct the optimal scheme systematically. Some of these methods are used to solve integrated TT and VS problems in the single-line operation mode (S-TT&VS) [10,14,15]. Teng et al. [14] developed a multi-objective particle swarm optimization algorithm to obtain the Pareto optimal solution set for an electric bus line. Shen et al. [15] proposed a multi-objective optimization approach for the S-TT&VS problem with uncertainty. Some research applies to the multi-line operation mode [16-20]. Ibarra-Rojas et al. [16] present two integer linear programming models for the TT and VS problems and combine them in a bi-objective integrated model. Schmid and Ehmke [17] use a weighted sum approach to combine both objectives and propose a hybrid metaheuristic framework that decomposes the problem into a scheduling component and a balancing component. Mitra et al. [18] propose a bi-objective multi-period planning model for the synchronization of TT and VS. Jiang et al. [19] offer a two-level planning model for the M-TT/VS problem and solve it with tabu search and enumeration algorithms. Lieshout [20] considers optimizing the timetable and the vehicle circulation schedule jointly. In these studies, recursive [9,17] and hierarchical [19,20] methods are the most used. These methods obtain satisfactory solutions by adjusting the TT decision within limits according to the feedback of the VS result. That is, the TT decision and the VS decision are not solved and optimized simultaneously. Therefore, it is hard to find the globally optimal solution with these methods, and repeated recursive calculations also increase the solving time. In addition, most researchers stipulate the periodicity [18] or the frequency [16,17] of timetables within a period; under these rules, changes to the plans are limited. With the development of computer and information technology, bus travel demand can be known in greater detail. When a known demand pattern needs to be satisfied, timetable-based operation is more appropriate than frequency-based operation [5]. Therefore, the design of an integrated approach that relates the timetable-based TT decision and the VS decision is a vital part of this research.
Targeting the above research contents and difficulties, this paper proposes a new integrated framework for the M-TT/VS problem. The M-TT/VS problem we study is a static planning problem, in which passenger flow demand and trip running times are estimated in advance as inputs. Firstly, the determination processes of TT and VS are correlated by constructing multiple vehicles' trip-link chains with departure time information to facilitate simultaneous optimization. The departure time can be adjusted in minutes to accommodate fluctuating demand. Secondly, a multi-objective optimization model is constructed for the M-TT/VS problem. The multi-objective optimization model considers higher service quality and lower operating costs as objectives [14,16,21]. Logic and operational rules are considered as constraints to ensure the implementation of the solutions. Thirdly, a heuristic solution algorithm is designed to find the Pareto frontier solutions, including two main sub-algorithms: a feasible solution generation algorithm based on greedy rules (CEG) and an optimization algorithm based on neighborhood search (HNS). Finally, a case study of Chongqing, China, is carried out to analyze and verify the effectiveness and efficiency of the framework. The contributions of this paper are summarized as follows: (1) A new multi-objective integrated framework for the M-TT/VS problem is proposed, which constructs a vehicle-based solution structure to facilitate simultaneous optimization. (2) A heuristic algorithm, including a feasible solution generation algorithm, CEG, and an optimization algorithm, HNS, is designed to find the Pareto frontier solutions. A heuristic method and a comprehensive tabu table are designed to improve optimization efficiency. (3) A case study of Chongqing, China, is carried out to analyze and verify the effectiveness and efficiency of the framework. The Pareto frontier solutions are provided to bus operators as alternative operation schemes. Problem Description In this section, we introduce the basic description of the M-TT/VS problem. We introduce the mathematical description of the multi-line operation mode and the processing methods of the input data, including passenger flow data, trip runtime data, and other operational parameters. In addition, we construct the solution structure (i.e., the output) of the M-TT/VS problem. Description of the Multi-Line Operation Mode The single-line operation mode and the multi-line operation mode are two different operation modes, as described in Section 1. Figure 2 is a schematic diagram of the different operation modes, taking three physical lines as an example. We use the set D = {d_1, d_2, ..., d_nd} to represent all the bus depots, and the set L = {l_1, l_2, ..., l_nl} to represent all the lines. In the set L, line l_k contains three elements: the starting depot sd_k, the terminal depot ed_k, and the operating mileage m_k (represented as l_k = {sd_k, ed_k, m_k | sd_k, ed_k ∈ D}). In the topology of Figure 2, these three lines have a joint depot d_1. The vehicles used to operate these lines are defined as the set C = {c_1, c_2, ..., c_nc}. In the single-line operation mode, vehicles are assigned to a specific line in advance and cannot shift during operation. In the multi-line operation mode, vehicles are centrally managed at the depot d_1; that is, the vehicles can operate trips on different lines. It should be noted that vehicles are not allowed to shift lines until they arrive at the joint depot.
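A minimal Python sketch of these sets (names and mileage values are ours, for illustration only): it encodes D and L, and the rule that a vehicle may take a trip on a line only where its current depot matches one of the line's endpoint depots, so line shifting happens naturally at the joint depot d1.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Line:
    name: str
    sd: str     # starting depot
    ed: str     # terminal depot
    m: float    # operating mileage [km], values below are assumed

# Toy instance mirroring Figure 2: three lines sharing the joint depot d1
depots = {"d1", "d2", "d3", "d4"}
lines = [Line("l1", "d1", "d2", 12.0),
         Line("l2", "d1", "d3", 9.5),
         Line("l3", "d1", "d4", 15.0)]

def servable_lines(current_depot: str):
    """Lines a vehicle standing at `current_depot` may serve next: a vehicle can
    shift lines only where depots coincide (at d1, all three lines are options)."""
    return [ln for ln in lines if current_depot in (ln.sd, ln.ed)]

print([ln.name for ln in servable_lines("d1")])  # ['l1', 'l2', 'l3']
print([ln.name for ln in servable_lines("d2")])  # ['l1'] only: no shifting at d2
```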
Description of Input Data Passenger flow data, trip runtime data, and operational parameters are known information for the M-TT/VS problem. They are key inputs for the subsequent model solving. It should be noted that our research focuses on bus operation planning: we expect to find the patterns of passenger flow and trip run time in historical data and design an operation scheme set with these patterns. Therefore, we use the statistical and estimated data as input and evaluate the planning results based on the same data. The accuracy of the subsequent implementation effect evaluation and of the data prediction is beyond the scope of this paper. A brief introduction to the inputs is given below. • Passenger flow data is obtained from historical IC card data. The IC card data enumerates passengers' boarding times and trip numbers. At the same time, according to the operation data of the vehicles, we can obtain the departure time and the line number of each trip. Therefore, we can count the total number of passengers carried on each trip. We assume that passengers arrive uniformly within the interval between two departure times. For example, suppose there is a trip carrying 100 passengers, and the departure time of this trip is 10 min after the previous trip. In this case, the arrival rate of passenger flow is 10 passengers/min. We have calculated the arrival rate of passenger flow f^np_{l,t} of each line l at time t. In addition, we restore the passengers' alighting station data from the IC card data through an alighting inference method. Based on the complete passenger on-and-off actions, we can calculate the highest section flow of the trips. The highest section flow of each line l at time t is represented as f^mnp_{l,t}. • Trip run time data is obtained from historical GPS data. For each line, we calculate the mean run time f^rt_{l,t} of the trips whose departure time is t in a historical period. We also smooth the data and supplement the missing values, as shown in Equation (1). The operational parameters are determined based on actual operation scenarios. Some of them come from labor laws or regulations, and others from the experience of bus operators. The known operational parameters in this study include the set of all depots, the set of all lines, the number of standby vehicles, the range of departure times, a vehicle's total daily operating time, the minimum idle time between trips, and the maximum departure time interval. These are listed in Table 1. Solution Structure (Output) As mentioned above, the departure times of trips and the vehicles of trips should both be decision variables for the integration of TT and VS. We relate these two groups of decision variables by constructing the multiple vehicles' trip-link chains with departure time information, as shown in Figure 3. A complete vehicle-based solution structure TS contains multiple vehicles' trip-link chains VC, represented as TS = {VC_1, VC_2, ..., VC_nc}. Every vehicle i has a trip-link chain VC_i, which contains multiple chronologically ordered trips Tr, represented as VC_i = {Tr^i_1, Tr^i_2, ..., Tr^i_ntr}. Both decision variables and servo variables can be included as elements in Tr. The jth trip of vehicle i is represented as Tr^i_j = {l^i_j, st^i_j, rt^i_j, np^i_j, o^i_j}, as shown in Figure 3. Among these, the line number l and the departure time st are decision variables. The line number satisfies l ∈ L unless l = ∅ (the trip is not scheduled). The vehicles of trips are implicit in the structure as indexes. Thus, an integrated solution that includes both TT decisions and VS decisions is organized.
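A minimal sketch of this vehicle-based structure (field and function names are ours, not the paper's code): each vehicle's chain is an ordered list of trips, so the trip that follows Tr^i_j is obtained by simple indexing.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Trip:
    l: Optional[str]  # line number, an element of L, or None if unscheduled -- decision variable
    st: int           # departure time (minute timestamp)                    -- decision variable
    rt: int = 0       # run time, looked up from f^rt_{l,t}                  -- servo variable
    np_: int = 0      # passengers carried, from f^np_{l,t}                  -- servo variable
    o: int = 0        # chronological order number on its line               -- servo variable

# TS = {VC_1, ..., VC_nc}: one chronologically ordered chain per vehicle
VC = List[Trip]
TS = List[VC]

def next_trip(ts: TS, i: int, j: int) -> Optional[Trip]:
    """Next trip of vehicle i after its j-th trip: O(1) by indexing,
    versus the extra searches a line-based structure would need."""
    chain = ts[i]
    return chain[j + 1] if j + 1 < len(chain) else None

ts: TS = [[Trip("l1", 360), Trip("l2", 420)], [Trip("l3", 365)]]
print(next_trip(ts, 0, 0))  # vehicle 0's second trip: line l2, departing at minute 420
```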
The servo variables include the run time rt, the passenger number np, and the chronological order number o. rt and np are determined by the input data f^rt_{l,t} and f^np_{l,t} (described in Section 2.2), and they satisfy Equations (2) and (3). Thus, a complete vehicle-based solution structure can be constructed. Compared with the line-based structure used in other typical papers [14,16], the vehicle-based structure can significantly reduce the search complexity and improve the optimization efficiency, because we can calculate and judge the performability of trips and the availability of vehicles more directly with the vehicle-based structure, as shown in Figure 4. This is explained in detail as follows. In the optimization process, there are two indispensable and frequent processes: (1) judging whether the operation plan of each vehicle can be implemented; (2) assessing the feasible range for adjustment through two consecutive trips of a vehicle. These processes are based on a sub-search process: for any trip j of vehicle i, find the next trip j + 1 of vehicle i. In the line-based structure, each line's trip-link chain includes the decision variables LC_k = {Tr^k_1, Tr^k_2, ..., Tr^k_ntr} for the single line k, and a complete line-based solution structure LS contains multiple lines' trip-link chains LC. With this line-based structure, we must perform additional searches and calculations for the sub-search process, as shown in Table 2. The time complexity with the line-based structure is estimated to be O(nl × ntr_l + nl + ntr_c × ntr_c) for a sub-search task. In the vehicle-based structure, by contrast, we can get the next trip Tr^i_{j+1} directly by indexing, with time complexity O(1). Obviously, the use of a line-based structure causes a significant time cost in the optimization process, especially in the multi-line operation mode, because the search task's time complexity in the line-based structure increases with the number of lines nl. Therefore, we use the vehicle-based structure to improve the search and calculation efficiency in the optimization process. Of course, after the optimization process, the vehicle-based solution can be re-organized as a line-based solution. Methodology This section constructs a multi-objective optimization model and a heuristic solution algorithm for the M-TT/VS problem. The optimization model considers higher service quality and lower operating costs as objectives. Logic and operational rules are considered as constraints. A heuristic solution algorithm is designed to find the Pareto frontier solutions, which are provided to bus operators as alternative operation schemes.
Therefore, the total waiting time of passengers can represent the service quality. The objective function Z 2 represents the operating cost, which is calculated by multiplying the operating cost per kilometer by the total operating kilometers. Solutions are also required to be constrained by logic and operating rules. The constraints are expressed in Equations (8)- (16). Equation (8) specifies that the departure time st i j is an integer. That is, the departure time is a minute node. Equation (9) stipulates that the departure time st i j is within the specified range, which represents the bus operation period of the day. In the other case, if the jth trip of vehicle i is not scheduled, the departure time st i j is marked as 0. Equation (10) stipulates that the departure time interval is not greater than the specified value. Equations (11) and (12) stipulate that the first departure time and the last departure time of each line are given values, which are consistent with Equation (9). Equation (13) stipulates that the idle time between two connected trips of each vehicle is not less than the given value. Equation (14) stipulates the preceding arrival station be the same as the subsequent departure station. That is, deadhead is not allowed. Equation (15) stipulates the total operating time of each vehicle within a given range. The on-and-off behavior of passengers results in the accumulation of passenger flow at sections. Formula (16) guarantees that the maximum section flow is not greater than the maximum seating capacity of the vehicle. Solution Algorithm The M-TT/VS problem to be solved in this study is complex. It requires a lot of time to solve the problem accurately. Therefore, we propose a heuristic method. As mentioned above, the solution structure is a vehicle-based structure TS, where l i j and st i j are decision variables. Namely, we need to determine each trip's operation line and departure time. Our goal is to find the Pareto frontier solutions under the multi-objective model. Therefore, we design a feasible solution generation algorithm based on greedy rules (CEG), an optimization algorithm based on neighborhood search (HNS), and a method to screen the Pareto frontier solutions. The algorithm flowchart is shown in Figure 5. Phase 1: Generate Feasible Solutions A real feasible solution needs to meet complex operational constraints. If there is no intervention in the solution process, it will cause the continuous generation of invalid solutions. Further, this will result in a waste of solving resources and the reduction of solving efficiency. Therefore, we design an efficient feasible solution generation algorithm CEG. The CEG algorithm flowchart is shown in Figure 6. The departure time of trips is determined by the randomly generated number of trips and the uniform interval rule (UI). The UI rule is a greedy rule, which ignores the change of passenger flow and requires the departure interval to be consistent [17,23]. Since constraints (11) and (12) specify the first and last departure times, we can determine the unique departure interval by dividing the number of trips by the length of the operating period. Thus, the minimum value of the number of trips on each line can be specified to satisfy the constraint (10). Then, the entire trip-link chain group is determined based on departure time and the first in, first out rule (FIFO). The FIFO rule is a greedy selection rule when multiple vehicles are scheduled. Early arrivals are always selected first [6]. 
We randomly determine the first trips of vehicles, then traverse the remaining trips in chronological order and assign them to the trip-link chains according to the FIFO rule. For each generated initial solution, we judge whether it meets all constraints. Finally, the Pareto frontier solutions are selected by pairwise comparison of the feasible solutions.

Phase 2: Optimize Solutions
The neighborhood search algorithm is a typical local search algorithm [24]. The design of neighborhood search rules is flexible and can easily be combined with heuristic information related to the problem. Therefore, to optimize the M-TT/VS problem, we design a heuristic optimization algorithm based on neighborhood search, HNS. The HNS algorithm flowchart is shown in Figure 7. The differences between heuristic search algorithms are mainly reflected in their search rules. For example, a genetic algorithm generates candidate solutions by crossover and mutation, while a neighborhood search algorithm produces candidate solutions by destruction and repair. We designed a "destroy" rule and a "repair" rule. The "destroy" rule selects the decision variables that need to be changed in a solution. We fuse heuristic information cp_j^i into the selection probability. The heuristic information cp_j^i is the difference between the number of passengers a trip carries and the base value; the larger the difference, the more likely the trip is to be selected. A roulette wheel is used to choose the target trip Tr_j^i. We define similar trips near the target trip as also needing change. For example, suppose trip Tr_j^i carries more than the base value; in that case, the trips in the window [k_j^i − R, k_j^i + R] of line l_j^i that also carry more than the base value are the similar trips. The "repair" rule specifies how trips are adjusted. For trips that need to be adjusted, we designed a comprehensive tabu table, requiring trips to be adjusted without breaking constraints. In addition, we require trip adjustments to respect practical experience: if the number of passengers on a trip exceeds the baseline, we adjust the departure interval as little as possible. The heuristic information and the comprehensive tabu table are described in detail below. Heuristic information cp_j^i. Based on practical operational experience, service supply should follow changes in demand. If the number of passengers on a trip exceeds the baseline, the optimization should reduce that number. The specific operation is to reduce the departure interval, which means pushing forward the departure time of the next trip in the same direction, and vice versa. Based on this idea, we propose the heuristic information cp_j^i: its magnitude gives the probability that the trip will be destroyed, and its sign tells us in which direction to adjust the trip. Comprehensive tabu table. In the optimization process, it is essential to ensure the feasibility of the solution, especially when there are many external constraints. We construct a comprehensive tabu table to confirm the validity of our solutions, so that solutions are only adjusted to the extent that the constraints allow. These constraints can be divided into two categories.

Case Study and Discussion
This section verifies the effectiveness of the multi-objective integrated framework through practical cases.
We verify that the algorithm can solve for the Pareto frontier effectively, and we demonstrate the advantages of the algorithm through comparative experiments. Finally, we compare the operation service schemes that can be provided under different operation modes with the same number of vehicles.

Case Description and Data Preparation
To verify the effectiveness of the model and algorithm above, a typical multi-line scenario is taken as the case. The case is in Chongqing, China, and contains four depots and three lines, as shown in Figure 8. Passenger flow data and trip run time data are the main input data of this study. Based on the IC card data and vehicle GPS data of Chongqing in June 2020, we determine which trip each passenger was on and when that trip departed. About 780 trips are operated and around 55,000 passengers are recorded per day. Through data cleaning, calculation, and smoothing (as mentioned in Section 2.2), the final input data f_np and f_rt are obtained, as shown in Figures 9 and 10. The values of the parameters are set according to the actual operation requirements and are listed in Table 3.

Results and Discussion
Using the multi-objective optimization model and the heuristic solution algorithm proposed above, we solve the case. MATLAB R2021b is used, running on a computer with macOS Monterey, an Intel Core i9, and 32 GB of random access memory.

Description of Results
The number of initial solutions is set to 100, and the number of optimization iterations is set to 500. We obtain 46 Pareto frontier solutions in less than 15 min of computation time. All the solutions satisfy the constraints. Figure 11 shows the objective function value distribution of the Pareto frontier solutions. These solutions are all better than the actual operational plan. We use the Euclidean distance to express the quality of the Pareto frontier solutions in each iteration; the calculation is shown in Equation (22). The smaller the Euclidean distance, the closer the solution is to the origin. Figure 12 shows how the minimum Euclidean distance changes with the number of iterations; the results eventually converge. Figure 13 shows one solution among the Pareto frontier solutions (Solution-A). A row represents a trip-link chain, and a panel represents a trip. The number before the "−" is a line number, and the number following the "−" is a departure time (minute timestamp). The objective function value for passenger waiting time is 258,084 min and the operating cost is CNY 94,045; all operational constraints are met. For Solution-A, the optimized passenger waiting time is 258,084 min, and the total passenger number is 52,195. We plot the passenger waiting times as a histogram in Figure 14. It can be seen that most passengers wait less than 5 min, and the number of passengers waiting five to twenty minutes gradually decreases. Our solution ensures that supply and demand are matched not only across different routes but also over different time periods, as shown in Figure 15: the more passengers on a line at a given time, the higher the heat value and the more trips supplied. The Gantt diagram corresponding to this solution is shown in Figure 16. The diagram provides a visual representation of Solution-A and clearly shows each vehicle's daily operation schedule. As can be seen from the color transitions, vehicles can shift between multiple lines.
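The Pareto screening by pairwise comparison and the Euclidean-distance convergence measure of Equation (22) can be illustrated with a few lines of Python. This is a hedged sketch: the normalization inside the distance is an assumption, since Equation (22) itself is not reproduced here, and the sample objective values are placeholders.

```python
from math import hypot
from typing import List, Tuple

Solution = Tuple[float, float]  # (Z1: total waiting time, Z2: operating cost)

def dominates(a: Solution, b: Solution) -> bool:
    """a dominates b if it is no worse in both objectives and differs in one."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_frontier(solutions: List[Solution]) -> List[Solution]:
    """Pairwise comparison: keep only solutions dominated by no other."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

def min_euclidean_distance(frontier: List[Solution]) -> float:
    """Per-iteration progress measure: distance of the closest frontier point
    to the origin after min-max normalizing each objective (assumed scheme)."""
    z1s, z2s = [s[0] for s in frontier], [s[1] for s in frontier]
    span1 = (max(z1s) - min(z1s)) or 1.0
    span2 = (max(z2s) - min(z2s)) or 1.0
    return min(hypot((s[0] - min(z1s)) / span1, (s[1] - min(z2s)) / span2)
               for s in frontier)

front = pareto_frontier([(258084, 94045), (240000, 99000), (262000, 95000)])
```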
Contrast Experiment of Generation Algorithm
A feasible solution generation algorithm, CEG, is proposed in Section 3.2. CEG requires vehicle scheduling to comply with the FIFO rule and timetabling to comply with the UI rule. To verify the coverage of the feasible solutions' objective function values and the efficiency of generating feasible solutions, we compare CEG with commonly used algorithms: a random algorithm (CRD) and a partly greedy algorithm (CERD). CRD is completely random, scheduling vehicles randomly based on a random frequency in each period. CERD is partly random, scheduling vehicles randomly but timetabling in compliance with the UI rule. Basic logical rules are enforced in all of these algorithms. We varied the total number of generated solutions G_max and conducted six comparative experiments. The feasible solution generation efficiencies of the different algorithms are shown in Table 4, where the number of feasible solutions is recorded as N_es and the proportion of feasible solutions as P_es. The results show that the efficiency of CRD is the lowest, at less than 4%. CERD shows a large improvement in efficiency, to above 50%. CEG has the highest efficiency, 65%, which is 15 percentage points higher than CERD. We also compare the ranges of the objective function values produced by the different algorithms. The results are shown in Table 5, where ROC represents the range of operating costs and RWT the range of passenger waiting times. Figure 17 shows the objective function values when G_max = 1000. Clearly, the ROC and RWT of CEG are the broadest. CEG and CERD show a large improvement in ROC and RWT compared with CRD. It should be noted that CEG and CERD can find more low-passenger-waiting-time solutions, which is meaningful for operating enterprises pursuing service quality. In general, CEG not only has a higher P_es, but also broader ROC and RWT.

Contrast Experiment of Optimization Algorithm
An optimization algorithm based on neighborhood search, HNS, is proposed in Section 3.2, with specially designed neighborhood search rules for this problem. We compare the heuristic neighborhood algorithm HNS with a random neighborhood algorithm (RNS) and a greedy neighborhood algorithm (GNS) in terms of optimization efficiency and solution quality. The three algorithms differ in their "destroy" and "repair" rules: RNS has random "destroy" and "repair" rules that use no heuristic information, while GNS has greedy "destroy" and "repair" rules that use the same heuristic information as HNS. We kept the initial solutions consistent and conducted a comparative experiment. Figure 18a shows the iterative optimization behavior of the three algorithms, and Figure 18b shows the distribution of their final solution sets. GNS clearly falls into a local optimum easily, and its final solution set is markedly worse than those of the other two methods. The final solution set distributions of RNS and HNS are similar, but the optimization efficiency of RNS is inferior to that of HNS. In general, HNS improves optimization efficiency while the quality of the final solution set is preserved. To compare the solutions produced by the three algorithms, we plot the operating cost distribution and the passenger waiting time distribution of the Pareto frontier solution sets. The distributions of operating costs are similar, as shown in Figure 19a. Figure 19b shows that HNS and RNS produce fewer high-waiting-time solutions.
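To make the difference in rules concrete, the following minimal Python sketch shows the roulette-wheel "destroy" selection that HNS drives with the heuristic information cp described in Section 3.2; RNS would use uniform weights instead, and GNS would always pick the largest |cp|, which explains its tendency to get trapped in a local optimum. The per-trip values are hypothetical.

```python
import random
from typing import List, Tuple

def destroy_select(cp: List[float]) -> Tuple[int, int]:
    """Roulette-wheel choice of the target trip to destroy.

    cp[t] is the heuristic information of trip t: the gap between the
    passengers the trip carries and the base value. |cp[t]| sets the
    selection probability; the sign of cp[t] later decides whether the
    neighboring departure interval is shrunk (+1) or widened (-1).
    """
    weights = [abs(v) for v in cp]
    t = random.choices(range(len(cp)), weights=weights, k=1)[0]
    direction = 1 if cp[t] > 0 else -1
    return t, direction

# RNS variant: uniform weights; GNS variant: always argmax(|cp|).
cp = [12.0, -3.5, 0.5, 20.0]      # hypothetical per-trip passenger gaps
trip, direction = destroy_select(cp)
```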
In addition, we compared the matching degree of supply and demand on the same line across different trips, as shown in Figure 20. It can be seen that the supply in each period is in line with the demand, although the total supply differs across solutions. Therefore, the solutions on the Pareto frontier are meaningful and can be selected by bus operation managers according to their needs.

Contrast Experiment of Operation Mode
As mentioned above, the multi-line mode saves resources because vehicles are centrally scheduled at hubs. We verify this by comparing the results under different operation modes. Figure 21a,b respectively show the minimum passenger waiting time and the maximum operating cost that can be achieved in each mode, i.e., the best service and the maximum supply attainable, as they vary with the number of vehicles. With the same number of vehicles, the multi-line operation mode can always achieve better service and provide more supply. In other words, the multi-line operation mode can achieve the same effect as the single-line operation mode with fewer vehicles.

Robustness Test
For a robustness test, we conducted a series of experiments in which the peak run time (7:00-8:00) was perturbed by 1% to 20% while keeping the number of vehicles and operating costs constant. The optimization results are shown in Figure 22. The results indicate that when the disturbance in the peak period is less than 5%, the model remains robust. When the disturbance is greater than 5%, the optimization results begin to be affected. When the disturbance reaches 20%, the result changes by only 0.125% compared with the result without perturbation.

Conclusions
This paper presents an integrated framework for timetabling and vehicle scheduling in the multi-line operation mode. A vehicle-based solution structure, which constructs the trip-link chains of multiple vehicles together with departure time information, is established to facilitate simultaneous optimization. Departure times can be adjusted freely at minute granularity to accommodate fluctuating demand. We also propose a multi-objective optimization model that considers the actual operation constraints, such as the number of vehicles and the range of each vehicle's total operating time, and takes higher service quality and lower operating costs as objectives. In addition, we design a heuristic algorithm for this model, which contains a feasible solution generation algorithm based on greedy rules (CEG) and an optimization algorithm based on neighborhood search (HNS). CEG can generate feasible solutions efficiently and at scale; HNS accelerates the optimization through heuristic operators and tabu tables. A case study of Chongqing, China is carried out to analyze and verify the effectiveness and efficiency of the framework. The results show that Pareto frontier solutions can be obtained by this framework. Using the CEG algorithm, the ratio of effective solutions reaches 65%, and the coverage of the objective function values is more extensive than with other methods. The optimization efficiency and results of the HNS algorithm are better than those of the other methods. Finally, we compare the extreme utility of the multi-line operation mode and the single-line operation mode: the multi-line mode consistently achieves better service and supply with the same number of vehicles. Several opportunities for future research exist, such as further integration of driver scheduling with TT&VS, because driver salary costs are also essential for bus operations.
It is also challenging to incorporate the constraints associated with drivers' working rules. In addition, M-TT/VS is practical for use in emergencies, when occasional congestion or vehicle breakdowns prevent vehicles from running as planned.
2022-04-24T15:24:26.605Z
2022-04-21T00:00:00.000
{ "year": 2022, "sha1": "f38268046e1acb7e1f7c2e7e7e17ba5170a27d28", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3417/12/9/4210/pdf?version=1650602926", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "182cbb95bc6c8db983895b989212e24be925462b", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
235292618
pes2o/s2orc
v3-fos-license
Development criteria for gluten-free foods
The paper presents marketing research of the gluten-free food market in the Saratov region. The data obtained establish that gluten-free foods are mainly supplied by foreign brands such as "Dr. Korner" (Germany) and "Dr. Schar" (Italy), which supply the market with a wide range of flour confectionery, pasta and bakery products. The market of domestic producers is mainly represented by LLC "Garnets" (RF) and LLC "Dietika" (RF). The work investigated the quantitative content of gluten in several developed dietary cereal culinary and flour confectionery products, namely: a casserole made from rice groats with almond milk and corn flour, a casserole made from rice groats with almond milk and flax flour, buckwheat pudding with goat milk and rice flour, cookies made from a mixture of flax and corn flour, gluten-free cake with added flax and rice flour, gluten-free cake with added corn and pumpkin flour, and gluten-free cake with added corn and rice flour. The fractional composition of the types of flour and flour mixtures used in the developed products was calculated, and the gluten level therein was experimentally confirmed to be less than 20 mg/kg.

Introduction
Celiac disease is a chronic hereditary disease characterized by persistent intolerance to cereal proteins, with the development of atrophy of the mucous membrane of the small intestine and the associated malabsorption syndrome. It involves not only intestinal damage but a reaction of the whole organism to gluten; as a result of the disease, almost all human organs and systems are damaged [1]. It is impossible to recover from this disease; a diet excluding the irritant (gluten-containing) foods is needed to improve quality of life. The main components of gluten are prolamins (wheat gliadin, rye secalin, barley hordein and oat avenin), which constitute from 5 to 50% of the total amount of protein and are soluble in 60-80% ethanol solution, and glutelins, which are soluble in 0.1-0.2% alkali solutions. It has now been proven that glutelins contain toxic amino acid sequences identical to the peptides of prolamins; therefore, usually only one common name is used for toxic cereal proteins: gluten [2] (table 1). As can be seen from table 1, cereals such as wheat and barley have the maximum amount of alcohol-soluble (prolamin) and alkali-soluble (glutelin) protein fractions, while rye has lower ones, and the alkali-soluble fraction predominates in oats. These crops are widely used in the production of flour confectionery and bakery products; however, they are not acceptable in the production of gluten-free products. Nowadays, in most developed countries, the development and production of gluten-free products is at a high level and widely established. At the same time, this market in the Russian Federation is at an early stage; therefore, the issue of creating a wide range of products from gluten-free raw materials, ensuring the production of high-quality, competitive products, is urgent for domestic specialists. During our study, it was necessary to solve the following tasks: (1) conduct marketing research of the gluten-free products market; and (2) analyze the gluten level in our developed foods. The aim of the study was to estimate the level of gluten in our developed cereal and flour confectionery products for their introduction into the food industry.
Materials and methods
Marketing research was carried out in accordance with GOST R ISO 20252-2014 "Research of the market, public opinion and social problems"; the quantitative content of gluten in the developed products was determined using a competitive enzyme immunoassay, the RIDASCREEN Gliadin system [9].

Results and Discussion
We carried out marketing research of the gluten-free food market. The following distribution networks of Saratov and the Saratov region were surveyed: LLC "O'key group", LLC "Lenta", the network of health food stores "Spinat", JSC "Trading house Perekrestok", and LLC "Metro Cash and Carry". Our research revealed that the manufacturers of gluten-free products were mainly foreign trademarks, "Dr. Korner" (Germany) and "Dr. Schar" (Italy), along with such domestic ones as LLC "Garnets" (RF) and LLC "Dietika" (RF). As can be seen from figure 1, the leaders in the sale of gluten-free products were Spinat (28%), followed by LLC "Lenta" (25%), LLC "Metro Cash and Carry" (22%) and LLC "O'key Group" (14%), with the last place occupied by JSC "Trading house Perekrestok" (11%). Our studies thus note a narrow range of specialized products in the regional distribution network and a high share of foreign manufacturing companies. Previously, we developed recipes and technologies for several gluten-free products, namely: a casserole made from rice groats with almond milk and corn flour, a casserole made from rice groats with almond milk and flax flour, buckwheat groats with goat milk and rice flour, cookies from a mixture of flax and corn flour, gluten-free cake with flax and rice flour, gluten-free cake with corn and pumpkin flour, and gluten-free cake with corn and rice flour [4][5][6]. In order for these products to be labeled "gluten-free", it is necessary to analyze their gluten level in accordance with TR CU 027/2012 "On the safety of certain types of specialized food products, including dietary therapeutic and preventive dietary nutrition." In accordance with this regulation, gluten-free food products must be made from one or more ingredients that do not contain wheat, rye, barley, oats or their crossbred variants, and/or must consist of or be made in a special way (to reduce the gluten level) from one or more components obtained from wheat, rye, barley, oats or their crossbred variants, while the level of gluten in the ready-to-eat product is not more than 20 mg/kg [7]. In the course of further research, we preliminarily analyzed the fractional composition of the several types of flour and flour mixtures used in our developed products by the calculation method; table 2 shows the results obtained. Based on the data in table 2, it can be seen that the gliadin content in the types of flour we use is less than 20 mg/kg. Gliadin is known to play the major role in the onset of the disease, since it reacts intensively with the antigliadin antibodies IgA and IgG that appear in the blood of patients with celiac disease [8]. We then experimentally analyzed the gluten content in our developed food products using the RIDASCREEN Gliadin competitive enzyme immunoassay system, with the optical density read at 450 nm [9][10]. As can be seen in figures 3-5, the use of gluten-free raw materials in the development of dietary products reduced the gluten level in cakes by an average of 124 times, in cereal culinary products by an average of 143 times, and in cookies by 130 times.
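Both calculations used in this section, the blend-weighted gliadin estimate obtained by the calculation method and the 20 mg/kg labeling check, reduce to simple arithmetic, sketched below in Python. The flour values are hypothetical placeholders rather than the measured data of table 2, and the conversion gluten ≈ 2 × gliadin follows the Codex convention commonly applied to prolamin ELISAs such as RIDASCREEN.

```python
from typing import Dict

GLUTEN_LIMIT_MG_PER_KG = 20.0   # TR CU 027/2012 "gluten-free" threshold

# Hypothetical gliadin contents of gluten-free flours, mg/kg.
GLIADIN: Dict[str, float] = {"rice": 2.0, "corn": 3.0, "flax": 1.5, "pumpkin": 2.5}

def blend_gliadin(recipe: Dict[str, float]) -> float:
    """Mass-fraction-weighted gliadin content of a flour mixture, mg/kg."""
    total = sum(recipe.values())
    return sum(GLIADIN[f] * m for f, m in recipe.items()) / total

def is_gluten_free(gliadin_mg_per_kg: float) -> bool:
    """Codex convention for prolamin ELISAs: gluten ~= 2 x gliadin."""
    return 2.0 * gliadin_mg_per_kg <= GLUTEN_LIMIT_MG_PER_KG

mix = {"flax": 40.0, "corn": 60.0}        # cookie flour blend, grams
print(blend_gliadin(mix), is_gluten_free(blend_gliadin(mix)))
```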
Conclusion
Thus, our marketing research revealed a shortage of gluten-free food products of domestic production; the cereal and flour confectionery products we have developed should therefore be in demand on the market. In addition, the gluten content of these products complies with the gluten-free labeling requirements, since it does not exceed the standard value of 20 mg/kg, and we recommend them for people with celiac disease.
2021-06-03T01:32:34.801Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "da5d550b7aab40b9dc49186946e91b46fcc649d7", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/723/3/032067", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "da5d550b7aab40b9dc49186946e91b46fcc649d7", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics" ] }
7454973
pes2o/s2orc
v3-fos-license
Anti-Tumor Activity of a Novel Compound-CDF Is Mediated by Regulating miR-21, miR-200, and PTEN in Pancreatic Cancer
Background: The existence of cancer stem cells (CSCs) or cancer stem-like cells in a tumor mass is believed to be responsible for tumor recurrence because of their intrinsic and extrinsic drug-resistance characteristics. Therefore, targeted killing of CSCs would be a newer strategy for the prevention of tumor recurrence and/or treatment by overcoming drug resistance. We have developed a novel synthetic compound, CDF, which showed greater bioavailability in animal tissues such as the pancreas, and also induced cell growth inhibition and apoptosis, mediated by inactivation of NF-κB, COX-2, and VEGF in pancreatic cancer (PC) cells.
Methodology/Principal Findings: In the current study we show, for the first time, that CDF could significantly inhibit the sphere-forming ability (pancreatospheres) of PC cells, consistent with increased disintegration of pancreatospheres, which was associated with attenuation of CSC markers (CD44 and EpCAM), especially in gemcitabine-resistant (MIAPaCa-2) PC cells containing a high proportion of CSCs, consistent with increased miR-21 and decreased miR-200. In a xenograft mouse model of human PC, CDF treatment significantly inhibited tumor growth, which was associated with decreased NF-κB DNA binding activity, COX-2, and miR-21 expression, and increased PTEN and miR-200 expression in tumor remnants.
Conclusions/Significance: These results strongly suggest that the anti-tumor activity of CDF is associated with inhibition of CSC function via down-regulation of CSC-associated signaling pathways. Therefore, CDF could be useful for the prevention of tumor recurrence and/or treatment of PC with better treatment outcomes in the future.

Introduction
Pancreatic cancer (PC) is one of the most lethal malignant diseases with the worst prognosis, ranked as the fourth leading cause of cancer-related deaths in the United States [1]. Over the past two decades, numerous efforts have been made to improve the treatment and survival of PC patients, but the outcome has been disappointing. This disappointing outcome is due to many factors, among which are de novo (intrinsic) and acquired (extrinsic) resistance to conventional therapeutics (chemotherapy and radiation therapy), including gemcitabine alone or in combination with other cytotoxic or targeted agents. Emerging evidence suggests that the resistance could in fact be due to the enriched existence of tumor-initiating cells, also classified as cancer stem-like cells (CSCs), in a tumor mass [2][3][4][5][6]. The CSCs have the capacity for self-renewal and the potential to regenerate into all types of differentiated cells, giving rise to heterogeneous tumor cell populations in a tumor mass, which contributes to tumor aggressiveness [2][3][4][5][6]. Thus, the failure to eliminate these special cells is considered to be one of the underlying causes of poor treatment outcomes with conventional therapeutics, suggesting that newer and novel therapeutic strategies must be developed for the targeted killing of drug-resistant CSCs in order to eradicate the risk of tumor recurrence and improve the survival of patients diagnosed with PC.
In search of novel yet non-toxic agents, attention has been focused on natural agents for several years. One such agent is curcumin (diferuloylmethane), which is derived from the plant Curcuma longa (Linn) grown in tropical Southeast Asia [7][8][9]. Curcumin has been shown to inhibit the growth of a variety of tumor cells; however, the poor bioavailability of curcumin limits its application in the clinic. Recently, we developed a novel synthetic analogue of curcumin, 3,4-difluoro-benzo-curcumin [which we named Difluorinated-Curcumin, or CDF for short [10,11]], which showed greater bioavailability in pancreatic tissues, and also inhibited cell growth, the DNA-binding activity of NF-κB, Akt, COX-2, and the production of PGE2 and VEGF, and caused induction of miR-200 and inactivation of miR-21 in PC cells [12]. Since loss of miR-200 is associated with the acquisition of the epithelial-to-mesenchymal transition (EMT), which is also believed to be associated with CSCs or cancer stem-like cells, here we investigated the effects of CDF on CSC function.

Here we report, for the first time, that CDF could inactivate many functions of CSCs, including self-renewal capacity, as demonstrated by the inhibition of the sphere-forming (pancreatosphere) ability of drug-resistant PC cells, which was consistent with inactivation of CSC biomarkers such as CD44 and EpCAM. We also show the anti-tumor activity of CDF alone and in combination with gemcitabine, which was consistent with inactivation of miR-21 and consequently increased expression of PTEN, attenuation of the DNA binding activity of NF-κB, inhibition of the expression of COX-2, and activation of the expression of miR-200 in tumor remnants of a xenograft mouse model of human PC, all of which provide convincing in vivo activity of CDF consistent with the in vitro findings.

Results
AsPC-1 and MIAPaCa-2 cell lines and their clones were chosen for this study because of their relatively resistant nature. Assessment of the CSC characteristics of these cell lines, using the stem cell markers Lin28B and Nanog by RT-PCR and EpCAM and CD44 by western blot, showed increased expression levels in the GTR cell lines compared to their parental cell lines (Figure 1). Hence we chose these to test our hypothesis that CDF is more effective than curcumin even in resistant cell lines and in their resistant GTR clones.

CDF strongly prevents clonogenicity and invasion of PC cells compared to gemcitabine and curcumin
We selected concentrations of 20 nmol/L of gemcitabine and 4 μmol/L of curcumin or CDF for the clonogenic assay, following our previous publication [12]. The results demonstrated a significant reduction in the clonogenicity of AsPC-1 and MIAPaCa-2 cells treated with curcumin and CDF, but not with gemcitabine (Figure 2A). However, CDF treatment produced a much greater and significant reduction in colony formation compared to curcumin. AsPC-1-GTR and MIAPaCa-2-GTR cells showed an 80% reduction in clonogenicity with CDF treatment, whereas only a 20-30% reduction was observed with gemcitabine or curcumin treatment (Figure 2A). Overall, CDF treatment showed a significant reduction in the clonogenicity of human PC cells, suggesting the superiority of CDF.
CDF or curcumin treatment decreased PC cell migration and invasion. The results showed that 4 μmol/L of curcumin produced minimal inhibition of invasion, whereas a similar concentration of CDF showed significant inhibition of invasion (Figure 2B). A basal level of ABCG2 expression was found in the parental cell lines (de novo drug-resistant cells); however, the level of ABCG2 expression was further increased in the drug-resistant (acquired drug resistance) cell lines (Figure 2C).

CDF inhibited viability of human PC cells more than curcumin and gemcitabine as evaluated by MTT assay
Initially, an MTT assay was conducted to examine the effect of different concentrations of gemcitabine (1 to 50 nmol/L) and curcumin or CDF (2-6 μmol/L) on cell survival after 72 h of treatment (data not shown). Subsequently, 4 μmol/L of CDF or curcumin and 20 nmol/L of gemcitabine were used individually as well as in combination for 72 h. The results showed that CDF treatment in combination with gemcitabine caused a remarkable reduction of cell survival in all four cell lines compared to the curcumin and gemcitabine combination treatment (Figure 3). Furthermore, analysis of the drug combination treatment showed that the combination index after treatment with CDF in combination with gemcitabine was less than 1.00 (Figure 3), indicating a synergistic effect of the CDF combination. In contrast, the combination index with curcumin and gemcitabine was more than 1.00 (Figure 3), showing a non-synergistic effect. Overall, these results suggest that the CDF and gemcitabine combination caused a much more significant reduction of cell survival in PC cells compared to gemcitabine or curcumin alone or the curcumin and gemcitabine combination.

CDF remarkably increased pancreatosphere disintegration of PC cells
To examine the effect of treatments on the sphere-forming ability of PC cells (pancreatospheres) and the disintegration of pancreatospheres, we conducted a sphere disintegration assay, allowing pancreatospheres to form for 10 days, followed by 5 days of drug treatment. The results show a remarkable increase in sphere disintegration with curcumin and CDF treatment, but not with gemcitabine treatment (Figure 4). However, the greatest effect on disintegration was observed in response to CDF treatment (Figure 4), once again suggesting that CDF is far superior in inhibiting the functions of cancer stem-like cells.

CDF inhibited pancreatosphere formation in PC cells
To examine the effect of the drugs on CSC self-renewal capacity in PC cells, we conducted sphere formation assays for one week and four weeks (Figure 5A and B). The results indicated that CDF in combination with gemcitabine completely eliminated pancreatosphere formation after four weeks of treatment, compared to the gemcitabine and curcumin combination, even in gemcitabine-resistant PC cells, suggesting that CDF may render pancreatospheres more sensitive to gemcitabine than curcumin treatment does, and could be useful for the targeted killing of CSCs. Figure 5C shows the effect of different concentrations of gemcitabine and CDF on second-passage pancreatospheres derived from pre-treated primary pancreatospheres of AsPC-1 cells. CDF treatment remarkably inhibited second-passage pancreatospheres in a dose-dependent manner. Furthermore, CDF-pre-treated cells exhibited a greater effect than non-CDF-pre-treated cells.
CDF decreased CD44 and EpCAM expression in pancreatospheres of PC cells
We examined the effect of the drugs on the CSC biomarkers CD44 and EpCAM in pancreatospheres of AsPC-1 and AsPC-1-GTR cells by confocal microscopy (Figure 6). The results indicate that CDF decreased CD44 and EpCAM expression in pancreatospheres, suggesting that the inhibitory effect of CDF on pancreatosphere formation may be associated with the inhibition of CD44 and EpCAM expression.

CDF in combination with gemcitabine inhibited pancreatic tumor growth in vivo much more than the curcumin combination
We used a subcutaneous xenograft tumor model in which tumors were induced by MIAPaCa-2 cells in CB17-SCID mice. CDF treatment in combination with gemcitabine significantly inhibited the growth of MIAPaCa-2 tumors, much more than the curcumin and gemcitabine combination (Figure 7A) and compared to either untreated controls or animals treated with a single drug. The mice did not show any weight loss during the treatment period (30 days), suggesting that these treatments had no major adverse effects on the animals.

CDF with gemcitabine significantly decreased NF-κB activation in vivo
NF-κB activation was determined in the CDF- or curcumin- and/or gemcitabine-treated tumor remnants derived from the MIAPaCa-2-induced tumors described above. CDF and curcumin as single agents down-regulated NF-κB activation, whereas gemcitabine increased the NF-κB level, an effect that was abrogated by combination treatment with CDF. The combination treatment of CDF with gemcitabine showed a significant decrease in the NF-κB level compared to the curcumin and gemcitabine treatment (Figure 7B), suggesting that inactivation of NF-κB could be one of the molecular mechanisms by which CDF elicits its anti-tumor activity against PC tumors.

CDF effects on protein expression in vivo
COX-2, PTEN, and β-actin expression was determined by Western blot. A significant down-regulation in the expression of COX-2 was observed in both combination groups, but the effect was more pronounced in the CDF combination group. The expression of phosphatase and tensin homolog (PTEN), a tumor suppressor gene, was found to be decreased in MIAPaCa-2 cells; however, the expression of PTEN was up-regulated when treated with CDF (Figure 7C). These results suggest that CDF is much more effective than curcumin. Since PTEN is a known target of miR-21, which has been reported to be up-regulated in PC [13][14][15], we assessed the expression levels of miR-21 in tumor remnants as shown below.
Modulation of the expression of miR-21 and the miR-200 family in vivo
We determined the expression levels of miR-21, miR-200b and miR-200c in MIAPaCa-2 tumors by real-time RT-PCR. Overexpression of miR-21 was observed in MIAPaCa-2 tumors, whereas we found a significant reduction in the expression of miR-21 in tumors treated with either CDF alone or the combination of CDF and gemcitabine (Figure 7D). We further determined the expression levels of miR-200b and miR-200c in tumor tissues, which are known regulators of EMT, and found them to be significantly low in MIAPaCa-2 cells (Figure 7D). In contrast, we found that CDF treatment, with or without gemcitabine, increased the expression of both miR-200b and miR-200c, while the effect of curcumin or its combination was minimal, suggesting the superiority of CDF in suppressing the expression of miR-21, resulting in the re-expression of PTEN, and in re-expressing miR-200, which could be responsible for the reversal of the EMT phenotype in cells treated with CDF. Overall, these results suggest that the phenotypic characteristics of MIAPaCa-2 tumors are consistent with an enriched population of CSCs and EMT characteristics, and that these drug-resistant cells can be killed either by CDF alone or in combination with gemcitabine.

Pancreatospheres enhanced tumor growth in vivo
Under traditional experimental conditions, we normally inject one million cells for assessing tumor growth; however, to investigate the greater tumor growth potential of pancreatospheres, we injected only 5,000 cells into mice as a proof-of-concept study. The tumor weight increased remarkably as the days progressed (Figure 8A). The level of miR-21 was increased in tumors derived from pancreatospheres compared with tumors implanted with one million parental cells (Figure 8B). The animals were euthanized after 30 days because of tumor burden; gross tumors are shown in Figure 8C, indicating larger tumors as well as loco-regional lymph node metastasis, whereas tumors derived from parental cells did not show any metastasis over the 30-day period. The tumor-derived cells showed significant inhibition of pancreatosphere formation when treated with CDF (Figure 8D). Overall, these results suggest that CSCs (pancreatospheres) can be grown in mice and that CDF could be useful for the killing of these drug-resistant cells (Figure 8).

Discussion
In this study, we have demonstrated that a synthetic analogue of curcumin, CDF, is significantly more effective than curcumin in the killing of gemcitabine-resistant pancreatic cancer (PC) cells that contain a high proportion of cells with cancer stem cell (CSC) or cancer stem-like cell characteristics. The inhibition of cell growth could in part be due to better cellular uptake and retention and reduced metabolic inactivation of CDF by PC cells, which is consistent with our published cellular and animal pharmacokinetics data [10,11]. Our previous reports indicate that CDF inhibits NF-κB and COX-2 activity in PC cells in vitro [12]. Here we confirm these observations in vivo using a mouse xenograft model. Thus, the killing of gemcitabine-resistant PC cells by CDF is associated with inactivation of the NF-κB and COX-2 signaling pathways, which is very important because these pathways are known to contribute to the resistance of PC cells to chemotherapeutic agents [16][17][18].
Figure 8. Tumor growth pattern of pancreatospheres derived from MIAPaCa-2 cells. (A) 5,000 pancreatospheres were inoculated in mice using 1:1 Matrigel; progressive tumor growth was observed over a period of 30 days. (B) A moderate increase in the expression of miR-21, as measured by real-time RT-PCR, was observed in tumors derived from pancreatospheres compared to tumors derived from parental cells, for which one million cells were injected and tumor growth was assessed over the same period of time. (C) Photographs showing tumor growth; the arrow points to the tumor and the asterisk (*) refers to loco-regional lymph node metastasis, whereas we did not find any metastasis when one million parental cells were injected. (D) Tumor cells harvested from the tumors derived from pancreatospheres and treated with CDF showed significant inhibition of pancreatosphere formation. doi:10.1371/journal.pone.0017850.g008

CSCs comprise only a very small proportion of cells in a tumor mass and possess the ability to self-renew and give rise to differentiated tumor cells [3][4][5][19]. The CSC theory has fundamental clinical implications, especially because CSCs have been identified in many malignant tumor tissues, including pancreatic cancers, and are considered to be more resistant to chemo-radiation therapy than differentiated daughter cells [3][4][5][20]; however, CSCs isolated from human tumors are usually insufficient for further mechanistic studies. The existence of CSCs provides an explanation for the clinical observation that tumor regression alone may not correlate with patient survival [21] because of tumor recurrence, which is in part due to the presence of CSCs. Therefore, targeting self-renewal pathways and the killing of CSCs might provide a more specific approach for eliminating the cells that are the root cause of tumor recurrence. A potential challenge in this regard is the development of therapies that selectively affect CSCs while sparing normal stem cells that may rely on similar mechanisms for self-renewal. In this study, we have demonstrated that CDF not only inhibits the growth of PC cells, but also inhibits CSC self-renewal capacity as assessed by sphere formation (pancreatosphere) assays. Therefore, CDF could have a greater potential to inhibit cancer growth, as documented by our xenograft mouse model of gemcitabine-resistant PC cells, which appears to be mediated via inhibition of CSC self-renewal capacity.

Emerging evidence suggests a role of microRNAs (miRNAs) in many biological processes [22][23][24][25]. Among the many miRNAs, miR-21, commonly considered an oncogene, is over-expressed in many solid tumors including PC and has been reported to be associated with tumor progression, poor survival and drug resistance [13,14,26,27]. In our previous report, we demonstrated that the expression of miR-21 is up-regulated in gemcitabine-resistant PC cells and that its expression can be significantly down-regulated by CDF treatment in vitro [12]. The increased expression of miR-21 is known to be associated with inactivation of PTEN, a known tumor suppressor gene, resulting in activation of the PI3K/Akt/mTOR signaling pathway and leading to aggressive tumor growth [15,28]. In this study, we confirmed that CDF treatment could result in the down-regulation of miR-21, with consequent up-regulation of PTEN in vivo, suggesting that the anti-tumor activity of CDF is associated with up-regulation of PTEN resulting from the inactivation of miR-21 expression.
In contrast to miR-21, the miR-200 family members are known as tumor suppressors; they are usually down-regulated in some tumors, including PC, and the loss of miR-200 family expression contributes to the acquisition of the EMT phenotype and drug resistance. Down-regulation of miR-200 by siRNA techniques has been shown to be associated with the EMT phenotype, while re-expression of miR-200 can result in the reversal of the EMT phenotype [29,30]. In our previous publication [12], we demonstrated that CDF treatment could re-express miR-200 in PC cells. Here we show, for the first time, that CDF can up-regulate miR-200b and miR-200c in tumor remnants in vivo, consistent with the significantly greater inhibition of tumor growth in the xenograft mouse model when CDF was used in combination with gemcitabine. These results suggest that the anti-tumor activity of CDF is mediated via re-expression of miR-200, which may potentially result in the reversal of the EMT phenotype and could also help overcome drug resistance in PC.

In conclusion, CDF showed a much more pronounced growth-inhibitory effect and inhibited CSC self-renewal, consistent with inactivation of CSC biomarkers (CD44 and EpCAM), in PC cells, especially gemcitabine-resistant PC cells, compared to curcumin. In a xenograft mouse model of human PC tumors induced by MIAPaCa-2 cells, CDF exhibited anti-tumor activity by regulating COX-2, PTEN, miR-21, miR-200, and NF-κB in vivo. These results strongly suggest that CDF could be a novel agent for the treatment of PC in general, and gemcitabine-resistant PC in particular, by attenuating the behavior of CSCs.

Ethics Statement
This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. Any animal found unhealthy or sick was promptly euthanized. The protocol was approved by the Committee on the Ethics of Animal Experiments of the Wayne State University Institutional Animal Care Committee (Permit Number: A-10-03-08).

Cell Culture, Drugs and Reagents
The human pancreatic cancer cell lines AsPC-1 and MIAPaCa-2 were purchased from ATCC (Manassas, VA). These two cell lines were exposed to 200 nmol/L of gemcitabine and 5 μmol/L of Tarceva (erlotinib) every other week for about 6 months to create gemcitabine- and Tarceva-resistant (GTR) cell lines, named AsPC-1-GTR and MIAPaCa-2-GTR, respectively. As a result, AsPC-1, AsPC-1-GTR, MIAPaCa-2, and MIAPaCa-2-GTR were chosen for this study based on their differential sensitivities to gemcitabine. All the cell lines were authenticated (Applied Genomics Technology Center at Wayne State University) on March 13, 2009, and these authenticated cells were frozen for subsequent use. The method used for testing was short tandem repeat profiling using the PowerPlex 16 System from Promega. Gemcitabine and curcumin were purchased from Eli Lilly (Indianapolis, IN) and Sigma-Aldrich (St. Louis, MO), respectively. CDF was synthesized as described in our earlier publications [10,11]. Gemcitabine was dissolved in water, whereas CDF and curcumin were dissolved in DMSO, with a final concentration of 0.1% DMSO in medium.

Antibodies
Antibodies against ABCG2 and PTEN were purchased from Santa Cruz (Santa Cruz, CA). Antibodies to COX-2 and β-actin were acquired from Cayman Chemicals (Ann Arbor, MI) and Sigma Chemicals (St. Louis, MO), respectively.
Clonogenic assay
A clonogenic assay was conducted to examine the effect of the drugs on cell growth in PC cells, as described previously [12]. 5 × 10^4 cells were plated in a six-well plate. After 72 h of exposure to 20 nmol/L of gemcitabine or 4 μmol/L of CDF or curcumin, the cells were trypsinized, and 1,000 single viable cells were plated in 100-mm Petri dishes. The cells were then incubated for 10 to 12 days at 37 °C in a 5% CO2/5% O2/90% N2 incubator. Colonies were stained with 2% crystal violet and counted.

Invasion assay
The invasive activity of the cells was tested using the BD BioCoat Tumor Invasion Assay System (BD Biosciences, Bedford, MA) according to the manufacturer's protocol, as described previously [31]. Briefly, 5 × 10^4 cells were seeded in serum-free medium supplemented with curcumin or CDF into the upper chamber, and the bottom wells of the system were filled with complete medium. Then, fluorescence was read using a microplate reader (TECAN) at 530/590 nm, and the cells were photographed.

Cell survival assay
The MTT assay was conducted using AsPC-1, AsPC-1-GTR, MIAPaCa-2, and MIAPaCa-2-GTR cells, as described previously [12], after 72 h of treatment. The combination index and isobologram for the combination treatments were also calculated and plotted using CalcuSyn software (Biosoft, Cambridge, United Kingdom) to determine synergy based on the method of Chou and Talalay [32]; a sketch of this calculation is given after the figure legends below.

Sphere formation/disintegration assay
Single-cell suspensions were plated on ultra-low-adherent wells of a 6-well plate at 1,000 cells/well in sphere formation medium [33]. After 7 days, the spheres, termed "pancreatospheres", were collected by centrifugation and counted [33]. For the sphere disintegration assay, 1,000 cells/well were incubated for 10 days, followed by 5 days of drug treatment, to examine the effect of drug treatment on the disintegration of pancreatospheres, as described previously [33]. The pancreatospheres were collected by centrifugation and counted under a microscope.

Confocal microscopy
Single-cell suspensions of AsPC-1 and AsPC-1-GTR cells were plated on ultra-low-adherent wells of a 6-well plate at 3,000 cells/well in sphere formation medium. After 7 days of treatment, the pancreatospheres were collected by centrifugation, washed with 1x PBS, and fixed with 3.7% paraformaldehyde. CD44 and EpCAM antibodies were used for the immunostaining assay, as described previously [29]. The CD44- or EpCAM-labeled pancreatospheres were photographed by confocal microscopy (Leica TCS SP5) using the software LAS AF 1.2.0 Build 4316.

Protein extraction and Western blot analysis
Proteins were extracted from all four cell lines and also from animal tumor tissues as described previously [12]. The relative level of ABCG2 was evaluated for all four cell lines. The effects on COX-2, PTEN and β-actin expression were evaluated in tumor tissues by Western blot analysis, as described previously [12].

Figure 1. Comparative expression of Lin28B (A) and Nanog (B) mRNA by qRT-PCR showed increased expression in resistant cell lines compared to parental cell lines, supporting the CSC characteristics of these cell lines. The characteristics of CSCs were further confirmed by the protein expression of EpCAM and CD44 (C). doi:10.1371/journal.pone.0017850.g001

Figure 7.
CDF exhibited anti-tumor activity in MIAPaCa-2-induced tumors in a xenograft mouse model, consistent with inhibition of NF-κB DNA binding, COX-2, and miR-21, and with re-expression of miR-200 in tumor remnants. (A) Anti-tumor activity and changes in tumor weight for each group of animals; the arrow indicates the starting day of the treatment. (B) NF-κB DNA binding activity of tumor tissues, and NF-κB competition control study with unlabeled NF-κB oligonucleotide. (C) Western blot analysis of COX-2, PTEN and β-actin expression in tumor remnants. (D) miR-21, miR-200b and miR-200c expression in tumor remnants as measured by real-time RT-PCR. P values were calculated by the paired t test. doi:10.1371/journal.pone.0017850.g007
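The synergy criterion used in the Results (combination index < 1) follows the Chou and Talalay method cited in the Methods. The authors used CalcuSyn; the core calculation can nonetheless be sketched as below in Python, with hypothetical median-effect parameters rather than the study's fitted values.

```python
def dose_for_effect(fa: float, dm: float, m: float) -> float:
    """Median-effect equation solved for dose: D = Dm * (fa/(1-fa))**(1/m),
    where Dm is the median-effect dose and m the slope of the dose-effect curve."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa: float, d1: float, d2: float,
                      dm1: float, m1: float, dm2: float, m2: float) -> float:
    """Chou-Talalay CI for a mutually exclusive two-drug combination:
    CI = D1/Dx1 + D2/Dx2, where Dxi is the dose of drug i alone that would
    produce the observed combined effect fa. CI < 1 indicates synergy."""
    dx1 = dose_for_effect(fa, dm1, m1)
    dx2 = dose_for_effect(fa, dm2, m2)
    return d1 / dx1 + d2 / dx2

# Hypothetical example: 4 (CDF, umol/L) + 20 (gemcitabine, nmol/L) kill 70% of cells.
ci = combination_index(fa=0.70, d1=4.0, d2=20.0,
                       dm1=6.0, m1=1.2, dm2=40.0, m2=1.0)
print(f"CI = {ci:.2f} ({'synergy' if ci < 1 else 'no synergy'})")
```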
2014-10-01T00:00:00.000Z
2011-03-09T00:00:00.000
{ "year": 2011, "sha1": "58d4174f85589669a73f63836a45348c2fd0f04a", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0017850&type=printable", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4467224db0ee5123d9233ed674b5540d2e9d9cc7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
252465407
pes2o/s2orc
v3-fos-license
Voluntary medical male circumcision in selected provinces in South Africa: Outcomes from a programmatic setting

Introduction
Voluntary medical male circumcision (VMMC) remains an effective biomedical intervention for HIV prevention in high HIV prevalence countries. In South Africa, United States Agency for International Development (USAID) VMMC partners have provided technical assistance to the Department of Health at national and provincial levels, supporting the establishment of VMMC sites as well as providing direct VMMC services at site level since April 2012. We describe the outcomes of the Right to Care (RTC) VMMC program implemented in South Africa from 2012 to 2017.

Methods
This retrospective study was undertaken at RTC-supported facilities across six provinces. Males aged ≥10 years who presented at these facilities from 1 July 2012 to 30 September 2017 were included. Outcomes were VMMC uptake, HIV testing uptake and the rate of adverse events (AEs). Using a de-identified observational database of these clients, summary statistics of the demographic characteristics and outcomes were calculated.

Results
There were a total of 1,001,226 attendees, of whom 998,213 (99.7%) were offered VMMC; their median age was 15 years (IQR = 12-23 years). Of those offered VMMC, 99.6% (994,293) consented, 96.7% (965,370) were circumcised and the majority (46.3%) were from Gauteng province. HIV testing uptake was 71%, with a refusal rate of 15%. Of the newly diagnosed HIV positives, 64% (6,371/9,972) were referred. The rate of AEs (defined as bleeding, infection, and insufficient skin removal) declined from 3.26% in 2012 to 1.17% in 2017. There was a reduction in infection-related AEs from 2,448 of the 2,602 adverse events (94.08%) in 2012 to 129 of the 2,069 adverse events (6.23%) in 2017.

Conclusion
There was high VMMC uptake, with a decline in AEs over time. Adolescent males contributed the most to the circumcised population, an indication that the young population accesses medical circumcision more readily. VMMC programs need to implement innovative demand creation strategies to encourage older males (20-34 years), who are at higher risk of HIV acquisition, to get circumcised for immediate impact on the reduction of HIV incidence. HIV prevalence in the total population increased with increasing age, notably in clients above 25 years.

Introduction
Voluntary medical male circumcision (VMMC) remains one of the key interventions for HIV prevention in countries with high HIV prevalence [1]. Beyond the three randomized controlled trials (RCTs) in which medical male circumcision (MMC) resulted in a 60% reduction in the risk of female-to-male HIV transmission [2][3][4], a recent systematic review and meta-analysis demonstrated a reduction in the risk of HIV infection in post-RCT follow-up, in community-based studies and in circumcision scale-up studies [1]. From 2008 to 2019, nearly 27 million adolescent and adult men (≥10 years) were circumcised and an estimated 340,000 new infections were averted in 15 VMMC priority countries, including 260,000 infections among males and 75,000 among females (due to reduced secondary transmission from males) [5]. Furthermore, VMMC programs are considered highly effective in reducing both HIV incidence and the cost of HIV prevention, especially when targeting young males (20-35 years old), who are at higher risk of HIV acquisition [6].
Male circumcision offers only partial protection against HIV transmission; hence, MMC must be integrated into a comprehensive HIV prevention strategy, which includes treatment for sexually transmitted infections, HIV testing and counselling, promotion of safe sex practices [7], and provision of pre-exposure prophylaxis (PrEP). Traditional circumcision, also viewed as a "rite of passage into manhood", is a common practice among the Xhosa, Sotho, Pedi, Venda, and Ndebele cultural groups of South Africa [8]. This "traditional rite" is usually performed in the months of June, July, November, and December, commonly referred to as the circumcision season. The procedure is performed by traditional leaders without adequate infection control measures in place. As a result, traditional circumcision is often associated with multiple surgical complications and an increased risk of injury and even death among young men and boys [9]. In 2012, the United States Agency for International Development (USAID) awarded Right to Care (RTC) a contract to implement a VMMC program across the provinces of Gauteng, KwaZulu-Natal (KZN), Free State, Limpopo, Mpumalanga, and North West in South Africa. RTC implemented the VMMC program in partnership with the Centre for HIV/AIDS Prevention Studies (CHAPS), the ANOVA Health Institute and Maternal Adolescent and Child Health (MatCH). In this paper, we report the outcomes of the VMMC program implemented across the six provinces from 2012 to 2017.

Study design
This was a retrospective analysis of routinely collected program data from the USAID-funded VMMC program of the South African Department of Health for the period 1 July 2012 to 30 September 2017.

Study setting
The VMMC program operated in 149 sites across selected districts in Gauteng, Free State, KwaZulu-Natal, Limpopo, Mpumalanga and North West Provinces. The districts include City of Johannesburg, City of Tshwane, Ekurhuleni, Sedibeng and West Rand for Gauteng; Fezile Dabi and Lejweleputswa for Free State; eThekwini, Ugu, uMgungundlovu, uMkhanyakude and Zululand for KwaZulu-Natal; Capricorn, Mopani and Vhembe for Limpopo; Ehlanzeni and Gert Sibande for Mpumalanga; and Bojanala Platinum for North West. The sites in Limpopo, Mpumalanga, Free State and North West Provinces cater mainly for the rural population, while those in the Gauteng and KwaZulu-Natal Provinces cater for the urban population. The selection of sites was determined by the funder (USAID) and the South African Department of Health at national and provincial levels.

Participants
All males aged ≥10 years were eligible for circumcision, in line with the South African MMC National Guidelines [10]. Written informed consent for MMC was required and given independently by all males aged ≥18 years. Boys aged 16-17 years provided assent to the circumcision procedure after being given information, with their parent or legal guardian giving written informed consent. All boys aged 10-15 years required parental/guardian written informed consent to undergo MMC; they also gave assent, and the parent/guardian was required to be present on the day of the circumcision.

Recruitment of participants
Demand creation activities were undertaken through campaigns at community level and messaging through social media, radio, and other forms of media. All those willing were booked and referred to a facility for further screening, consenting and circumcision. At the facility, clients were recruited through walk-ins and referrals from other service points.
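The tiered consent rules in the Participants subsection can be expressed as a small decision function. The Python sketch below, with hypothetical type and field names, simply encodes the rules stated above.

```python
from dataclasses import dataclass

@dataclass
class ConsentRequirement:
    self_consent: bool      # client signs written informed consent himself
    assent: bool            # client gives assent
    guardian_consent: bool  # parent/legal guardian signs written consent
    guardian_present: bool  # guardian must attend on the day of circumcision

def consent_rules(age: int) -> ConsentRequirement:
    """Encode the eligibility and consent rules described in Participants."""
    if age < 10:
        raise ValueError("not eligible: males aged >=10 years only")
    if age >= 18:
        return ConsentRequirement(True, False, False, False)
    if age >= 16:   # 16-17 years: assent plus guardian written consent
        return ConsentRequirement(False, True, True, False)
    # 10-15 years: assent, guardian written consent, guardian present
    return ConsentRequirement(False, True, True, True)
```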
Circumcision procedures
Once a client was recruited at the facility, the client was registered and provided with general group education. Following group education, individual counselling and HIV testing were offered, and subsequently MMC was offered to all eligible clients (Fig 1). Pre-operative history taking, physical examination and the circumcision procedure were carried out for clients who accepted and consented. VMMC was deferred for newly diagnosed HIV-positive clients when the available CD4 count was <350 cells/μL, and these clients were referred for antiretroviral therapy (ART) initiation. For known HIV-positive clients, an assessment of the client's adherence to ART and retention in HIV care was conducted, and the client was referred for care if he had been lost to follow-up. Eligibility for circumcision for such clients was based on a recent CD4 count ≥350 cells/μL and/or a viral load (VL) lower than detectable limits (LTDL), with CD4 and VL results not more than six months old. Circumcision was deferred if the VL was ≥1000 copies/mL, regardless of the CD4 value.

VMMC site team composition
The team composition was built to ensure optimization of volumes and efficiency in the delivery of services, a model called MOVE ("Model for Optimizing Volume and Efficiency") [11]. The MOVE team comprised a surgeon (clinical associate or medical officer), a professional nurse (PN), two enrolled nurses, two counsellors, an administration clerk, a cleaner and a driver.

Post-operative follow-ups
Following the MMC procedure, follow-up visits were scheduled to assess wound healing and any signs of AEs. The first assessment occurred immediately after the procedure to check for signs of bleeding or any other event; if there was no problem, the client was discharged from the facility. Subsequent follow-up visits were scheduled at 2, 7 and 42 days post-circumcision.

Data management and analysis
Client data captured by RTC data capturers into RightMax (a cloud-based database for financial and programmatic reporting and monitoring purposes) for the period 1 July 2012 to 30 September 2017 were retrieved and exported into Microsoft Excel. Client name and surname, date of birth, ID or passport number and phone numbers were deleted. The de-identified data were then imported into Stata version 15 for further management and analysis. Non-eligible clients were excluded from the analysis where necessary. Proportions were calculated to describe the population characteristics as well as the outcomes: consent, age categories, circumcision, and the HIV testing uptake and HIV status of all attendees, including females.

Ethical considerations
Informed assent/consent was obtained from clients or their parents/guardians as part of program requirements. Ethics clearance was obtained from the University of the Witwatersrand, Johannesburg Human Research Ethics Committee (#M150823). Clients who consented but did not undergo circumcision were those who either tested HIV positive on the day of circumcision with a CD4 count <350 cells/μL or had one of the contra-indications listed in the South African MMC guidelines [10]. Those who tested HIV positive were referred for ART initiation, and those with infection, uncontrolled chronic illnesses or penile anatomical abnormalities were referred for further management as per the South African National Department of Health (NDoH) referral guidelines [10].

HIV testing and linkage to care
A total of 1,001,088 clients were offered an HIV test; 71.1% were tested, 15% declined the test and the rest had a known status.
These included men and women who presented at the facility. Of those newly diagnosed, 86.6% were referred for care (Table 1). HIV prevalence increased with age and was highest among those aged ≥30 years (Table 2). Discussion In this paper, we report the outcomes and trends for a USAID-funded VMMC program over a five-year period (2012-2017) across six provinces in South Africa. The majority of mobilized VMMC-eligible attendees were between the ages of 10 and 24 years and were offered MMC. This is indicative of a young population seeking VMMC services. A review of VMMC programs in 2018 in fifteen eastern and southern African countries, including South Africa, also found that about 84% of clients in twelve of the fifteen countries were young men 10 to 29 years old, with the majority being 10 to 14 years old [5,12]. In addition, our study shows that the program reached fewer young men aged 25-34 years, who are sexually active and at risk of HIV acquisition. To achieve immediate impact in the reduction of HIV incidence in these districts, the VMMC program needs to target males 20-34 years old. This is in line with WHO and NDoH guidance on maximizing the impact of VMMC in HIV prevention [10,13] by increasing uptake of VMMC services among adult men, especially those who may be at higher risk of HIV infection, such as partners of sex workers, men in sero-discordant relationships and men attending STI clinics [13]. In order to reach this age group, VMMC programs need to implement innovative demand creation strategies that may include: 1. scale-up of sector-specific approaches for work-based VMMC services, for example in mines, farms, military residences and other places of work; 2. introduction of incentives and loss-of-income vouchers for older working males [13]; 3. provision of male-friendly services that include school-based campaigns, extended hours of service and access to services over weekends; 4. availing outreach and mobile services to sports grounds and higher institutions of learning; 5. provision of one-stop centers with a complete package of men's health services such as sexual and reproductive health services, PrEP, and family planning; and 6. policy adjustment on issues that affect the health of adolescent boys and men while seeking health care services [13]. Of the six provinces, most circumcisions were undertaken in Gauteng Province, with an increasing number of circumcisions over time. The Thembisa model 2.0 estimates that most provinces in South Africa have not reached the VMMC saturation mark of circumcising 80% of males 15-49 years old. Limpopo Province is the only province with an estimated circumcision saturation of more than 80%, and this could be attributed to the practice of traditional male circumcision of young males as early as 10 years of age [14]. The South African VMMC program will need to implement innovative ways of scaling up VMMC services in provinces with high HIV prevalence and low circumcision saturation, such as KwaZulu-Natal and Mpumalanga. The COVID-19 pandemic has negatively impacted the VMMC program in South Africa through an eight-month suspension of the program and the repurposing of VMMC facilities and staff for COVID-19 management. Extra efforts and commitment will be required to scale up VMMC in South Africa to reach the set targets. There was a high acceptance of HIV testing among clients who came for VMMC services, with 14% arriving with known status results and 15% declining testing.
HIV prevalence was about 2% among the newly diagnosed and 3% among participants who were known positives. HIV prevalence was high among those above 25 years, and positivity increased with age, with a positivity rate of about 8% among the newly diagnosed aged 35-39 years and 12-14% among those above 40 years of age. The 3% among participants who were known positives was lower than the general HIV prevalence in South Africa of 13.7% [15]. However, the rate of positivity was high among the newly tested above 35 years of age. The relatively high HIV testing refusal rate of 15% is in line with findings from another study, where program data from fourteen southern and east African countries indicated unknown HIV status among participants ranging from 0% to 50% [12]. The higher HIV positivity rate in older men aged >20 years is indicative of the need to focus VMMC and other HIV prevention modalities on this age group to create an immediate impact on the reduction of HIV incidence in these communities. The VMMC program targets young healthy HIV-negative males to confer on them the benefit of a 60% reduction in the risk of acquiring HIV from an HIV-positive female partner. To increase demand and ensure effective linkage to care and treatment of newly tested HIV-positive men, VMMC programs need to collaborate with HIV care and treatment programs to implement a two-way referral system that will facilitate referral of HIV-negative males for VMMC services and HIV-positive men for HIV care and ART treatment. The most common AE was infection, followed by bleeding. Our findings are similar to those of a study on a mature VMMC program in Zimbabwe, where infection was the most common AE [16]. However, the occurrence of AEs dropped from 71.7% in 2012 to 3.2% in 2017. This finding agrees with the WHO recommendation that training of VMMC providers prevents the occurrence of AEs [7]. The reduction in infection-related AEs from 2,448 of the 2,602 adverse events (94.08%) in 2012 to 129 of the 2,069 adverse events (6.23%) in 2017 is indicative of the improvement in infection prevention and control practices and the quality of care. This is in line with a case series analysis of AEs in a large-scale VMMC program in Tanzania that demonstrated a reduction in AEs over time [17]. The relatively higher rates of AEs and low rates of follow-up in our study could be attributed to challenges in documentation at facility level. Close monitoring and documentation of AEs are recommended to help program quality improvement. Our study showed that males circumcised by the Prepex device technique were more likely to develop an AE (OR = 6.98) than those circumcised by the surgical technique. This finding corroborates those of another study [16]. However, the closure of CIRC MedTech, the manufacturer of Prepex, in 2020 may mean that the Prepex device will not be in use again; the closure followed the WHO recommendation for tetanus-toxoid vaccine immunization for all males seeking VMMC by Prepex [18]. The majority of the identified AEs in our study were reported in the 15- to 19-year-olds, contrary to findings from another study [16], where clients 10 to 14 years old contributed most of the AEs, especially those related to infection. The findings of our study could be attributed to the change in policy by the NDoH in 2016, requiring all clients 10-14 years old to be accompanied to the health facility by their parent/guardian on the day of the male circumcision procedure and to participate in health education on wound care at home [10].
The high number of infection-related AEs across all age groups indicates a need for increased emphasis on interpersonal communication measures at health facilities, clients' health education on wound hygiene while at home, education on avoiding the use of herbal medicine and home remedies on the wound, and returning for physical review of the wound as per the programmatic schedule [19]. The large number of reported AEs with missing data emphasizes the need for VMMC programs to adequately record, report, manage and monitor all identified AEs to ensure that clients are receiving high-quality care and complications are avoided. This is in line with findings from other studies, where AEs were found to be poorly recorded and under-reported [16,20]. When reporting AEs, standardized and clear classification by severity, type and timing is important to inform the VMMC program of possible gaps in the quality of services offered at these facilities. Limitations The following limitations are noted. Firstly, because this was routine program data, some important variables needed to explore relationships were missing. For example, the technique used during the male circumcision procedure was only recorded as device method or surgical method. The surgical method was not disaggregated further into forceps-guided, dorsal slit or sleeve resection methods. Furthermore, only three types of AEs were included (bleeding, infection, and insufficient skin removal), with 31.97% (n = 3,391) missing information on type. Client follow-up was done, but this was likely not well captured in the system. However, the large data set offered a real-world experience, thus maximizing generalizability, which is often a challenge in small data sets. Secondly, being routine data, there were several processing challenges due to data capturing, which may have led to misclassification and subsequent bias. However, manual reviews were conducted and data were verified with source documents, thereby improving the quality of the data and giving more representative and generalizable results. Conclusion There was high acceptance of circumcision and low HIV prevalence among the young men. While a majority accepted HIV testing, the proportion of refusals (15%) is still high and requires intensified counselling on the benefits of knowing one's HIV status. Both targeted HIV testing and a two-way linkage to care need to be part of comprehensive HIV programs. There was a relatively low uptake of VMMC services among males 20-34 years old. VMMC programs need to implement innovative demand creation strategies to encourage older males at higher risk of HIV acquisition to get circumcised. This will create an immediate impact in the reduction of HIV incidence in these communities. VMMC remains one of the major HIV preventive mechanisms, and more demand will help to reach HIV epidemic control. Accurate reporting, management, recording and monitoring of AEs in VMMC programs need to be strengthened to ascertain the quality of services offered at VMMC facilities. Proper training and mentorship are necessary to minimize AEs related to devices.
Assessing Orographic Variability in Glacial Thickness Changes at the Tibetan Plateau Using ICESat Laser Altimetry: Monitoring glacier changes is essential for estimating the water mass balance of the Tibetan Plateau. In this study, we exploit ICESat laser altimetry data in combination with the SRTM DEM and the GLIMS glacier mask to estimate trends in glacial thickness change between 2003 and 2009 on the whole Tibetan Plateau. Considering the acquisition conditions of ICESat measurements and terrain surface characteristics, annual glacier elevation trends were estimated for 15 different settings with respect to terrain slope and roughness. In the end, we only included ICESat elevations acquired over terrain with a slope below 20° and a roughness at the footprint scale below 15 m. With this setting, 90 glaciated areas could be distinguished. The results show that most of the observed glaciated areas on the Tibetan Plateau are thinning, except for some glaciers in the northwest. In general, glacier elevations on the whole Tibetan Plateau decreased at an average rate of −0.17 ± 0.47 m per year (m a⁻¹) between 2003 and 2009, taking together glaciers irrespective of the size, distribution, and location of the observed glaciated areas. Both rate and rate error estimates are obtained by accumulating results from individual regions using least squares techniques. Our results notably show that trends in glacier thickness change indeed strongly depend on the relative position in a mountain range. Introduction The Tibetan Plateau has steep and rough terrain and contains ~37,000 glaciers, occupying an area of ~56,560 km² [1]. Recent studies report that the glaciers have been retreating significantly in the last decades. The magnitude of glacial change in the last 30 years is location dependent, with the largest reduction in glacial length and area occurring in the Himalayas (excluding the Karakoram) [2]. In the Tien Shan Mountains in the northwest of the Tibetan Plateau, glacier shrinkage also occurred during the period between 1950 and 2000 [3]. In the Qilian Mountain Region, 910 glaciers rapidly reduced in area between 1956 and 2003, with a mean reduction of 0.10 km² per individual glacier, corresponding to a mean rate of 2127 m² a⁻¹ [4], or a shrinkage of the total glacier area by 30% ± 8% between 1956 and 2010 [5]. In the western Nyainqentanglha Range, the glacier area decreased by 6.1% ± 3% between 1976 and 2001 [6]. Additionally, the total glacier areas in the inner Tibetan Plateau and in the Himalayas also retreated between the 1970s and 2000s [7][8][9]. Most of the above results were estimated using topographic maps, in situ measurements, and optical remotely sensed images. In recent years, remote sensing techniques such as photogrammetry, interferometry and radar and laser satellite altimetry have been used for assessing vertical glacial and ice-sheet change both on and off the Tibetan Plateau. Regional changes in ice elevation in the central Karakoram were obtained by determining the difference between two Digital Elevation Models (DEMs), one obtained from the 2000 Shuttle Radar Topographic Mission (SRTM), and one constructed from Satellite Pour l'Observation de la Terre (SPOT5) optical images obtained in 2008 [10]. An advantage of using full DEMs, as in this study, is that the complete area of interest is covered and that change results are interpretable down to the resolution of the used DEMs.
The availability of such DEMs is limited, however, and therefore it is almost impossible to estimate trends in elevation variation from full DEMs only [11]. Additional elevation data of high quality but sparse spatial coverage were obtained by the Ice, Cloud and land Elevation Satellite (ICESat) mission between 2003 and 2009. The primary mission goal was to study ice sheet mass balance over polar areas [12], but in recent years ICESat Geoscience Laser Altimeter System (GLAS) data have also been exploited to monitor glaciers in mountain regions such as the Himalayas, the Alps and the Tibetan Plateau. Using ICESat data for change assessment has several challenges. ICESat only sampled elevations along track, while adjacent tracks are separated by ~70 km on the rough Tibetan Plateau; ICESat tracks are repeated, but only in an approximate sense: in general, two consecutive tracks of the same orbit have non-overlapping ground footprints; moreover, ICESat was not measuring continuously, but only in 18 campaigns of ~1 month, resulting in data of different quality due to variations in the laser power. In [13], ICESat measurements were combined with the SRTM 2000 DEM to obtain glacial thinning and thickening trends over the Himalayas. The SRTM DEM data were not only used as direct reference elevations but also contributed to the design of criteria on, e.g., slope, in combination with Landsat data that were used to assess the location and state of the glaciers. The sparse sampling of the ICESat data made it necessary to regionally group and average available ICESat elevations, resulting in regional change trends. In [14], a similar approach of comparing ICESat elevations to a reference DEM was compared to direct differencing between almost repeated along-track ICESat elevations over the European Alps. A worldwide analysis of glacial change based on ICESat elevations, partly extending the results in [13], and compared to mass change estimates from Gravity Recovery and Climate Experiment (GRACE) satellite gravimetry measurements, is given in [15]. Glacial thickening and thinning estimates on the Tibetan Plateau based on ICESat measurements were first reported in [16]. Again, the SRTM 2000 DEM was used as a basis to obtain ICESat elevation changes. The results indicated that most of the glacial sub-regions had a negative trend in glacial thickness change, excluding one sub-region in the Western Kunlun Mountains in the northwest of the Tibetan Plateau. The sampled glacial sub-regions in [16] were relatively large. Consequently, we consider their glacial conditions as not being homogeneous, due to, e.g., orographic precipitation and variation in solar radiation. The significant influence of climatic parameters and spatial variability on glacial change rates has already been demonstrated for several individual glaciers on the Tibetan Plateau [6,17]. In addition, the quality of ICESat elevations is known to be strongly dependent on terrain characteristics. Therefore, in this study we exploit, as in [16], ICESat/GLAS data for monitoring glacial thickness changes on the whole Tibetan Plateau. Compared to [16], however, our division into glaciated areas is completely different, as in our study we notably incorporate the glacier orientation in the design of regions. In particular, in our study, mountain ranges are divided into two contrasting regions as we separate the ranges with respect to the main center ridge. This allows us to demonstrate the expected effects of orographic precipitation and variations in solar radiation.
To do so, an explicit comparison of our results to those in [16] is included in the Discussion Section. In addition, we explore the ICESat/GLAS data by applying different criteria impacting the quality of footprints, including acquisition conditions and terrain surface characteristics. Materials and Methods In this section, we describe input elevation data and glacier outlines. Then, we define and build a dataset for monitoring glacier elevation changes. Finally, we clean the dataset and estimate temporal elevation trends of sampled glaciers on the Tibetan Plateau. Input Data Main data sources used to estimate glacier elevation changes at the Tibetan Plateau consist of ICESat/GLAS data, the Global Land Ice Measurements from Space (GLIMS) glacier mask and the Shuttle Radar Topography Mission digital elevation model (SRTM DEM). The ICESat/GLA14 data provide land surface elevations between 2003 and 2009. The GLIMS glacier outlines represent the glacial regions on the Tibetan Plateau. The SRTM data give land surface elevations in 2000, used as a base map to be compared with later elevations derived from the ICESat/GLA14 data. To integrate them, all these data are projected onto the World Geodetic System 1984 (WGS84) in horizontal and the Earth Gravitational Model 2008 (EGM2008) in vertical. ICESat/GLA14 Data The ICESat/GLAS products are provided by the National Snow and Ice Data Center (NSIDC). Here, we exploit the level-2 GLA14 data, supporting global land surface altimetry between 2003 and 2009 [18]. The GLA14 data are distributed in binary format and are converted into ASCII columns by the NSIDC GLAS Altimetry elevation extractor Tool (NGAT). The geospatial accuracy of each footprint is reported as 5 m in horizontal and 10 cm in vertical for slopes below 1° [19]. The vertical accuracy is strongly dependent on terrain characteristics. In this study, the necessary measurements of each footprint extracted from the GLA14 data consist of acquisition time, latitude, longitude, elevation above WGS84, EGM2008 geoid height, saturation correction flag, and number of peaks. The saturation correction flag indicates if elevation data were possibly affected by saturation effects. The number of peaks in the Gaussian waveform decomposition directly relates to land surface geometry [20]. For each ICESat campaign, the ASCII data are converted into the GIS shapefile format, using the location of each footprint. Figure 1 shows the ICESat L2D campaign tracks from 25 November to 17 December 2008 crossing over the Tibetan Plateau.
SRTM DEM The Shuttle Radar Topography Mission was flown in February 2000 and collected the first ever high-resolution near-global digital elevation data. In this study, we use the SRTM 90 m DEM, produced by NASA [21]. This DEM has a resolution of 90 m at the equator, corresponding to 3 arc-seconds, and is distributed in 5° × 5° tiles. To cover the full Tibetan Plateau, 20 SRTM DEM tiles are concatenated, as shown in Figure 1. The tiles are available in both ArcInfo ASCII and GeoTiff format. The digital elevation data are stored in a grid as an m × n matrix. The data are projected in a Geographic (latitude/longitude) projection, with the WGS84 horizontal datum and the EGM96 vertical datum. The vertical error of the DEMs is reported to be less than 5 m on relatively flat areas and 16 m on steep and rough areas [22]. Note that meanwhile the United States Geological Survey (USGS) released a 1 arc-second version of SRTM [23]. This version could not yet be incorporated in this study, however. In a study over Swiss alpine glaciers [24], SRTM 90 m DEM elevations were compared to the Swiss 25 m national DEM (DHM25). Results indicate that some significant differences occur over individual glaciers, but that differences tend to level out when averaged over larger areas. As we also upscale results from individual glaciers to regions, we may assume that our results obtained using the 90 m SRTM DEM are still valid. Future studies are recommended to check and, if necessary, correct DEM data for possible co-registration errors [25]. GLIMS Glacier Outlines The GLIMS project is designed to monitor the world's glaciers, primarily using data from optical satellite instruments. Now, over 60 institutions worldwide are involved in GLIMS for inventorying the majority of the world's estimated 160,000 glaciers.
These glaciers are distributed in GIS shapefile format and are referenced to the WGS84 datum. In this study, we downloaded the glacier mask presenting glacial outlines on the Tibetan Plateau, submitted by the Chinese Academy of Sciences, as shown in Figure 1 [1]. The glacier mask is based on aerial photography, topographic maps and in situ measurements. The product was released on 21 July 2005, but the state of the glaciers is expected to represent the situation in 2002 [26]. Each glacier is represented by a polygonal vector with attributes such as identification code, area, width, length, minimum elevation, maximum elevation, and name. Note that a new Chinese glacier inventory was published in 2015 [27]; this version could not yet be incorporated in this study. Methods To estimate a glacial thickness change trend, we consider differences between glacial surface elevations derived from 2003-2009 ICESat laser altimetry and a digital elevation model. Here, the digital elevation model is used as a reference surface. In addition, a glacier mask is used to identify ICESat elevations that are likely to sample glaciers. Each difference is time-stamped by the ICESat acquisition time. Valid differences obtained during the same ICESat campaign track over a certain homogeneous glaciated area, also called a sampled glaciated area, are used to estimate a mean difference. Mean differences for each sampled glaciated area are grouped to form a time series. Consecutively, a temporal trend is estimated through the mean differences per area, resulting in a temporal trend of glacial thickening or thinning. Differences between the ICESat GLA14 elevations and the reference SRTM DEM may correspond to change in glacial thickness between 2003 and 2009 if certain requirements are met. However, the vertical accuracy of each ICESat footprint strongly depends on terrain surface characteristics, so we have to remove uncertain footprints before the estimation. Therefore, we estimate surface slope and roughness from the SRTM DEM. Based on the SRTM DEM, the terrain surface parameters slope S and roughness R are estimated, using a 3 × 3 kernel scanning over all pixels of the grid. For each pixel, the slope S in decimal degrees is locally estimated by Equations (1)-(3) [28,29], where the h_i values (i = 1, …, 9) correspond to the DEM elevations in the kernel, while ∆lat and ∆lon are the height and the width of a grid cell in meters, estimated by the distance formula of Equation (4) [30], in which d is the shortest distance over the earth's surface (the "as-the-crow-flies" distance) between the two points (λ1, ϕ1) and (λ2, ϕ2) in radians in a geographic coordinate system and r is the earth's radius (mean radius = 6371 km). The roughness R in meters is defined as the root mean square of the differences ê_i between the grid heights and the local 3 × 3 plane, best fitting in the least squares sense [31], following Equation (5).
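To make the computation above concrete, the sketch below estimates slope and roughness for one 3 × 3 DEM window. Since Equations (1)-(5) did not survive the text extraction, the exact finite differences are an assumption: Horn's method is used here for Equations (1)-(3), the haversine formula for Equation (4), and a least squares plane fit for Equation (5); all function and variable names are illustrative.

```python
import numpy as np

R_EARTH = 6371000.0  # mean earth radius in meters, as stated in the text

def haversine(lon1, lat1, lon2, lat2):
    """Great-circle ("as-the-crow-flies") distance in meters, Equation (4)."""
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R_EARTH * np.arcsin(np.sqrt(a))

def slope_and_roughness(window, dlon, dlat):
    """Slope (degrees) and roughness (m) for one 3x3 DEM window.

    window: 3x3 array of elevations h1..h9 (row by row, row 0 assumed north);
    dlon, dlat: cell width and height in meters, e.g., from haversine().
    Horn's finite differences stand in for Equations (1)-(3); roughness is
    the RMS of the residuals of the best-fitting plane, per Equation (5).
    """
    h = window
    dz_dx = ((h[0, 2] + 2 * h[1, 2] + h[2, 2])
             - (h[0, 0] + 2 * h[1, 0] + h[2, 0])) / (8 * dlon)
    dz_dy = ((h[2, 0] + 2 * h[2, 1] + h[2, 2])
             - (h[0, 0] + 2 * h[0, 1] + h[0, 2])) / (8 * dlat)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # Least squares plane z = a*x + b*y + c through the 9 cell centers.
    x, y = np.meshgrid([-dlon, 0.0, dlon], [-dlat, 0.0, dlat])
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(9)])
    coeff, *_ = np.linalg.lstsq(A, h.ravel(), rcond=None)
    residuals = h.ravel() - A @ coeff
    roughness = np.sqrt(np.mean(residuals ** 2))
    return slope, roughness
```

In practice, the kernel would be slid over the full concatenated SRTM grid, with dlon and dlat evaluated per latitude band, since the metric width of a 3 arc-second cell shrinks toward the poles.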
Determining a Sampled Glaciated Area Because of the orbital configuration of ICESat and its along-track-only sampling, Tibetan glaciated areas are only sampled sparsely by ICESat. In addition, surface elevation changes on these mountain glaciers are expected to be affected significantly by the orientation and face of the corresponding mountain range. For example, the south face of the Himalayas experiences more precipitation than the north face, while, on the other hand, north faces experience less incoming solar radiation. Therefore, we decided to group nearby glaciers having a similar orientation into one sampled glaciated area, while, on the other hand, glaciers on different sides of a mountain range ridge were grouped into different areas. First, we extracted footprints of all ICESat campaigns within the GLIMS glacier outlines, as illustrated in Figure 2. Then, each glaciated area outline was manually determined, by considering the locations of the glaciers and the ICESat footprints. For example, in Figure 2 the ICESat-sampled glaciers having a northern orientation were grouped into one glaciated area, A, while those on the other side of the mountain ridge were grouped into another glaciated area, B. Finally, each glaciated area was coded by an identification number. Identifying a Glacier Elevation Difference A glacier elevation difference ∆h is identified as the difference between the elevation of an ICESat footprint within a sampled glaciated area and the reference SRTM DEM, compare Equation (6), where ∆h is in meters above EGM2008. Each glacier elevation difference ∆h depends on the characteristics of the terrain illuminated by the ICESat pulse and the characteristics of the ICESat measurement itself. It is in principle also affected by the local quality of the SRTM reference elevation, but in this study it is assumed that the quality of the SRTM DEM is not location dependent. What is assessed in this study is the quality of the elevation difference with respect to the attributes described in Table 1. For this purpose, we extract ICESat footprints within the sampled glaciated areas and obtain their full attributes. Table 1 lists these attributes: Time, the ICESat acquisition time or arrival time of the laser pulse on the reflecting surface in UTC "dd-mm-yyyy" format, derived from the GLA14 data; Lat, the geodetic latitude in degrees, derived from the GLA14 data; Lon, the geodetic longitude in degrees, derived from the GLA14 data; Elev, the elevation in meters above WGS84, derived from the GLA14 data; GdHt, the geoid height in meters in the EGM2008 datum, derived from the GLA14 data; SatFlg, the saturation correction flag, identifying possible saturation issues, derived from the GLA14 data; and NumPk, the number of peaks in the Gaussian waveform decomposition, derived from the GLA14 data. A glacier elevation difference ∆h is maintained for further analysis if the corresponding ICESat measurement is considered good according to the following criteria. First, we select those footprints whose return echo is not or only lightly saturated and that moreover have only one peak in their Gaussian decomposition. That is, the value of SatFlg should equal 0 or 1, and the value of NumPk should equal 1. A footprint with one mode is expected to correspond to a homogeneous land surface.
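As a minimal illustration, the sketch below bundles the footprint attributes of Table 1 and the quality criteria from this and the following paragraph into a single filter. The record layout and field names are assumptions, as is the reading of Equation (6) as the geoid-referenced ICESat elevation (Elev − GdHt) minus the co-located SRTM elevation; this is not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Footprint:
    """One ICESat footprint with the Table 1 attributes (names illustrative)."""
    time: str       # acquisition time, "dd-mm-yyyy"
    lat: float      # geodetic latitude (deg)
    lon: float      # geodetic longitude (deg)
    elev: float     # elevation above WGS84 (m)
    gd_ht: float    # EGM2008 geoid height (m)
    sat_flg: int    # saturation correction flag
    num_pk: int     # number of peaks in the Gaussian decomposition
    h_srtm: float   # co-located SRTM elevation (m above EGM2008)
    slope: float    # local terrain slope (deg), from the SRTM DEM
    rough: float    # local terrain roughness (m), from the SRTM DEM

def elevation_difference(fp: Footprint) -> float:
    """Equation (6), assumed here as (Elev - GdHt) - h_SRTM: the difference
    to the SRTM reference, in m above EGM2008."""
    return (fp.elev - fp.gd_ht) - fp.h_srtm

def keep_footprint(fp: Footprint, s0: float, r0: float) -> bool:
    """Quality criteria: not or only lightly saturated, single-peak return,
    not cloud-affected (|dh| <= 100 m, see the next paragraph), and below
    the slope and roughness thresholds S0 and R0 of the chosen setting."""
    return (fp.sat_flg in (0, 1)
            and fp.num_pk == 1
            and abs(elevation_difference(fp)) <= 100.0
            and fp.slope <= s0
            and fp.rough <= r0)
```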
Then we remove footprints affected by clouds. If ICESat footprints are affected by clouds, the elevation variation within one track can be very large, while the altitude difference with other tracks is high [16]. In this study, if the ICESat elevation difference to the SRTM DEM ∆h is larger than 100 m, the footprint is assumed to be affected by clouds and is removed from further analysis. Here, we analyze different settings incorporating the terrain surface characteristics slope and roughness. We remove footprints with a slope S bigger than a threshold S0 and a roughness R bigger than a threshold R0. Applying strict thresholds will result in a relatively small number of remaining glacier elevation differences, albeit of relatively high quality. A slope S below 10° is always considered good, while a slope of over 30° results in an unacceptable bias. The roughness R is estimated directly from the SRTM data; its lower limit of 5 m corresponds to relatively flat areas, while its upper limit of 15 m corresponds to high-relief and rough areas. In the following, we consider 15 different settings with slope and roughness values within these outer limits, as described in Table 2. Each record in Table 2, corresponding to one such setting, also summarizes the corresponding resulting trend in glacial thinning/thickening for the whole Tibetan Plateau between 2003 and 2009, as determined by the following steps. Obtaining Mean Glacier Elevation Differences For each sampled glaciated area, glacier elevation differences are all time-stamped by ICESat acquisition time. The ICESat acquisition time t_i is defined per ICESat track segment, where one track samples a glaciated area with consecutive individual footprints. A mean glacier elevation difference ∆h_i is considered representative for the height of the glaciated area above the SRTM base map at ICESat acquisition time t_i. The mean difference ∆h_i and its standard deviation s_i are computed using Equations (7) and (8), where k is the number of ICESat footprints in the track segment that are sampling a glaciated area at ICESat acquisition time t_i and ∆h_ij is the jth elevation difference, j = 1, …, k. Each ICESat acquisition time t_i is considered as an epoch in the time series used to estimate a temporal trend using linear regression. Here, we only use the mean glacier elevation difference ∆h_i in a time series if its standard deviation s_i is less than a threshold Std0 and the number of ICESat footprints k is at least six. The threshold Std0 is defined to be equal to the roughness threshold R0 for each setting with respect to terrain slope and roughness. To remove unreliable elevation differences, we build an iterative algorithm. That is, if s_i is bigger than Std0 and |∆h_ij − ∆h_i| is maximal for j in 1, …, k, the jth elevation difference ∆h_ij is removed. Then, ∆h_i and s_i are re-computed. This process is repeated until s_i drops below Std0 or k is less than six. In Figure 3, the values ∆h_i and s_i, representing mean glacier elevation differences and their standard deviations, are shown between 2003 and 2009 for two glaciated areas A and B in the case that S0, R0, and Std0 are 15°, 10 m, and 10 m, respectively.
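The per-segment averaging and iterative trimming just described can be sketched as follows; the k − 1 normalization of the standard deviation in Equation (8) is an assumption, and the function name is illustrative.

```python
import numpy as np

def mean_difference(dh, std0, min_k=6):
    """Mean glacier elevation difference per track segment, Equations (7)-(8),
    with the iterative trimming described above: while the standard deviation
    exceeds Std0, drop the difference farthest from the mean and recompute.
    Returns (mean, std, surviving differences), or None if fewer than min_k
    differences remain (the segment is then rejected)."""
    dh = np.asarray(dh, dtype=float)
    while dh.size >= min_k:
        mean = dh.mean()                 # Equation (7)
        std = dh.std(ddof=1)             # Equation (8), k-1 normalization assumed
        if std <= std0:
            return mean, std, dh
        dh = np.delete(dh, np.argmax(np.abs(dh - mean)))  # drop worst outlier
    return None
```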
Estimating a Temporal Glacial Thickness Change Trend For each glaciated area on the Tibetan Plateau, a temporal linear trend is estimated if there are at least six average differences or epochs available, corresponding to at least six ICESat campaign tracks during the observed period 2003-2009. For example, Figure 3 shows the distribution of the average differences of the glaciated areas A and B between 2003 and 2009. An annual glacial thickness change trend is estimated by linear adjustment using Equation (9) [33], y = Ax, where y = [∆h_1, ∆h_2, …, ∆h_n]^T is the vector of the average elevation differences per epoch, x = [x_0, v]^T is the vector of parameters of the linear trend, offset x_0 and rate v, and A is the design matrix with ith row [1, t_i], in which t_i denotes the ith epoch. Note that n is required to be at least six epochs. The rate v of a linear glacial thickness change is obtained by solving Equation (9), and the root mean square error (RMSE), as the standard deviation of the residuals, is also computed, using Equation (10) with the least squares residual vector ê = y − Ax̂. This value consists of a combination of possible data errors and mainly the non-validity of the linear regression model. In addition, the propagated standard deviation σ_vv of the estimated rate v is given in Equation (11), where Q_yy denotes the variance matrix, in which s_i is the standard deviation of the ith average difference. These values are considered as the confidence interval for the estimated glacial thickness change.
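Below is a sketch of one consistent reading of Equations (9)-(11): an ordinary least squares fit of offset and rate, with the variance matrix Q_yy = diag(s_i²) propagated through the estimator to obtain σ_vv. Whether the fit itself is additionally weighted by Q_yy is not stated in the text, so this remains an assumption, as do the exact RMSE normalization and the function name.

```python
import numpy as np

def estimate_trend(t, dh_mean, s):
    """Linear trend through the mean differences, Equations (9)-(11).
    t: epochs in decimal years; dh_mean: mean differences per epoch (m);
    s: their standard deviations (m). Returns the rate v (m/a), its
    propagated standard deviation sigma_vv, and the RMSE of the residuals."""
    t = np.asarray(t, float)
    y = np.asarray(dh_mean, float)
    if t.size < 6:
        raise ValueError("at least six epochs are required")
    A = np.column_stack([np.ones_like(t), t])         # design matrix, Eq. (9)
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)     # [offset x0, rate v]
    e_hat = y - A @ x_hat                             # residual vector
    rmse = np.sqrt(np.mean(e_hat ** 2))               # Equation (10), assumed form
    # Propagate Qyy = diag(s_i^2) through the OLS estimator:
    # Qxx = (A^T A)^-1 A^T Qyy A (A^T A)^-1           # Equation (11), one reading
    n_inv = np.linalg.inv(A.T @ A)
    q_xx = n_inv @ A.T @ np.diag(np.asarray(s, float) ** 2) @ A @ n_inv
    sigma_vv = np.sqrt(q_xx[1, 1])                    # std of the rate v
    return x_hat[1], sigma_vv, rmse
```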
Results Following the method above, temporal glacial thickness change trends on the whole Tibetan Plateau between 2003 and 2009 are estimated for 15 different settings with respect to terrain slope and roughness. The results are shown in Table 2. They indicate that, as expected, the number of observed glaciated areas and the RMSEs of the differences estimated by the linear regression increase if the thresholds on slope S0 and roughness R0 are relaxed. In practice, the mean rates of glacial thickness change trends on the whole Tibetan Plateau for the five settings from S11 to S15 (all with R0 = 15 m) are quite similar. In addition, the number of trends having a RMSE of over 5 m significantly increases when ICESat footprints at slopes of over 20° are incorporated as well. A RMSE of over 5 m could correspond to a large fluctuation in glacial thickness or a bad fit of the linear trend model. Here, S0 and R0 are the terrain slope and roughness thresholds, respectively. For each setting, N is the number of glaciated areas observable with the given setting, the numbers v and σ_vv are the resulting overall rate of glacial thickness change and its propagated standard deviation, and RMSE is the average of the root mean square errors (RMSEs) of the linear regression model. Additionally, N5 is the number of observed glaciated areas having a RMSE of below 5 m. In this paper, we present the results of setting S13, where S0 and R0 equal 20° and 15 m, respectively, because in this case a maximum number of 67 areas are observed with RMSE ≤ 5 m. We assume that the ICESat footprints selected for the estimation of glacial thickness change given these settings are relatively appropriate, given the steep and rough terrain of the Tibetan Plateau and given the quality of the SRTM DEM. Overall Glacial Thickness Changes: Tibetan Plateau and Its Basins In the case that the thresholds S0 = 20° for terrain slope and R0 = 15 m for roughness are applied, the result indicates that 90 glaciated areas on the whole Tibetan Plateau are sampled by enough ICESat footprints to estimate thickness change. In addition, 67 RMSEs are below 5 m. For each glaciated area, a temporal trend in glacial thickness is estimated, as shown in Table S1. In Figure 4, a glacial thickness change rate is symbolized by a red or blue disk at a representative location in each observed glaciated area. Most of the observed glaciated areas in the Himalaya, the Hengduan Mountains and the Tanggula Mountains experienced a serious decrease in glacial thickness. However, in most of the observed glaciated areas in the Western Kunlun Mountains in the northwest of the Tibetan Plateau, glaciers oriented toward the north were thickening while those oriented toward the south were thinning. In general, glacial thickness on the whole Tibetan Plateau decreased between 2003 and 2009 at a mean rate of −0.17 ± 0.47 m a⁻¹. This number is obtained by averaging all estimated rates v and their propagated standard deviations σ_vv, but note that the size, distribution and representativeness of the observed glaciated areas are not taken into account. For this particular result, the absolute value of the estimated error, 0.47 m a⁻¹, is larger than the estimated rate of −0.17 m a⁻¹. This result indicates that, given the measurements, it is most likely that glaciers on the Tibetan Plateau were thinning between 2003 and 2009, but there is some significant chance that they were actually thickening. A more extensive study on the uncertainties associated with glacier mass balance studies from a geostatistics perspective can be found in [34].
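The plateau-wide number just given, and the per-basin numbers that follow, are plain averages of the per-area rates and of their propagated standard deviations, as the text states. A minimal sketch with hypothetical inputs:

```python
import numpy as np

def regional_mean(rates, sigmas):
    """Mean thinning/thickening rate over a set of observed glaciated areas:
    the plain average of the rates v and of their propagated standard
    deviations sigma_vv. Glacier size and spatial representativeness are
    deliberately not weighted in, per the text."""
    return float(np.mean(rates)), float(np.mean(sigmas))

# Hypothetical example with three areas (values are illustrative only):
print(regional_mean([-0.5, 0.1, -0.2], [0.3, 0.5, 0.6]))  # -> (-0.2, 0.466...)
```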
For each basin belonging to the Tibetan Plateau, a mean thinning or thickening rate v_B ± σ_B is estimated, as the average of the rates v and the propagated standard deviations σ_vv. The result is shown in Table 3. In practice, the rate per basin is of course affected by the area of each glacier within the basin. However, in this study we only estimate trends representative of nearby glacier groups. A next, but far from trivial, step would be to design an interpolation scheme taking the sparsely available trends as input and using them to estimate an overall trend, while incorporating, e.g., the relative location, orientation, and representativeness of each available trend. Here, the area of glaciers is not taken into account when estimating overall glacial rates. The results show that mass loss due to glacier thinning seems to take place in most of the basins, excluding the Tarim Basin. Subsequently, lost or gained water volumes from glaciers by basin are approximately estimated by multiplying the mean glacial thickness change rate with the total glacier area of each basin, as shown in Table 3. Table 3 lists the mean glacial thickness change rate per basin, where N is the number of observed glaciated areas and the total glacier area is obtained from the GLIMS glacier mask; its columns are Basin, Total Glacier Area (km²), N, v_B ± σ_B (m a⁻¹), and Water Volume (Gt a⁻¹). Impact of Orientation on Glacial Thickness Change The results indicate that glacial thickness change indeed strongly depends on the relative position in a mountain range. Most glaciers at a north face increase in volume, although some decrease, but in that case at a slower rate than their south-facing counterparts. In total, there are 15 pairs of observed glaciated areas, i.e., adjacent glaciated areas located on opposite sides of the main mountain ridge, all listed in Table 4. Such a situation is illustrated in Figure 7, showing the Western Kunlun Mountains range. The temporal trend between 2003 and 2009 in the north-facing glaciated area A equaled 0.69 ± 0.30 m a⁻¹, while in its south-facing counterpart, glaciated area B, the trend had the opposite sign, equaling −1.02 ± 0.29 m a⁻¹. Similarly, the glacial thickness change rates at E, facing north, and F, facing southeast, were 0.58 ± 0.28 m a⁻¹ and −0.29 ± 0.44 m a⁻¹, respectively. Furthermore, the glacial thickness in area C, oriented toward the northeast, was estimated to increase at a rate of 0.09 ± 0.30 m a⁻¹, while glaciers in area D, oriented toward the southwest, thinned at a rate of −0.29 ± 0.20 m a⁻¹. A possible explanation is that south-facing glaciers receive much more solar radiation than north-facing glaciers.
Even glaciated area C, oriented toward the northeast, faces the sun more than areas A and E. Similarly, glaciated area D, oriented toward the southwest, receives less sunlight than glaciated areas B and F. Additionally, this can also be an effect of precipitation driven by orography. Discussion In this section, we discuss the sensitivity of our results to the removal of ICESat footprints based on terrain surface criteria and the GLIMS glacier mask. First, we discuss the impact of the terrain surface criteria for assessing the signal quality of the ICESat measurements. Second, the GLIMS glacier mask is static, which has some effect on the estimation of the glacial thickness change trend. Finally, a comparison of our results to previous research is presented. Exploring Terrain Surface Criteria Several large glaciers sampled by ICESat footprints were considered to explore appropriate terrain surface criteria. The following relations were studied while determining the thresholds for terrain slope and roughness: glacier elevation difference ∆h vs. slope S, roughness R and elevation h_SRTM, respectively; and slope S vs. elevation h_SRTM. The results are illustrated here for one case study considering a glacier area at Mount Gurla Mandhata I (see Table S1), belonging to the Ganges Basin. The results indicate that glacier elevation differences ∆h increase with terrain slope, as illustrated in Figure 8a. The existence of such a slope bias has already been described [35]. Large valley glaciers often have a surface roughness of below 20 m, see Figure 8b. In addition, a larger surface roughness will result in a positive bias in the estimated glacial thickness. The relaxation of the slope threshold results in an increase in the number of accepted ICESat track segments sampling a glaciated area. This is illustrated in Figure 9 for an area in the Hengduan Mountains (No. 6 in Table S1). In Figure 9a, a number of 10 track segments was accepted, given a slope threshold of 15°. Based on these track segments, a trend was estimated with a RMSE of 4.18 m. In Figure 9b, the slope threshold was relaxed to 25°, resulting in a total number of 13 track segments.
However, the quality of the final trend (RMSE = 6.39 m) decreases with the increase in the number of track segments. These two examples show some of the impacts of the slope and roughness thresholds. In the example of Figure 9 (area No. 6 in Table S1, in the Hengduan Mountains, belonging to the Brahmaputra Basin), the roughness R0 was kept fixed at 15 m. In previous research, the results were annual glacial thickness change trends for defined regions [13,16]. These trends were directly estimated from all glacier elevation differences between ICESat elevations and the reference SRTM DEM on glacier areas, after removing footprints affected by clouds. This method ensures the availability of sufficient ICESat footprints to estimate trends in glacial thickness for relatively large regions. However, it ignores the impact of the high-relief terrain characteristics of the Tibetan Plateau and surrounding mountain ranges. In addition, their definition of the sampled regions somehow smooths out significant signal, as it lumps together glaciers with different characteristics with respect to orography and orientation. Clearly, there is a difficult trade-off between using more elevations of less individual quality and using fewer elevations of better quality. State of the GLIMS Glacier Mask Observations serving as input for the GLIMS glacier mask were obtained from 1978 to 2002, using aerial photographs, topographic maps and in situ measurements [24]. Because of the remoteness and harsh climatic conditions of the Tibetan Plateau, it is difficult to carry out field investigations; therefore, the Chinese glacier inventory that was used to establish the GLIMS glacier mask took place in different periods. The inventory was organized per drainage basin. The inventory, for example, took place at the Qilian Mountains in 1981, at the Inner Plateau in 1988, etc.
Positional uncertainty is expressed as a distance of 20 m, i.e., a given location lies within a circle of 20 m radius around the true location. In addition, recent studies report that the total glacier area on the Tibetan Plateau is shrinking [2,4,5,7-9]. Therefore, in this study, some ICESat footprints acquired between 2003 and 2009 will fall within the GLIMS glacier outlines but are no longer sampling a real glacier. This will affect the mean elevation difference ∆h_i at the ICESat acquisition time t_i. However, the number of such footprints within the same ICESat track segment is not large, because the along-track distance between consecutive footprints is approximately 170 m, and criteria on the terrain surface are in place to remove uncertain footprints. To further improve the glacial thickness change trends derived from ICESat/GLAS data, two techniques could be applied. First, the glacier mask could be checked for each ICESat campaign using contemporary spectral (e.g., Landsat 8) or SAR data (e.g., Sentinel 1). Alternatively, classification techniques could be applied to the ICESat full waveform signals (GLA01 or GLA06 product) to verify if an ICESat signal is sampling snow, ice or rock [36]. Applying both types of analysis for the complete Tibetan Plateau is quite labor intensive, however. Additionally, the most cloud-free Landsat scenes, acquired between 2003 and 2011, could be used to delineate glacier outlines [13,16]. However, it is difficult to match the acquisition time of ICESat campaigns with Landsat data for the full observed period for the whole Tibetan Plateau. Glacial Thickness Changes for Sub-Regions Our results consider annual glacial thickness change trends for relatively small areas. It is interesting to compare them with previous research. Neckel et al. (2014) grouped glaciers on the Tibetan Plateau into eight sub-regions, as illustrated in Figure 10 [16]. One of their results consists of annual
glacial thickness change trends for each of these eight sub-regions. Accordingly, we estimated glacial thickness change trends for the same eight sub-regions as well. For each sub-region, a mean glacial thickness change rate v_R ± σ_R is estimated as the average of the glacial thickness change rates v and the propagated standard deviations σ_vv of the observed glaciated areas within the sub-region. The results are presented in Table 5 and compared to Neckel's ∆h trends. The comparison indicates that the sub-regions A, F, G, and H, relatively densely covered by glaciers, have a similar thickness change rate. Considering the other sub-regions, sub-region D has a somewhat similar trend, while the rates in sub-regions B and C show a relatively large disparity.
The disparity between sub-regions B and C may be caused by: (i) the low number of observed glaciated areas; and (ii) differences in orientation of the observed glaciated areas: sub-region B consists of two south-facing glaciated areas and one north-facing glaciated area, while sub-region C consists of three south-facing glaciated areas and two north-facing glaciated areas. For sub-region E, when we set S_0 = 20° and R_0 = 15 m, the number of ICESat footprints is not sufficient to estimate a temporal trend. We assume that the total number of observed glaciated areas per sub-region and their orientation affect these mean glacial thickness change rates. That is, when the number of observed glaciated areas is large enough and the observed glaciated areas located on opposite sides of the main mountain ridge are approximately balanced in number, the mean glacial thickness change trend per sub-region will be more reliable.

Generally, our results are comparable to the elevation change rates v_G ± σ_G estimated for high-mountain Asian glaciers by Gardner et al. (2013) [15]. Both results indicate that most of the glaciers on the Tibetan Plateau are thinning, except for the Western Kunlun Mountains, as shown in Table 6. The strongest glacier thinning occurs in the Himalaya range and in the Hengduan Mountains. The glacial thickness change rate in the western and inner plateau is nearly balanced, i.e., close to zero. Conversely, glaciers in the Western Kunlun Mountains are thickening.

Representativeness of an Observed Glaciated Area

A difficult question is to what extent the sparse estimates obtained by ICESat are representative for the full population of Tibetan Plateau glaciers. This question cannot be answered here, but we can assess which fraction of the glaciers is sampled. For this purpose, we determine the ratio κ between the glaciated area sampled by ICESat footprints and the total glaciated area, following Equation (13): κ = N·A_F/A_G. Here N is the total number of accepted ICESat footprints, A_F is the area covered by one ICESat footprint and A_G is the total sampled glaciated area. A glaciated area can be considered to be well sampled if the total number of ICESat footprints sampling it is large while its total area is relatively small. An ICESat footprint, with its diameter of 70 m, occupies an area A_F of ~3850 m². For example, in Figure 2, glaciated area A occupies 30.6 km² and is sampled by 108 accepted ICESat footprints; therefore, A's sample ratio equals 0.0136. Similarly, glaciated area B occupies 8.5 km² and is sampled by 94 accepted ICESat footprints, so B's sample ratio is 0.0426. In this way, the sample ratio for each of the 90 observed glaciated areas is determined (see Table S1). Note that this ratio does not take the spatial and temporal distribution of the ICESat footprints into account, and therefore only provides a very rough indication of how well a glaciated area is sampled. Similarly, the sample ratio for all observed glaciated areas on the whole Tibetan Plateau can be computed as well. The total area of the 90 observed glaciated areas on the whole Tibetan Plateau is 5831.5 km², and these glaciated areas were sampled by a total of 16,002 accepted ICESat footprints. Thus, in this case the sample ratio equals 0.0106. Note that one location might be sampled by several ICESat footprints from different epochs; that is not taken into account in this first assessment.
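The sample-ratio arithmetic is easy to verify; a minimal sketch of Equation (13) in Python, using the exact footprint area pi*(35 m)^2 (about 3850 m^2) together with the footprint counts and areas quoted above:

```python
import math

# One ICESat footprint: ~70 m diameter circle, so A_F = pi * 35^2 m^2
A_F = math.pi * 35.0 ** 2   # ~3848.5 m^2

def sample_ratio(n_footprints, glaciated_area_m2):
    """Equation (13): kappa = N * A_F / A_G, i.e., the fraction of a
    glaciated area covered by accepted ICESat footprints."""
    return n_footprints * A_F / glaciated_area_m2

# Figures quoted in the text:
print(round(sample_ratio(108, 30.6e6), 4))       # glaciated area A -> 0.0136
print(round(sample_ratio(94, 8.5e6), 4))         # glaciated area B -> 0.0426
print(round(sample_ratio(16002, 5831.5e6), 4))   # whole plateau    -> 0.0106
```

The three printed ratios reproduce the values quoted in the text, confirming that the footprint area used there corresponds to a circular 70 m footprint.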
Conclusions

By exploiting ICESat laser altimetry data, thickness change rates of 90 glaciated areas on the whole Tibetan Plateau were estimated between 2003 and 2009. By considering the terrain surface criteria of slope and roughness, temporal glacial thickness change trends for the whole Tibetan Plateau were evaluated for 15 different settings. The results show that settings of terrain slope and roughness equal to 20° and 15 m, respectively, for removing uncertain ICESat footprints are appropriate for the steep and rough glaciers of the Tibetan Plateau. In addition, the orientation of glaciers has been taken into account. The study indicated that most of the observed glaciated areas in the Himalaya, the Hengduan Mountains and the Tanggula Mountains experienced serious thinning, whereas in most of the observed areas in the Western Kunlun Mountains north-facing glaciers were thickening while south-facing glaciers were thinning. Glacial thickness changes thus strongly depend on the relative position in a mountain range: most north-facing glaciers increase in thickness, and although some decrease, they do so at a slower rate than their south-facing counterparts. Our results complement previously estimated water level changes of Tibetan lakes [37,38]. Using additional explicit runoff relations between glaciers and lakes [39], correlations between glacial and lake level changes can be determined to improve understanding of the water balance on the Tibetan Plateau.
The inotropic effect of isoniazid on isolated atria of guinea-pigs

The incidence of heart failure is increasing every year, while there is no ideal inotropic agent yet available. Therefore, the search for an ideal inotropic agent is still needed. Isoniazid (INH) has been widely used as an anti-tuberculosis agent and has a structure similar to that of the K+ channel blocker 4-aminopyridine. Considering that a K+ channel blocker may prolong depolarization, leading to a greater influx of Ca2+ into the cell and increased cardiac contractility, the effect of INH on cardiac contractility was investigated. In guinea-pig isolated atria preparations, INH produced a concentration-dependent increase in contractility. The maximal concentration of INH for increasing the force of contraction is 100 mM; this is approximately equieffective with cardiac contractility induced by 1 µM adrenaline. The difference is that INH has no effect on the rate of contraction. The non-selective beta-adrenoceptor antagonist propranolol (1 µM) inhibits the inotropic action of adrenaline (1 µM) but has no effect on INH (100 mM)-induced cardiac contractility. It is speculated that INH possesses a positive inotropic effect. This effect is not associated with the activation of cardiac beta-adrenoceptors by catecholamines as a consequence of monoamine oxidase inhibition, but may occur through the prolongation of depolarization of the myocardial cells by blockade of K+ channels.

None of these agents are safe and effective in long-term trials.2 Therefore, the search for effective inotropic agents is needed to conquer heart failure. Isoniazid (INH) is an oral anti-tuberculosis drug which has been used worldwide since 1945. The structure of INH is very similar to that of 4-aminopyridine, a potassium channel blocking agent (Fig. 1). It has been well established that, by blocking potassium channels, 4-aminopyridine is able to prolong cell depolarization and excitation.3 In cardiac cells, prolonged depolarization may allow more calcium influx into the cell, leading to increased contractility.4 In this study, the effect of INH in increasing cardiac contractility was investigated using isolated guinea-pig atria.

METHODS

Guinea-pigs (300-350 g) of either sex were killed and the hearts rapidly removed. The atria were dissected free and suspended between a tissue hook and an isotonic transducer in an organ bath containing 10 ml of fresh physiological salt solution (PSS), maintained at 37 degrees Celsius and bubbled with a mixture of 95% O2/5% CO2. The atria were placed under a basal tension of 1 g wt. The spontaneous atrial contractions were measured with a strain gauge transducer recording the rate and force of atrial beating. The isolated atria were allowed to equilibrate for a period of 20 min before any experimental procedures were commenced, during which time the bathing solution was regularly exchanged. The responsiveness of the tissue was then tested by the addition of adrenaline (1 µM) to the organ bath. Responses to the addition of adrenaline were allowed to develop fully, after which the agonist was removed by repeatedly exchanging the PSS in the organ bath. Concentration-response relationships for the positive inotropic effect of isoniazid (INH) were established by its cumulative addition to the organ bath until the maximal response was obtained.
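Such a cumulative concentration-response experiment is typically summarized by fitting a sigmoid (Hill/Emax) curve; a minimal sketch in Python, using hypothetical per-concentration responses (only the ~64% maximum near 100 mM is reported later in the paper, so the intermediate points below are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, emax, ec50, n):
    """Hill equation: percent increase in force of contraction as a
    function of agonist concentration (same units as ec50)."""
    return emax * conc ** n / (ec50 ** n + conc ** n)

# Hypothetical cumulative INH concentrations (mM) and responses (%):
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([5.0, 15.0, 33.0, 52.0, 64.0])

popt, _ = curve_fit(hill, conc, resp, p0=[65.0, 20.0, 1.0])
emax, ec50, n = popt
print(f"Emax ~ {emax:.0f}%, EC50 ~ {ec50:.0f} mM, Hill slope ~ {n:.1f}")
```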
To examine the effect of a beta-adrenoceptor blocking agent on the positive inotropic response to INH, the atria were exposed twice to either INH or adrenaline, 30 min apart. Propranolol (1 µM) was added to the atrial bathing solution 15 min before commencing the second exposure to the test agonist; it then remained present throughout the rest of the experiment.

Data were expressed as means and standard errors of the means (± s.e.m.) and were analyzed by paired, two-tailed Student's t-test. In all cases, a level of probability (P) of less than 0.05 was taken to indicate statistical significance.

Effect of propranolol on INH-induced positive inotropic response

Concentrations of INH (100 mM) and adrenaline (1 µM) produced similar increases in force of contraction. The beta-adrenoceptor antagonist propranolol (1 µM), when introduced 15 min before the second exposure of the guinea-pig atria, had no significant effect on INH (100 mM)-induced contractility (P > 0.05, unpaired t-test). The mean values of the percent increase in force of contraction to 100 mM INH alone and in the presence of propranolol were 64 ± 2.5 and 61 ± 3%, respectively. However, propranolol significantly reduced the adrenaline (1 µM)-induced contractility (P < 0.05, unpaired t-test). The mean values of the percent increase in force of contraction to adrenaline (1 µM) given alone and in the presence of propranolol were 72 ± 4 and 32 ± 2.5%, respectively (Fig. 3).

DISCUSSION

The myocardial cell membrane exhibits selective permeability to ions, positively charged such as Na+, K+ and Ca2+ or negatively charged such as Cl-, creating an electrical potential across the cell membrane. The ions move across the cell membrane through ion channels. An action potential, responsible for initiating each heart contraction, is due to a sudden increase in membrane conductance to Na+; this allows a rapid influx of Na+ into the cell, which in turn is responsible for the rapid depolarization of the cell. INH is known to inhibit monoamine oxidase (MAO), which may lead to an increase of catecholamine release.7 It has also been reported that euphoria or convulsions in patients with tuberculosis who receive INH are due to the increased level of catecholamines as a consequence of MAO inhibition.5 Therefore, it is possible that the positive inotropic effect of INH is due to the activation of beta-adrenoceptors within the heart by catecholamines.

In order to investigate this, the positive inotropic effect of INH was tested in the presence of the non-selective beta-adrenoceptor antagonist propranolol. The effect of propranolol was also compared to adrenaline (1 µM), which is an approximately equieffective concentration to INH (100 mM) in inducing cardiac contractility. In the presence of propranolol, INH (100 mM) still produced a significant positive inotropic effect. In contrast, propranolol did inhibit the positive inotropic effect of adrenaline (1 µM). These results clearly demonstrate that INH increases cardiac contractility in a manner that is not due to activation of cardiac beta-adrenoceptors by increased levels of catecholamines. This is in accord with the finding that INH has no effect on heart rate. In conclusion, INH possesses a positive inotropic effect, which is not mediated through the activation of beta-adrenoceptors but may occur through the prolongation of myocardial depolarization by blockade of K+ channels.
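As a closing check on the numbers above: because the results are reported only as mean ± s.e.m., the two propranolol comparisons can be re-examined from summary statistics; a minimal sketch assuming a hypothetical group size of n = 6 atria (the paper does not state n), converting s.e.m. to standard deviation via s.d. = s.e.m. × sqrt(n):

```python
import math
from scipy.stats import ttest_ind_from_stats

def compare(mean1, sem1, mean2, sem2, n=6):
    """Unpaired t-test from reported mean +/- s.e.m.; n is a hypothetical
    per-group sample size, not stated in the paper."""
    sd1, sd2 = sem1 * math.sqrt(n), sem2 * math.sqrt(n)
    return ttest_ind_from_stats(mean1, sd1, n, mean2, sd2, n)

# INH 100 mM alone vs. in the presence of propranolol (% increase):
print(compare(64, 2.5, 61, 3))   # p well above 0.05 -> no blockade

# Adrenaline 1 uM alone vs. in the presence of propranolol:
print(compare(72, 4, 32, 2.5))   # p well below 0.05 -> clear blockade
```

With these assumptions the first comparison is far from significance while the second is highly significant, matching the pattern reported above.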
Seeing the forest for the trees: Visualizing platformization and its governance

The complexities of platforms are increasingly at odds with the narrow legal and economic concepts in which their governance is grounded. This article aims to analyze platformization through the metaphorical lens of a tree to make sense of information ecosystems as hierarchical and interdependent structures. The layered shape of the tree draws attention to the dynamics of power concentration: vertical integration, infrastructuralization, and cross-sectorization. Next, the metaphor helps to revision the current patchwork of European regulatory frameworks, addressing the power asymmetry between citizens and the data-driven systems through which their daily practices are governed. Finally, the platformization tree serves to identify points of intervention that may inform European regulatory bodies and policy-makers to act as agents of change. Taking a holistic approach to platformization, this visual metaphor may inspire a set of principles that reshapes the platform ecosystem in the interest of society and the common good.

Introduction

What makes American Big Tech companies powerful and their platforms' governance complex? This article argues it is because they collectively operate an exclusive set of competing-cum-coordinating platforms that reign over the core of the world's digital information systems, from which they leverage unprecedented economic, societal, and (geo)political control. In recent years, tech companies have turned products into data services where customers pay mostly with their personal information and attention. Markets and public sectors, infrastructures, and utilities are drawn into a data-driven ecosystem which is thoroughly commodified and whose impact grows in line with burgeoning new fields, such as artificial intelligence and robotics technologies.

The complexities of platforms are increasingly at odds with the narrow legal and economic concepts in which their governance is grounded. Instead of concentrating on tech firms leveraging an ever-growing number of platforms, we propose to shift the focus to the dynamics of platformization and adjust governance strategies accordingly. Platformization is a process akin to industrialization or electrification, referring to a multifaceted transformation of globalized societies (Poell et al., 2019). The rise of corporate and state-controlled platform ecosystems has upended the once popular ideal of a universal and neutral Internet that connects the world. To some extent, it has also undermined classic distinctions between state, market, and civil society-concepts that are still vital in demarcating governmental arrangements. Global information systems reigned by techno-corporate apparatuses now supersede the economic powers of nations; their influence arguably surpasses the political clout of elected governments and administrations when it comes to regulating democracies and civic life (Moore, 2018).

While tech platforms increasingly control the gateways to all Internet traffic, data circulation, and content distribution-making entire societies dependent on their systems-they have managed to dodge conventional regulatory scrutiny (Gillespie, 2018). National and supranational regulatory frameworks (i.e. the European Union (EU)) typically scrutinize one aspect of governance, such as market concentration, freedom of information, or privacy rights, even when platformization runs across legal frameworks and across continents.
There is a growing need to understand how platformization works and to create new imaginaries that help redraft compartmentalized governance frameworks into a more holistic approach (section "From platform governance to governing platformization"). In an attempt to visualize the dynamics of platformization and its actors, this article proposes a "tree" as a constitutive metaphor (section "The platformization tree"). Such a metaphorical image may help make sense of information systems as complex structures whose operative power is wielded through hierarchical and interdependent layers; these layers intertwine visibly and invisibly, belowground as well as aboveground, horizontally and also vertically. The layered yet integrated shape of the tree draws attention to the dynamics of platformization: vertical integration, infrastructuralization, and cross-sectorization (section "The dynamics of platformization"). The metaphor also helps to revision the current patchwork of regulatory frameworks, addressing the power asymmetry between citizens and the information systems through which they are governed (section "Governing the unruly status of intermediary platforms"). Finally, the platformization tree serves to identify points of intervention that regulatory bodies, particularly in the EU, may deploy to act as agents of change, for instance by articulating a set of principles and values that reshapes the platform ecosystem in the interest of society and the common good (section "Reshaping governance to promote platform diversity").

From platform governance to governing platformization

"The platform Web is made up of privately owned public spaces, largely governed by the commercial incentives of private actors rather than the collective good of the broader society" is how Taylor Owen (2019) sums up the problem of the current platform society (Van Dijck et al., 2018). There is a growing discontent with tech companies that have become too big and multifaceted to operate transparently in the public eye; their extraordinary power also negatively affects markets and democracies. The social and economic costs of power concentration are becoming a global problem, due to "surveillance capitalism" that underpins the economic logic of data extraction controlling the lives of Western consumers (Couldry and Mejias, 2019; Srnicek, 2017; Zuboff, 2019). The American-based system is largely monopolized by five Big Tech companies (Alphabet-Google, Amazon, Facebook [FB], Apple, and Microsoft, a.k.a. GAFAM), which has now penetrated the core of economic and civic life on most continents, except for China. China operates a state-controlled, corporately run ecosystem of platforms revolving around its big three companies (Baidu, Alibaba, and Tencent, or BAT). Increasingly, the ideological clash between state powers manifests itself as a techno-corporate clash. Such clashes reveal that rather than operating as distinct platform ecosystems, they are intertwined at various levels. The entanglement between American, Chinese, and European interests in the global governance of digital innovation is a driver of mounting tensions between continental superpowers and their allies (DeNardis, 2020; Jia, 2018; Mueller, 2017; Steinberg, 2019; Winseck, 2017). The European Union (EU), despite a scarcity of home-grown "big" tech companies, tries to position itself as a governmental agent of change in the global digital economy.
In its policy document Shaping Europe's digital future (2019), the European Commission (EC) articulated its seemingly incongruous ambitions to prioritize tech innovation leadership in the data economy alongside a commitment to protect democratic and public values in the platform society, promoting a level playing field and open markets along with transparency, trustworthiness, and privacy. The EC has so far deployed a patchwork of regulatory interventions to deal with the problems caused by globally operating platform companies-from monopolization of online markets and violation of privacy protection to curbing disinformation and hate speech. The EC intends to make Europe the place-to-go for high-quality industrial data that can be used to create, for instance, AI-tools; at the same time and by the same means, it wants to create a framework for "common European data spaces"-a new digital data infrastructure that will stimulate and incentivize privately held data to be shared and used for the common good (European Commission [EC], 2020). To achieve such bold ambitions in 2021, it will be critical to refashion Europe's current patchwork of rule-based regulations and data policies into a holistic, principle-based type of governance.

Acknowledging the need for new imaginaries, we propose a visual metaphor that configures platformization as a dynamic process. In the past, platforms have often been examined as metaphorical constructs with technological, social, economic, and political dimensions (Gillespie, 2010; Van Dijck, 2013). Platforms are fueled by data and governed by algorithms; yet they function as part of platform ecosystems-an assemblage of networked platforms, governed by a particular set of mechanisms (Van Dijck et al., 2018: 9). In his seminal work, Benjamin Bratton (2016) has argued that platforms such as smart grids, clouds, and mobile apps evolve not as separate objects but as a computational apparatus with a new governing architecture. The layered architecture of platforms has been visualized as a collection of "stacks," reflecting features of modularity and accumulation (Andersson Schwarz, 2017; Tiwana, 2014; Walton, 2017). Internet activist Marleen Stikker (2019) distinguishes between three different types of stacks-the state, corporate, and public stack-to theorize the convergent and divergent interests of governments, markets, and commons. Yet other theorists configure constellations where stacks are partitioned into "core" and "peripheral" platforms (Constantinides et al., 2018).

Two problems we run into when configuring platform ecosystems as "stacks" are that some envision single platforms as entities distinct from the larger digital and social infrastructures through which they operate, and some still presume the possibility of separating corporate from state interests, even though they appear increasingly difficult to disentangle in the new platform order. As Langlois and Elmer (2019) have convincingly argued, tech giants are moving away from the enclosed platform model toward building a data-based infrastructure that allows them to take over the running of cities, transportation, communication, retail, and so on. While doing so, they are "claiming the need not to be subjected to public regulation because they are breaking new grounds, in effect demanding a new state of 'permissionless innovation' to shape our conditions of existence" (Langlois and Elmer, 2019: 248).
For platform governance, such transformation is problematic not only because these constellations evade existing regulatory frameworks but also because they defy the very economic and legal concepts in which they are grounded-firms, markets, consumers, infrastructures, as well as states, citizens, and public and private sectors. Moreover, not all platforms are equal, and they are not "stacked" randomly. Some are more equal than others as platform ecosystems are organized hierarchically and interdependently. In sum, the "stack" may no longer be adequate to imagine the complex dynamics underlying the system as a whole (Donovan, 2019).

Therefore, we propose to move away from imagining platforms as distinct entities, cumulated in "stacks," toward envisioning platformization as an evolving dynamic process, propelled by human and nonhuman actors. Platformization pertains to "the interpenetration of the digital infrastructures, economic processes, and governmental frameworks of platforms in different economic sectors and spheres of life" (Poell et al., 2019: 6). Favoring a combined STS and political economy approach, we try to understand how sociotechnical systems and political-economic actors (firms, states) build symbiotic relationships to create connective value and develop coordinating power. The impact of platformization has already been documented with regard to the Web as such (Helmond, 2015), to cultural production (Nieborg and Poell, 2018), and to mobile app systems (Nieborg and Helmond, 2019). The next section argues how a new metaphor, the platformization tree, can be used as a prism for disentangling complex platform ecosystem dynamics.

The platformization tree

To envisage the platform ecosystem's hierarchical and interdependent nature, we imagine a tree that consists of three interconnected layers: the roots of digital infrastructures all leading to the trunk of intermediary platforms, which branches out into industrial and societal sectors that all grow their own twigs and leaves. The tree metaphor emphasizes how platforms constitute "living" dynamic systems, always morphing and hence coshaping their species. Just as air and water are absorbed by leaves, branches, and roots to make the tree grow, platformization is a process in which data are continuously collected and absorbed. Data (knowingly) provided and (unknowingly) exhaled by users form the oxygen and carbon dioxide feeding the platform ecosystem. Due to the ubiquitous distribution of APIs, the process of absorbing data and turning them into nutrients-a metaphorical kind of photosynthesis-stimulates growth, upward, downward, and sideways. Each tree is part of a larger ecosystem-a global connective network driven by organic and inorganic forces. Resisting the temptation to stretch this metaphor further, we instead concentrate on the three layers that constitute its basic shape: roots, trunk, and branches (Figure 1).

The roots of the tree refer to the layers of digital infrastructure which penetrate into the soil; roots can run deep underground and spread widely, connecting trees to one another. Roots signify the infrastructural systems on which the Internet is built-cables, satellites, microchips, data centers, semi-conductors, speed links, wireless access points, caches, and more. Material infrastructures enable telecommunications and networks like the Internet and intranets to send data packages.
Online traffic is organized through coded protocols, such as the TCP/IP protocol that helps identify every location with an IP-address, and a domain name system (DNS) for proper routing and delivery of messages. The World Wide Web is one such protocol system which helps route data seamlessly across the net. Internet service providers (ISPs) can provide the infrastructure on which clients can build applications, such as browsers. All separate root elements contribute to a global digital infrastructure-a structure on which many companies and states depend to build their platforms and online services. The Internet itself was originally meant to serve as a "utility," independently organized and managed, indifferent to various geopolitical and corporate interests, to guarantee the global fluidity of Internet traffic. For instance, the Internet Corporation for Assigned Names and Numbers (ICANN) represents the ideal of multi-stakeholder governance, an ideal that has come under pressure as companies and states are extending their powers to appropriate the "deep" architecture of the Internet.1 On the one hand, tech firms privatize vital parts of the infrastructure (Malcick, 2018; Plantin et al., 2018). Google, for instance, invested billions of dollars in data centers across the globe and underwater cables for data distribution. On the other hand, states and governments increasingly seek control over digital infrastructures, illustrated by American government interventions in Huawei's efforts to develop 5G networks in Europe.

While control over the "deeper" infrastructural layers has been privatized and politicized, we can see similar struggles in the layers situated in the gradual changeover between the roots and the trunk of the tree, for example consumer hardware and cloud services. Hardware devices such as mobile phones, laptops, tablets, digital assistants (Siri, Echo, Alexa), and navigation boxes allow for Internet activity to spread among users. Inside these devices, hardware components-including hubs, switches, network interface cards, modems, and routers-are tied to proprietary software components such as operating systems (iOS, Android) and browsers (Chrome, Explorer, Safari). The architecture of cloud services forms a blueprint for data storage, analytics, and distribution; control over cloud architecture increasingly informs the governance of societal functions and sectors. Amazon Web Services, Google Cloud, and Microsoft Azure dominate this layer, and while states and civil society actors become increasingly dependent on them, public control over their governance is dwindling. Blurring the boundaries between "digital infrastructure" and "intermediary services" allows for further incorporation.

The intermediary platforms in the trunk of the tree constitute the core of platform power, as they mediate between infrastructures and individual users, as well as between infrastructures and societal sectors. The stack at this level includes identification or login services (FB ID, Google ID, Amazon ID, Apple ID), pay systems (Apple Pay, Google Pay), mail and messaging services (FB Messenger, Google Mail, MS Mail, Skype, FaceTime), social networks (Facebook, Instagram, WhatsApp, YouTube), search engines (Google Search, Bing), advertising services (FB Ads, Google), retail networks (Amazon Marketplace, Prime), and app stores (Google Play, Apple). This list is neither exhaustive nor static.
None of these intermediary platforms is essential for all Internet activities, but together they derive their power from being central information gateways in the middle, where they dominate one or more layers in the trunk, allowing them to channel data flows upward and downward. What characterizes intermediary services is that (1) GAFAM platforms strategically dominate this space while there is hardly any nonmarket or state presence and (2) these super-platforms are highly interdependent, governing the platform ecosystem through competition and coordination. In the next section, we will explain in more detail how power is exercised from this intermediary level.

When we move to the branches that sprout out of the trunk of the tree, we may see their volume expanding and diversifying into smaller arms and twigs, allowing for foliage to sprawl infinitely toward the sky. The branches represent the sectoral applications which are built on platform services in the intermediary layer (trunk) and enabled by the digital infrastructure (roots). The numerous branches of the tree represent the many societal sectors where platformization is taking shape. Some sectors are mainly private, serving markets as well as individual consumers; others are mainly public, serving citizens and guarding the common good. In principle, sectoral platforms can be operated by companies-including the Big Five, incumbent (legacy) companies, and (digital native) startups-but also by governmental, nongovernmental, or public actors (Van Dijck et al., 2018). In practice, we have seen an increasing number of corporate players taking the lead in sectoral data-based services, even if these sectors are predominantly public (e.g. health, education).

The platformization tree exemplifies a complex system that comprises a variety of human and nonhuman actors, which all intermingle to define private and public space. Unlike the "stack" metaphor, the platformization tree shows the order and accumulation of platforms is not random but the result of invisible forces shaping the tree into its current form: from the circulation of its resources via its root structure and intermediary trunk all the way to feeding its twigs and foliage. As the tree grows bigger and taller, the influence of private actors operating platforms across all levels and layers of the tree is mounting. There is more diversity of players in the branches than there is in the trunk, just as there is (still) more diversity in the infrastructural roots than there is in the trunk. In the next section, we will focus on the dynamics of platformization by scrutinizing the privileged position of intermediary platforms as "orchestrators in the digital ecology value chain" (Mansell quoted in Lynskey, 2017: 9).

The dynamics of platformization

The process of seamlessly stitching infrastructural, intermediary, and sectoral platforms together causes distinctions between these levels to be obliterated. However, emphasizing their dissimilarities and hierarchy is key to seeing how and why some platforms have obtained rule-setting and coordinating power (Castells, 2009). Firms that operate various platforms across all three levels have more operative power; by fortifying their position in the trunk layer, they develop and consolidate controlling power over the system as such. What characterizes intermediary platforms is that they form "obligatory passage points" between the roots and the branches (Callon, 1986).
They can mediate all kinds of interactions between (end) users and service suppliers; they can accumulate intelligence from data and content flowing between various layers; they can transform data flows into monetary value; and they can apply gatekeeping and moderation activities to data and content flows. Owners of critical intermediary platforms are afforded extraordinary power to set the rules for data trafficking in the global network as such. The Big Five tech companies owe their concentration of power to at least three types of platformization dynamics: the vertical integration of platforms, the infrastructuralization of intermediary platforms, and the cross-sectorization of platforms. We will explain each type in more detail below.

Vertical integration of platforms

As said earlier, the distinction between infrastructural, intermediary, and sectoral platforms is increasingly fluid, allowing data flows to move across the connective system. Platformization pushes control over data flows in two directions: from the trunk downward toward the infrastructural layer as well as upward toward the branches of sectoral platforms and built-on applications. Plantin et al. (2018) have called the first part of this process the "platformization of infrastructure"; the Internet's digital infrastructure is increasingly transformed into a service model, illustrated by the integration of cloud services, hardware configuration, and analytics services into the intermediary platforms. Think, for instance, of Apple Pay, which has a built-in NFC chip for exclusive use; other pay systems or rivaling services cannot deploy the hardware built into the iPhone. Hardware devices, computer chips, and cloud architectures are hence "platformized" to consolidate a company's position as an intermediary.

Platformization also pushes upward, spilling out from the trunk into a wide variety of sectors. A continuous influx of user data happens via the leaves; sucked up by twigs and branches, these data can be seamlessly transported toward the trunk. Looking at the public sector of primary education, we can illustrate how this works. Google Suite for Education is a software package based on personalized learning algorithms designed to bring spelling and math tools into the classroom. The app package is built into Chromebook laptops, which are also equipped with Google Search, Google Login, Gmail, and so on. Vertical integration of platforms across the (de)fault lines of companies allows data streams to flow seamlessly between root, trunk, and branches, hence facilitating information flows to move upstream and downstream, channeling users into the proprietary Google stack. Thus, the dependence of schools on proprietary information systems effectively funnels pupils' data, generated in a public context, into a proprietary data flow controlled by one corporation's platforms. Vertical integration, often promoted as the seamless integration of platforms to facilitate user convenience, in practice results in the privatization of data flows causing user lock-in and vendor lock-in (Van Alstyne et al., 2016). Although we can still witness a lot more diversity of public and private actors at the sectoral level than at the intermediary level, the growing presence of the Big Five platforms in many branches of the tree marks society's increased dependency on them.
Vertical integration of platforms not only obfuscates the boundaries between infrastructures and sectors, private and public platforms; it also undermines the development of independent platforms, adding to a privatized Internet where "information may never have to journey across public infrastructure" (Srnicek, 2017: 113).

Infrastructuralization of intermediary platforms

Intermediary platforms are increasingly moving toward becoming infrastructures for users-a process Plantin et al. (2018: 306) have called the "infrastructuralization of platforms." We commonly locate infrastructures at the root layer; however, intermediary platforms in the trunk increasingly manage to obtain infrastructural status (Plantin and De Seta, 2019). Mark Zuckerberg has often called Facebook a "social" infrastructure; with over two billion users, the social network has become a vital obligatory passage point for data flows passing through the trunk. Through its "family of apps" (WhatsApp, Instagram, Messenger, Login, Advertising, Analytics), Facebook is garnering a central position in the middle where it can connect content and data flows in the invisible backend. This horizontal movement toward building a denser presence across one or more layers in the trunk strengthens a tech company's position in the system as a whole.

The intermediary level of the American ecosystem, operated by a handful of major players, constitutes a self-organized and self-governed core. Being part of the trunk is crucial for companies to exert power upward, downward, and sideways. As long as data and content flows keep passing through the trunk-flows that can be exclusively mined, processed, combined, and repurposed-their operators define the tree's shape. A bigger and taller trunk layer means more control over the tree; fewer operators in the trunk means more efficient coordination. The intermediary level is rather exclusive and restricted. If you need access to a large number of users, you have to go through Facebook; for selling products to mass customers, you are dependent on Amazon's retail network; for downloading apps, Apple's and Google's app stores are unavoidable bottlenecks; to find information, you have to pass through Google's or Microsoft's search engine territory. But the Big Five are also interdependent: Apple's iCloud is built on Amazon Web Services and Microsoft's Azure; and Facebook is dependent on Apple and Google for allowing its platforms in their app stores. Interdependencies turn the Big Five platforms into "coordinating competitors"-a form of "coopetition" that easily escapes scrutiny by regulatory agencies who tend to focus on individual firms (Daidj and Egert, 2018; Kostis, 2018).

Cross-sectorization

Platformization becomes even more pervasive as companies expand their influence across sectors. "Cross-sectorization," as we call this process, allows companies to collect and connect personal information and behavioral data from multiple sectors. For instance, Amazon is concomitantly nesting itself in the medical sector, the transportation sector, and the insurance sector. In 2018, Amazon built a software platform for searching medical files (Amazon Comprehend Medical) and acquired pharmaceutical giant PillPack. Partnering with two other companies, it also started an insurance unit (Haven) to offer 1.2 million employees healthcare insurance.
Cross-sectorization allows for connecting not just services-Amazon could grow into a one-stop-shop for diagnostics, and ordering and delivery of medication-but also for controlling information about users through combining their data flows. The more data flows can be connected, the more information can be derived from the system and fed back into it. Data flows are the oxygen feeding algorithmic intelligence, hence providing the nutrients for value creation.

Vertical integration, infrastructuralization, and cross-sectorization are the main dynamics that boost platformization. All three dynamics point toward power concentration in the system's middle; the Big Five platform operators are "trunking the tree" into a gigantic Californian sequoia by growing it thicker and taller-thicker by swelling its ringed structure while making it an exclusive centralized space, and taller by enlarging the trunk upward and downward, incorporating the roots and the branches while erasing the distinctions between them and also obliterating the boundaries between market and nonmarket sectors. The power of platformization emanates from Big Tech companies' ability to engage in an unprecedented form of competition-cum-coordination, particularly via their intermediary platforms. They attain a precarious balance by carving out spaces for their own platform functionalities, while opening up to rivals in other areas; by coordinating online space with other major players while competing in other segments; and by integrating their own platforms vertically while maintaining competition in "oligopolistic" platform markets (Dolata and Schrape, 2018). The lens of platformization dynamics allows us to see how regulatory practices may apply to various levels and various firms, not in isolation but in conjunction, which brings us to the question: What makes platform ecosystems so difficult to govern and why is platformization seemingly impervious to regulatory forces?

Governing the unruly status of intermediary platforms

Legal intervention in the current ecosystem is complicated, particularly due to the slippery ontology and unruly status of intermediary platforms. They constitute a vague and impermeable layer due to their "in-betweenness," a liminal position pertaining both to their functionality and to the status of their operators, commonly called "information companies" or "tech firms." Tech companies deliberately push their platforms to vacillate between sectors and infrastructures, between markets and nonmarkets, between private and public interests, between a marketplace for goods and services and a marketplace of ideas, while adopting features of both. Moreover, they exert unprecedented power over people's lives, affecting autonomy and freedom through imposing their architectural choice design upon users-powers that were previously assigned to state actors in charge of shaping governance institutions and rulings. Such hybrid positioning poses serious challenges to regulators and lawmakers, who are bound to act within the available frameworks (e.g. competition law, privacy law, antitrust law, fundamental rights law), while other legal regimes pertain to governing sectoral responsibilities (e.g. banking, media, or education) or to infrastructures (e.g. public utilities vs private infrastructures). Each of these legal frameworks has a limited scope and reach, commonly focusing on single actors (e.g. firms, markets) and arguing in the private interest of consumers or in the public interest of citizens.
Looking at two different examples-one from antitrust law and the other from information law-we can illustrate how legal scholars have used compartmentalized frameworks to rein in the "unruly" status of intermediary platforms. Lina Khan (2016), taking the perspective of competition and antitrust law, meticulously analyzes Amazon's conduct. She demonstrates how the firm's ability to observe clients' usage of its web services (AWS) allows it to detect and stymie the success of upcoming firms. Connecting data flows derived from AWS to those of Amazon Marketplace and onto delivery services and retail products, Khan argues how Amazon distorts the level playing field, exploiting exclusive knowledge from data flows to prioritize its own products and services. To counterbalance the firm's power, she first proposes a "prophylactic ban" on vertical integration by driving a wedge between the exploitation of online infrastructures and sectoral services. Khan's second suggestion is for regulators to apply certain common carrier obligations and duties onto certain crucial platforms-conditions that traditionally apply to public utilities. This can only work, though, if a new legal definition of "essential facilities" justifies a restricted functionality (Khan, 2016: 801). Staying within the parameters of markets and single companies, Khan keenly illuminates aspects of Amazon's anticompetitive structure and conduct while underscoring deficiencies in the current legal doctrine (Khan and Vaheesan, 2017).

A similar case exposing the unruly status of intermediary platforms originates from the angle of information and media law. Philip Napoli (2019) argues that Facebook adopts a double legitimacy as a public square and a marketplace while avoiding public accountabilities. The company recuses itself from the liabilities of the news sector, setting its own rules with regard to filtering out hate speech and fake news. Facebook owes its Janus-faced status to a tactical maneuver which allowed the company to evade the limited public interest protections inscribed in the US legal system. Section 230 of the 1996 Telecommunications Act grants online content providers immunity from various forms of legal liability for "content produced or disseminated on the platform by third parties even if they actively engage in various forms of editorial selection, filtering, or curation" (Napoli, 2019: 158). This analysis leads him to conclude the following: "The fact that the public-interest standard has no regulatory foothold in either the structure or behavior of social media platforms means that we have a growing disconnect between regulatory motivations and rationales that needs to be addressed" (Napoli, 2019: 153).

Arguing from different legal perspectives, Khan and Napoli both come to the conclusion that narrow regulatory frameworks inhibit governments' abilities to regulate the larger societal interests at stake in these individual cases concerning Amazon and Facebook. Their insights can hardly be considered in isolation, though, and this is where the tree metaphor might offer new imaginary space. If we approach platformization more expansively, we start seeing how it promotes vertical integration, infrastructuralization, and cross-sectorization across all levels and layers of the ecosystem, turning it into a constellation that fuses corporate, public, and civic interests.
Second, it helps us notice that platform power lies not with individual companies, but in the coordinating, rule-setting power of the connective ecosystem as a whole. And third, the metaphor may also help us understand ecosystems as (geo-)political-economic constructs which are interconnecting various layers at all three levels. We will elaborate on each of these arguments below.

To start with the first, looking at the Amazon and Facebook cases through the platformization-tree lens helps focus on the effects of their shared dynamics. Amazon's vertical integration of data flows, its infrastructuralization of services in the trunk (AWS), as well as its extensive cross-sectorization (medical, transport, insurance, etc.) consolidates its powerful position, which allows it enormous control and leverage over the datafied ecosystem as it evolves over time. Inadvertently feeding the metaphor, CEO Jeff Bezos once said in an interview: "We are comfortable growing seeds and waiting for them to grow into trees" (Anders, 2012). Facebook, for its part, primarily "trunks the tree" by merging data flows from platforms that have a marketing purpose (Advertising) with those primarily serving political information, public deliberation, and interpersonal communication (Facebook, WhatsApp, Instagram, Messenger). Similar mechanisms can be identified in how Google, Apple, and Microsoft-each in their own distinct way and yet strikingly similar-operate their platforms across all three levels, revealing a commanding pattern. While quite a number of scholars have properly addressed the respective horizontal, vertical, and cross-sectoral envelopment strategies deployed by individual firms, few have pursued a comprehensive approach to platformization across all layers (Dolata and Schrape, 2018). The tree might help envision why the ecosystem is no longer a collection of separate "stacks"-neatly divided into infrastructural and sectoral, public and private platforms-but has morphed into its current tiered "trunked" shape. If public interests become virtually dependent on private infrastructures while state or civil representatives have little sway over the conditions of their architecture, affordances, and functionalities, the information ecosystem gradually assumes a monocratic status.

Second, the tree metaphor helps shift the focus from individual companies running multiple platforms in a competitive market to a set of collaborating competitors that manages to standardize the technical and social rules for all online traffic. Last year, Mark Zuckerberg called the proposal to break up Facebook, Google, or Amazon an "existential threat" to these companies while failing to change the system "because now the companies can't coordinate and work together" (Stevens, 2019, emphasis added). Only those platform operators who have the ability to deploy data flows upstream, downstream, and side-stream are able to jointly control and organize the information system as such. Platformization works to their advantage when tech companies can align their crucial gatekeeping and monetizing functionalities across infrastructures and sectors, sustaining their proprietary data flows without assuming the costly implications of civic governance. While public and civil society actors are still present in the root and branch layers, they hardly occupy any space in the trunk that grows thicker and taller, diminishing the egalitarianism and diversity of actors operating within the system.
The most compelling argument used in favor of allowing a corporate "oligopoly" to run an ecosystem is that it allows for a "frictionless" user-consumer experience (Smyrnaios, 2018). A forceful argument against it is that the seamless system is virtually impermeable to outsiders-be it other companies, governments, nongovernmental actors, or citizens. Platformization dynamics shape the tall and thick trunk of the Californian sequoia, hence stimulating the growth of a monoculture rather than promoting a diverse ecosystem.

Finally, the tree metaphor allows insight into the political-economic dimensions of globally interconnected platform ecosystems, which can hardly be viewed separately from their sociotechnical affordances. The American GAFAM-system and the Chinese BAT-system are both dominant platform ecosystems. In spite of their ideological differences, the two species are remarkably similar: both the Californian sequoia and the Chinese bamboo tree have developed sizable tall trunks; both blend state and corporate interests across the roots, trunk, and branches into seamlessly integrated services. Their striking sociotechnical similarities enable widespread economic entanglement. As mentioned earlier, tensions between the three main blocs (United States, China and Europe) rise as fights over geopolitical power become fights over infrastructural power in digital space. These various contests are proof of how platform ecosystems are no longer separate entities but are deeply intertwined-not only at the roots, as illustrated by Huawei's disputed role in developing the 5G infrastructure, but also at the trunk and branches. For instance, while Apple still derives 40% of its app store revenue from Chinese users, it is now pressured by the American government to move some of its hardware production back to the United States. Alibaba's and Amazon's conquests of online retail markets in Europe are crowding out national and local services, triggering resentment. The more societies are governed by and through globally operating connective ecosystems, the more difficult it seems for regulatory bodies to govern their unruly dynamics. The lack of effective national and transnational-let alone global-regulatory frameworks complicates comprehensive governance efforts.

Reshaping governance to promote platform diversity

This section brings us back to Europe's role in reshaping platform governance. Since the world's information systems are predominantly owned and operated by American and Chinese companies, it may fall to European legislators and regulators to act as global agents of change. While they lack the technological prowess of either one system, Europeans control access to a huge continental market which they aim to protect in line with their democratic ideals, but which suffers from policy diffraction. The main question, then, becomes how Europe can move from a patchwork of siloed frameworks toward a comprehensive approach. Or, as Owen (2019) argues, we need a new set of rules to bridge the global governance gap of our time: "The challenges we confront are systemic, built into the architecture of digital media markets, therefore public policy response must be holistic and avoid reactions that solve for one aspect of the problem while ignoring the rest." Given the EU's ambition, cited at the beginning of this article, to design a new digital data infrastructure that will incentivize data flows to be shared and used for the common good, what would be needed to shape such an agenda?
So far, the EU has reacted to the negative consequences of platformization mostly through mobilizing its conventional legal frameworks, for example competition and market regulation, copyright and privacy regulation, and hate speech and misinformation directives.2 Staying within these narrow confines, the EC has taken up concrete cases against individual companies. In recent years, substantial fines were imposed on Google for proven anticompetitive behavior; more recently, the EC started an investigation into whether Amazon is unfairly using data collected by third-party sellers to advance its own price policy; and Apple's app store and its payment system Apple Pay have drawn antitrust scrutiny. The introduction of the general data protection regulation (GDPR) in 2018 infused privacy law and data protection as meaningful parameters into a debate that was previously fueled primarily by market principles. And European governments (i.e. Germany) have called for tech companies to take responsibility for removing unlawful content, such as hate speech and discriminatory utterances. Invoking the plight of tech companies as being on par with those of media organizations, they have mobilized media law to broaden the juridical spectrum, shifting the center of the debate from market power to societal responsibility. Such a shift at least acknowledges that platform power spills beyond market structures, affecting society as such (Nemitz, 2018). As a result, legal disputes that were previously limited to antitrust and competition law have been expanded to include other relevant legal frameworks; they might well be extended further and also pertain to human rights law and public law (Jorgensen, 2019).

Each of these regulatory and policy interventions has sent strong signals of the EC's disapproval of Big Tech's practices, but neither fines nor sweeping single-issue policies have so far resulted in systemic changes. As some scholars have argued, we "need to bring together disparate policy instruments into a coherent overall framework and regulatory architecture" (Tambini, 2017: n.p.). Others contend we should move from "rules-based regulation" toward "principles-based regulation" (Nooren et al., 2018: 282). But this is easier said than done with an EU whose global power may exceed its transnational policy leverage. Instead of pursuing various policies directed at regulating single platforms, individual firms, and isolated issues, Europe might try a novel strategy-one that targets platformization dynamics as a meaningful starting point for regulatory counterpower. European societies have a long tradition of organizing their democracies based on balanced cooperation between market, state, and civil society actors (Mager, 2018). So they should feel particularly compelled to go back to the drawing board and articulate a set of principles that prioritizes the common good by empowering citizens and civil society organizations to help governments design an open and diverse ecosystem.

Again, the platformization tree might provide an interesting metaphorical lens for articulating various sets of normative-legal, technical-ethical, and democratic-civil principles, to name just a few. For starters, normative-legal principles could help define the ontological distinction between infrastructural, intermediary, and sectoral platforms, which in turn may inform various legal conditions to run them in isolation or in conjunction, and state the responsibilities pertaining to their operation.
For instance, if cloud services were labeled digital infrastructures, they could be held to certain standards of neutrality and openness; if they were labeled intermediary platforms, they might be subject to content liability. Similarly, if social network platforms were categorized as sectoral services, like news organizations, they could be held responsible for content in different ways than if they were categorized as infrastructural services, such as telecoms. An urgent normative question arising with regard to platforms now operating at the intermediary level is whether they will be granted a separate status that comes with specific responsibilities and liabilities or whether they will face a binary choice between infrastructural and sectoral regimes. By the same token, technical-ethical principles may be issued to inform the design of data-driven and algorithmically driven systems. The principles of fairness, accountability, interoperability, and responsibility, also known as FAIR principles for scientific data management and stewardship (GO FAIR Initiative, 2016), may be applied up and down all three levels, from infrastructures to sectoral platforms. Pursuing such principles may alleviate power asymmetries, allowing individuals to control their data without losing the benefits of connectivity. For instance, if platform interoperability and data portability were facilitated across platforms, this might create conditions for safeguarding cross-platform traffic while promoting the open exchange of data flows. Mandating such principles at the technical level may also support legal rules aimed at preventing vertical integration and cross-sectorization. Furthermore, democratic-civil principles based on public values could be used to inform a balanced architecture. The platformization tree has shown how the blurring of private, corporate, state, and civic space requires the reassertion of these distinct interests in a democratic online structure. Do infrastructural platforms, such as cloud services, offer public or private services, and what warrants their distinction? If intermediary platforms, such as social networks, are public spaces, what responsibilities and liabilities pertain to their operation? And is the incorporation of data flows generated in public sectors (e.g. schools, hospitals) permitted when they can be connected to data flows outside the public realm? The principle of data sovereignty gives users the ability to control the storage, accessibility, and processing of their own (meta)data. When switching between different platforms, users could be allowed to choose a specific data regime: they can keep their self-generated data private, donate it anonymously to a "data commons," or put their data at the disposal of particular platform operators. Tim Berners-Lee's Solid initiative (2018) exemplifies how such a set of principles may inform a platform's architecture; a conceptual sketch of such data regimes follows at the end of this section. It is beyond the scope of this article to provide a full description of sets of principles; we merely want to illustrate how a new imaginary may help design an open and diverse platform ecosystem (Gorwa, 2019). However, it should be clear that articulating such principles may shape a species different from the Californian sequoia or the Chinese bamboo tree. The European tree does not have a trunk that grows taller and thicker fed by proprietary data flows, but rather a "federated," decentralized shape.
It features switching nodes between and across all levels and layers, allowing users to change between platforms and to define at each point how their data may be deployed. Such a tree may help grow a different kind of ecosystem, one that allows for more variety, openness, and interoperability at all levels (Figure 2). Crucial to reshaping the ecosystem's architecture is maintaining diversity at the infrastructural, intermediary, and sectoral levels. Indeed, European nations and the EU should be concerned about protecting public values and interests at all three levels, while carving out space for independent institutions and civil society actors to operate independent platforms. In 2019, German Chancellor Angela Merkel called for a European public cloud service and for setting standards of cloud computing based on public values such as privacy, security, and democratic control. The recent German-French initiative GAIA-X aims to build a digital infrastructure based on principles of data sovereignty, public accountability, interoperability, and decentralization (Federal German Ministry of Economic Affairs, 2019). Both actions signal the acutely felt need to reshape the system's architecture to reflect European norms and values. Instead of adding to the geopolitical tension, European policy-makers could exploit their relative position as outsiders and redirect their regulatory efforts to counter the adverse effects of platformization dynamics. Growing a diverse and sustainable platform ecosystem requires a comprehensive vision; the tree allows us to visualize a platform constellation that comprises multiple levels, visible and invisible, underground and above the surface. Allowing a handful of tech companies to define the principles of a market-driven ecosystem grants them all rule-setting and governing power over the world's information ecosystems. Focusing on single firms, markets, or individual platforms will not lead to profound, systemic changes. We need to see the forest for the trees in order to understand how to effectively govern their connective structures hidden in layers of code. The tree, although merely a metaphor, expresses the urgency of diversifying the platform ecosystem in order to keep it sustainable. Without diversity, we cannot grow a rich, nutritious forest; without a variety of actors with distinct and respected societal roles, we cannot control its unbridled growth; and without a set of principles, we cannot govern its dynamics. Changing a system starts with vision and visualization.
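To make the idea of user-selected data regimes more concrete, the following is a purely conceptual sketch in Python. It is not the Solid protocol or any existing API; every name and structure in it is hypothetical, meant only to illustrate how the three regimes described above could be encoded:

```python
from dataclasses import dataclass
from enum import Enum, auto

class DataRegime(Enum):
    """The three user-selected regimes sketched above (hypothetical model)."""
    PRIVATE = auto()    # self-generated data stays with the user
    COMMONS = auto()    # donated anonymously to a shared "data commons"
    OPERATOR = auto()   # put at the disposal of chosen platform operators

@dataclass
class DataRecord:
    owner: str
    payload: dict
    regime: DataRegime
    granted_operators: frozenset = frozenset()

def may_access(record: DataRecord, requester: str) -> bool:
    """Decide whether a requesting party may read a record."""
    if record.regime is DataRegime.PRIVATE:
        return requester == record.owner
    if record.regime is DataRegime.COMMONS:
        return True  # readable by anyone, but anonymized on export
    return requester in record.granted_operators

def export_for(record: DataRecord, requester: str):
    """Return the view of the record a requester is allowed to see, if any."""
    if not may_access(record, requester):
        return None
    if record.regime is DataRegime.COMMONS:
        # strip the owner identity before donating to the commons
        return {"payload": record.payload}
    return {"owner": record.owner, "payload": record.payload}
```

The point of the sketch is only that the regime travels with the data: a switching node between platforms could consult `may_access` at each hop, rather than leaving access decisions to any single operator.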
Right Upper Lobe Torsion after Right Lower Lobectomy: A Rare and Potentially Life-Threatening Complication An 84-year-old woman was referred to our institution with suspected right lung cancer. Subsequently, she underwent thoracoscopic right lower lobectomy without mediastinal lymph node dissection. Postoperatively, she complained of dyspnea and developed arterial oxygen desaturation after 12 h and acute respiratory failure (ARF). An emergency chest computed tomography revealed right upper bronchial stenosis with hilar peribronchovascular soft tissue edema because the middle lung lobe had been pushed upward and forward and the right upper lung lobe had twisted dorsally. Emergency bronchoscopy revealed severe right upper bronchial stenosis with an eccentric rotation and severe edema. The bronchial stenosis was successfully treated with glucocorticoids and noninvasive positive pressure ventilation for ARF. Introduction Lung torsion after lobectomy is a potentially life-threatening complication due to the possible development of acute respiratory failure (ARF). Here we report the case of a woman who underwent thoracoscopic right lower lobectomy without lymph node dissection for advanced p-stage IIB lung cancer and subsequently developed right upper lobe torsion causing ARF 12 h postoperatively. Upper lobe torsion immediately after right lower lobectomy is a rare, potentially life-threatening complication. Case Report An 84-year-old woman with angina, diabetes mellitus (DM), hypertension, and Alzheimer's disease was referred to our institution for suspected right lung cancer. Physical examination revealed the following: body height, 150 cm; weight, 68.4 kg; and body mass index, 30.4. Chest computed tomography (CT) revealed a 1.8 cm nodular lesion with an ill-defined margin in the right lower lobe, suggesting lung cancer without metastasis (Figure 1(a)). Three-dimensional CT revealed normal bronchial anatomy (Figure 1(b)). Her preoperative vital capacity was 1.77 L as assessed using a spirogram, and the forced expiratory volume in 1 s was 1.35 L. Subsequently, we performed thoracoscopic right lower lobectomy without mediastinal lymph node dissection. The anesthetic and operative times were 189 and 92 min, respectively, with minimal blood loss. The total amount of intraoperative fluid replacement was 1000 mL. The final pathological finding was adenocarcinoma with hilar lymph node metastasis diagnosed as pT1bN1M0 (p-stage IIB according to the 8th IASLC classification criteria) [1]. Extubation was safely performed in the operating room, and she was monitored in the intensive care unit. However, postoperatively, she complained of dyspnea without chest pain and developed arterial oxygen desaturation 12 h postoperatively. Oxygen saturation decreased to 86% despite the administration of 10 L/min oxygen, corresponding to a PaO2 of 54 mmHg. An emergency chest CT revealed right upper bronchial stenosis with hilar peribronchovascular soft tissue edema (PSTE) because the middle lung lobe had been pushed upward and forward, and the right upper lung lobe had twisted dorsally (Figures 2(a) and 2(b)). A three-dimensional CT scan showed severe bronchial stenosis (Figure 2(c)). Emergency bronchoscopy revealed severe right upper bronchial stenosis with an eccentric rotation and severe edema (Figure 2(d)). Echocardiography and electrocardiography revealed a cardiac ejection fraction of 55% and a normal diameter of the inferior vena cava, thus ruling out ischemic heart disease.
Subsequent emergency blood tests revealed normal hepatorenal function and serum albumin levels. She was diagnosed with localized right upper bronchial obstruction with bronchial edema and hilar PSTE due to right upper lobe torsion after right lower lobectomy. There was no evidence of venous congestion, hemorrhagic infarction, necrotic findings, increased pleural effusion, or atelectasis. Therefore, we chose conservative treatment as primary care. ARF was treated using noninvasive positive pressure ventilation for 2 days and 40 mg methylprednisolone injections for 3 days. A follow-up chest CT on postoperative day (POD) 3 revealed improvement of the right upper bronchial stenosis; she subsequently received 30 mg oral prednisolone for 7 days (Figures 3(a)-3(c)). 3D-CT on POD 14 showed counterclockwise rotation of the right upper lung lobe but obvious improvement of the bronchial stenosis (Figure 3(d)). The chest tube was removed on POD 1. She was discharged on POD 16, after recovery. At the 4-month follow-up, she exhibited good health without any evidence of right upper bronchial stenosis. Discussion Lung torsion (LT) is a very rare but potentially life-threatening complication and can be caused by pulmonary resection, tumor, or trauma [2][3][4]. Hennink reported that the incidence of LT after pulmonary resection is less than 0.4% [5]. Among reported cases of lobar torsion after lung resection, middle lobe torsion after right upper lobectomy (41.0%) and left lower lobe torsion after left upper lobectomy (23.1%) predominate; right upper lobe (RUL) torsion has been reported after right anterior segmentectomy (2.6%) and right middle lobectomy (7.7%). The LT-related mortality rate was 8.3% [6]. Here, we describe the first case of RUL torsion causing ARF after right lower lobectomy. Depending on findings such as congestion or infarction, incomplete LT may also require surgical intervention. Particularly in incomplete cases, it is important to rule out other possible causes of bronchial edema. Although bronchial edema is a rare surgical complication of lobectomy, the potential risk of progression to ARF should be recognized. Bronchial edema may be caused by excessive transfusion, cardiac dysfunction, barotrauma to the bronchus due to positive pressure ventilation, or circulating vasoactive mediators due to surgical reactive or ischemic changes. Although cardiac ultrasonography performed by a cardiovascular specialist should ideally have been employed to rule out acute cardiac failure, no evidence of cardiac failure, hepatorenal disorder, or hypoalbuminemia that could cause the edema was identified. Both CT and bronchoscopy are the modalities of choice for a prospective diagnosis of LT. Clinicians should search for inversion of the vascular pattern, congestion, or infarction of the affected lung on enhanced chest CT [6,7]. Bronchoscopy is an alternative examination; the most suggestive bronchoscopic findings include bronchial occlusion and a fish-mouth orifice [5,8]. In this case, these multiple examinations led us to suspect bronchial edema due to incomplete RUL torsion and allowed us to rule out other pathophysiology. Moreover, PSTE develops secondary to conditions such as pulmonary lymphangitic carcinomatosis, lymphoproliferative disorders, hydrostatic pulmonary edema, pneumonia, interstitial pulmonary emphysema, and interstitial hemorrhage [9][10][11].
In this case, the factors exacerbating bronchial edema with PSTE, apart from the reactive change of LT, were suspected to be as follows: (1) ipsilateral lymphatic vessels were damaged by the lobectomy, impairing lymphatic drainage; (2) local capillary permeability in the affected peribronchovascular soft tissue may have increased as a result of the surgical invasion; and (3) DM could affect the pathogenesis of endothelial damage in diabetic micro- and macroangiopathy around the twisted right upper bronchus [12]. Our case had no evidence of venous congestion, hemorrhagic infarction, necrotic findings, increased pleural effusion, or atelectasis. Finally, glucocorticoids and noninvasive positive pressure ventilation successfully treated the patient. Prompt and effective treatment should be provided under strict monitoring to prevent critical pulmonary failure. Conclusion We report a case of right upper lobe torsion causing ARF after right lower lobectomy. This is a rare and potentially life-threatening surgical complication following lobectomy. The patient was successfully treated using glucocorticoids and noninvasive positive pressure ventilation for ARF.
Effect of Urinary Retention on the Satisfaction and Complication Rate in Benign Prostatic Hyperplasia Patients Undergoing Prostatectomy Citation Banakhar MA. Effect of urinary retention on the satisfaction and complication rate in benign prostatic hyperplasia patients undergoing prostatectomy. JKAU Med Sci 2016; 23(3): 23-29. DOI: 10.4197/Med. 23.3.3 Abstract A cohort study from January 2000 till January 2005 included all patients who underwent prostatic surgery with a histopathology report of benign prostatic hyperplasia. Patients were divided into an elective or a retention group. Data on patient demographics, satisfaction, and postoperative complications were collected. Total of 119 patients: retention (n = 30), elective group (n = 89). The retention rate was 25%. There was no effect of retention on postoperative complications (elective = 44%, retention = 41%), p value = 0.826, odds ratio 0.878, CI (0.363-2.124), nor any effect on patients' satisfaction (elective = 54%, retention = 59%), p value = 0.661, odds ratio = 1.256, CI (0.520-3.034). Patients' age and prostate size did not show any effect on postoperative outcome, while the presence of inflammatory cells in benign prostatic hyperplasia (BPH) histopathology showed a positive effect on satisfaction (BPH alone = 47%, BPH + inflammation = 71%), p value = 0.037, and a protective effect on postoperative complications (BPH alone = 45%, BPH with inflammation = 18%), p value = 0.167. Conclusion: The retention rate is comparable to international reports. The presence of inflammatory cells in the benign prostatic hyperplasia histopathology showed a positive effect on postoperative satisfaction. Introduction Acute urinary retention (AUR) presents in 0.4-25% of urological practice and as an indication for transurethral resection of the prostate (TURP) in 25-30%. In the Proscar Long-Term Efficacy and Safety Study (PLESS), acute urinary retention occurred in 3% of the treatment arm and 7% of the placebo arm, while in the Prostate World Study Group (PROWESS) trial acute urinary retention represented 1% in the treatment arms and 2.5% in the placebo arm. In Saudi Arabia, Mosli et al. reported that, up to the year 2000, the rate of benign prostatic hyperplasia (BPH) patients presenting with urine retention reached 57% in some Saudi hospitals, compared to international reports of 30% [1]. Given the doubled incidence of acute urinary retention in our country, we questioned its effect on the postoperative outcome of prostatectomy. We constructed this study to examine the effect of urinary retention on satisfaction and complication rates in patients who underwent prostatectomy. Material and Methodology A cohort study from January 2000 till January 2005 included all BPH patients who underwent prostatic surgery (TURP), bladder neck incision (BNI), or open prostatectomy. Their histopathology was reviewed. We included patients whose histopathology report was benign prostatic hyperplasia with or without inflammation, while malignant histopathology (prostate cancer, atypical small acinar proliferation (ASAP), and prostatic intraepithelial neoplasia (PIN)) was excluded. Patients were divided into an elective group (who underwent prostatic operation and were admitted on an elective basis) or a retention group (who underwent prostatic operation with a history of recurrent or refractory retention). Patients' demographics, prostate size, patient age, histopathology, satisfaction, and postoperative complications were collected.
Patients gave their informed consent, and the study protocol was approved by the institute's ethics committee. Outcome Measurement The study's primary outcomes were postoperative satisfaction and complication rate. Patient satisfaction was assessed using a direct questionnaire (Appendix 1), which was administered to all patients postoperatively. Complications were retrieved from patients' medical files, covering any documentation of at least one of the following complications occurring within 12 months postoperatively in the absence of co-association with other causes. Complications sought included: bladder neck obstruction, failure to void, urge urinary incontinence, retrograde ejaculation, urinary tract infection, urethral stricture, and stress urinary incontinence. The study's secondary outcomes included patient age (recorded at the time of operation), prostate size (assessed preoperatively using transabdominal ultrasound), and BPH histopathology with inflammation, confirmed by the presence of inflammatory cells in the prostate tissue resected during the operation. Surgical Procedure All patients underwent prostate operations (TURP, BNI, or transvesical prostatectomy) performed in standard fashion by different surgeons. All procedures were done under general or spinal anesthesia with prophylactic antibiotics, and all patients were continued on ciprofloxacin 500 mg orally twice a day for 14 days. Exclusion Criteria All patients diagnosed with prostate cancer, or whose TURP-resected tissue histopathology showed any prostate cancer, ASAP, or PIN, were excluded. Statistical Analysis Data were collected using Microsoft Excel 2003 (Microsoft Corp., Redmond, WA, USA). Statistical analysis was performed using SPSS for Windows, Version 15.0 (SPSS Inc., Chicago, IL, USA); cross-tabulation and the chi-square test with risk estimates were used for data analysis. P values ≤ 0.05 were considered statistically significant. Results The total study sample of 119 patients was divided into two study groups: the retention group, including patients who underwent prostatic operation with a history of recurrent or refractory retention (n = 30), and the elective group, who underwent prostatic operation and were admitted on an elective basis (n = 89). The retention and elective groups had comparable mean ages (65 and 63 years, respectively). Procedures performed in the retention group included 93% TURP and 7% open prostatectomy, while in the elective group 84% had TURP, 1% open prostatectomy, and 15% BNI. Histopathology review for the retention group showed 63% with pure BPH histopathology, while 37% included an inflammatory response in the BPH histopathology. The elective group had 84% BPH and 7% BPH with inflammation. Secondary analysis of our study group identified other factors that could affect the postoperative outcome. Patient age and prostate size did not show any effect, with p-values of 0.471, CI (1.56-6.92), and 0.441, CI (2.69-8.32), respectively. In fact, acute urinary retention is an indication for operation in 20-30% of BPH patients. In Saudi Arabia, the incidence of AUR reaches as high as 57%, almost double the international reports of 30% [1]; in our institute, AUR is as high as 54%. Few papers address the role of prostate inflammation in BPH progression, risk of AUR, and response to medical therapy [9][10][11][12]. Chughtai et al. [13] postulated inflammation as the third component in BPH pathogenesis and symptom progression through activation of CD4+ lymphocytes.
In this study, our hypothesis was that AUR has an effect on the patient's postoperative outcome (complications and satisfaction), but the results showed that between the two comparable groups (retention and elective) there was no difference in postoperative outcome. Secondary analysis of prostate size and patient age showed no effect on either postoperative satisfaction or complication rates. The presence of inflammatory cells in the BPH histopathology showed a significant effect on patient satisfaction. We believe that it also has an unexplained protective effect on the postoperative complication rate, but the p value was not significant because of the absence of histopathology in BNI cases (Fig. 3). Conclusion The retention rate in our study is comparable to international reports. A history of urinary retention does not have any effect on postoperative satisfaction or complication rate, while the presence of inflammatory cells in the BPH histopathology showed a positive effect on patients' postoperative satisfaction and a protective effect on prostatic postoperative complications.
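As an illustration of the cross-tab risk estimates reported above, the chi-square test and odds ratio can be reproduced with open-source tools. A minimal sketch in Python; the 2x2 counts are reconstructed from the reported percentages and rounded, so they are illustrative rather than the study's raw data, and the output will not exactly match the published figures:

```python
from scipy.stats import chi2_contingency

# Rows: elective (n = 89), retention (n = 30); columns: complication yes / no.
# Counts are back-calculated from the reported rates (44% and 41%) and rounded,
# so they approximate, but do not equal, the study's raw data.
table = [[39, 50],   # elective: ~44% with complications
         [12, 18]]   # retention: ~41% with complications

chi2, p, dof, expected = chi2_contingency(table)

# Odds ratio (retention vs. elective) computed directly from the 2x2 table.
(a, b), (c, d) = table
odds_ratio = (c * b) / (a * d)

print(f"chi2 = {chi2:.3f}, p = {p:.3f}, OR = {odds_ratio:.3f}")
```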
Deep Sustainability as Care: A Nondual Approach to Environmental Communication ABSTRACT This essay suggests the concept of "deep sustainability" as a philosophical orientation for environmental communication scholars to address not only the empirical but also the ethical and ontological questions associated with sustainability. Drawing on the thoughts of deep ecology and founded in a nondual ontology with origins in perennial wisdom, it argues that in order to create a counterculture to the uncaring neoliberal order, there is a need to substantially increase awareness of the devastating implications of the dualistic discourse inherent to this order. What is required is a new and radically different worldview of "interbeing," rooted in the lived experience of the interconnectedness – oneness – of all life. Extending research in the study of sustainability discourse, this essay contends that it is only when our identity in-group becomes all-inclusive, that is, when duality dissolves, that caring for all beings, be they humans, trees, animals, or other lifeforms, comes effortlessly and with deep – lasting – sustainability as the natural result. It is true that we live in a "world-in-crisis," where human-made environmental catastrophes multiply and relentlessly push human society into what seems to be its endgame (Cottle, 2023). The more humanity insists on doing business-as-usual under the neoliberal regime, even with an alleged "green" twist to it, the worse, and exceedingly more unpredictable, the disasters tend to unfold. It is tempting, to say the least, to give in to despair, emotional fatigue, and apathy, or to coping strategies such as climate denial, conspiratorial thinking, or plain cynicism. Being open-hearted and compassionate simply becomes increasingly difficult, because it is also true that the more we care about the beings of the world, the more it hurts; it feels as if we are living in a "world of wounds" (Lent, 2021, p. 281). But there is still room and reason for hope. Not hope in the form of an ill-founded optimism that in some miraculous way everything will work out fine, but hope based on the acknowledgment of the interconnectedness of all life. This kind of hope entails the rise of an ecological civilization that would lead the way out of the destructive Anthropocene into the much-needed Symbiocene (Lent, 2021). But for this hope to take root, the fundamental understanding of oneness needs to saturate every aspect of human (in)action (Olausson, 2023).
To this end, and drawing on the philosophy of deep ecology (e.g. Cronon, 1996; Macy, 2021; Naess, 2016), I elaborate here on the emerging idea of deep sustainability (e.g. Buriti, 2019; Martin, 2020) in an attempt to conceptualize what I see as the fundamental requirement of an integral sustainability that actually lasts (Olausson, 2023; Olausson, forthcoming). Deep ecology broadly might be understood as a philosophical and ecological perspective that advocates for a fundamental shift in how humans relate to the natural world. It emphasizes the interconnectedness of all living beings and aims to address the root causes of environmental issues by fostering a deeper sense of responsibility and respect for the Earth's ecosystems. Deep sustainability adds to this perspective the direct conceptual linkage to overall sustainability discourse as well as the explicit ontological fundaments of perennial "nondual" wisdom: a worldview of lived interbeing. When the seamless web of life becomes evident, not primarily through an intellectual understanding but through direct experience, it is no longer possible to behave in unsustainable ways. As Eisenstein (2022) suggests in a talk, it then becomes obvious that when harm is done to even a single one of the myriad of constituents that form the indivisible whole of which we all are part, the consequences are universal: "When I understand that my very existence at its core is part of the existence of each ecosystem and species around the world, then I know that whatever happens to them is in a way happening to me." Thus, in order to create a counterculture to the uncaring neoliberal order, there is a need to substantially increase awareness of the devastating effects of the dualistic discourse that is inherent to this order. In other words, to make visible the tendency to interpret and communicate the world in terms of dichotomies such as nature-culture, human-animal, and us-them, all of which create an illusory sense of separation and form the cognitive-discursive justification of the continuous struggle against or exploitation of the "other." Deep sustainability entails a radical transformation, indeed a paradigm shift (Lent, 2021), in human consciousness toward nonduality (e.g. Spira, 2017), that is, the recognition that All is One. The ontological assumption of nonduality forms the backbone of perennial philosophy (Huxley, 1945/2009) and is the main message of a great deal of the world's wisdom traditions, for instance, in the ancient Indian texts of the Bhagavad Gita and the Upanishads, in the Chinese Tao Te Ching, and in the biocentric perspectives of indigenous peoples (Milstein, 2008). The fact that Alain Aspect, John Clauser, and Anton Zeilinger were awarded the 2022 Nobel Prize in Physics for their groundbreaking research on quantum entanglement only testifies to the validity of what has been known for a very long time among such varied and geographically dispersed wisdom traditions.
Obviously, the nondual ontology and its origins in traditional wisdom rather than scientific knowledge may sound both misplaced and controversial in the academic context. But when all other measures, including the outcomes of science, seem to be failing sustainability, we need to look beyond our taken-for-granted assumptions of a world constituted by oppositional and separate phenomena, as well as beyond the traditional academic canon. There is an immense amount of wisdom available in the world with a largely unexplored potential to reveal the very foundation of our sustainability problems (Lent, 2021; Macy, 2021; Olausson, 2023; Wilber, 2000). An obvious first step for this philosophical orientation to take empirical root would be to investigate how (non)duality shapes sustainability discourse itself. Recent research suggests, for example, that the UN's Agenda 2030, with its 17 global sustainability goals (SDGs), lacks not only communication perspectives but also a fundamental acknowledgement of interconnectedness, feeding into the ideology of anthropocentrism (Kopnina, 2019; Martin, 2020). Overall, the ontological outlook of nonduality, with its integral take on our shared existence, would open new paths for ideology-critical research on discursive "otherization." It provides communication scholars with a firm foundation to anchor the argument that duality and polarization in various discursive contexts not only come with harmful sustainability consequences, but are in fact entirely untrue. In sum, finding solutions to the grave sustainability challenges caused by the exploitative human culture and its destructive and uncaring economic system requires a profound reassessment of the "nature we carry inside our heads" (Cronon, 1996, p. 22). If we want to steer our civilization on another course … it's not enough to make a few incremental improvements here and there. We need to take a long hard look at the faulty ideas that have brought us to this place and reimagine them. We need a new worldview, one that is based on sturdy foundations. (Lent, 2021, p. 4) In this transformational process, I suggest that the concept of deep sustainability provides environmental communication scholars with a platform to explore more deeply not only the empirical but also the ethical-philosophical questions associated with sustainability (Olausson, 2023; Olausson, forthcoming). The identity problem Among all dualisms, the one between nature and culture has received the most attention in the research field of environmental communication (e.g. Carbaugh & Cerulli, 2013; Cronon, 1996; Olausson & Uggla, 2021; Pezzullo, 2007). In short, this widespread dualism involves the notion that nature is where humans are not (Olausson, 2020). The thought figure that humanity is external to nature is discursively reproduced, and nature is turned into a distant "other." This does not mean, however, that there is no variety in how the relationship is represented. Sometimes nature is portrayed as subordinate to humanity and its needs; nature then becomes the object to master and exploit in service of humanity, the subject (Uggla & Olausson, 2013), whereas at other times it is depicted as superior to human culture and assigned the role of active subject, which means that nature must be feared, obeyed, and served by humanity, the object (Olausson & Uggla, 2021). Thus, the nature-culture dualism prevails regardless of whether the argument is to serve nature or to exploit it (Pollan, 1991).
For, of course, to speak of man [sic] intervening in natural processes is to suppose that he might find it possible not to do so, or to decide not to do so. Nature has to be thought of … as separate from man, before any question of intervention or command … can arise. (Williams, 2005, p. 76) Elsewhere (Olausson, 2023; Olausson, forthcoming), I have argued that the nature-culture dualism is an excellent example of how duality forms a tyrannical (and ideological) structure of thought and language. It keeps us fettered in existential separation and prevents us from perceiving and experiencing the seamless whole that humanity forms together with so many other lifeforms. The risk is obvious that if we do not (re)turn attention to the interconnectedness of all life, it will not be possible to solve the global mega-problems we are facing for good. Deep sustainability will stay out of reach, as it were. To further develop this line of thought, deep sustainability is basically a matter of identity growth in terms of taking another, and indispensable, evolutionary step as human beings, so that our perceived collective identity is no longer dependent on the formation of an out-group, that is, on the construction of the "other" (e.g. Mouffe, 2005; Olausson, 2005). It is only when nature becomes part of our identity in-group, when duality dissolves, that caring for all lifeforms comes effortlessly. The ecological crisis, or Gaia's main problem, is not pollution, toxic dumping, ozone depletion, or any such. Gaia's main problem is that not enough human beings have developed to the postconventional, worldcentric, global levels of consciousness, wherein they will automatically be moved to care for the global commons. (Wilber, 2000, p. 137) Thus, a prerequisite for deep sustainability to emerge is that the anthropocentric identity figure of thought and language and the interrelated dualistic relationship between nature and culture erode. The experience of intimate interconnection with all lifeforms presupposes the profound recognition that we live in a more-than-human world (Abram, 1997) in which human and non-human lifeforms exist on perfectly equal terms. The far-reaching effect not only on environmental but also on social sustainability is a logical consequence of the evolutionary identity shift toward nonduality. Partly because Abram (1997) probably has a strong point when suggesting that the ongoing devastation of non-human environments and the extinction of non-human lifeforms are also a cause of the division, disharmony, and lack of trust in human relationships. Partly because social sustainability is driven by a vibrant and inclusive democracy, where issues of identity are crucial. From the perspective of radical democracy, for example, Mouffe (2005) argues that identity struggles and communicative conflicts are defining elements of a well-functioning democracy, because at the very moment consensus seems to have been reached, there is always some less resourceful identity group that has been oppressed.
The conflict perspective on democracy, as well as the assumption that identity is contingent, is entirely valid at this point in time, when differentiation toward the "others" seems to be necessary in order to shape and establish our contextually determined in-groups (Olausson, 2005). However, along with the expansion of consciousness toward nonduality, we might discover our essential identity, which is shared by all beings, and, in turn, treat all lifeforms, including those in human form, with respect and care. We then begin to realize, and above all experience, that all are perfect manifestations of the same web of life as we belong to, with profound effects on democracy as a result. The pre-conceptual identity When talking to people about nonduality (scholars and laypersons alike), I usually meet a bit of resistance: "Hey, this is the foundation of language; it's built on duality, and as soon as we communicate, it manifests. How can we possibly cope without dualisms?" This question is entirely valid and could be approached from at least two angles, the first originating from critical theory, which is a familiar strand within communication research and emphasizes the dialectical relationship between language and society. This means that, on the one hand, language nurtures separating structures such as the nature-culture dualism, as communication on "autopilot" often does. But, on the other hand, communication has the amazing potential to contribute to change. In the same way that language and communication are shaped by society and culture, they too can be revitalized and transformed by communication (Fairclough, 1995). However, for this potential to be realized, for us to be able to influence structures that chain our thinking and communication in inert and dualistic norms, those elements of our language that we perhaps take the most for granted must be "denaturalized" (Machin & Mayr, 2012). When we become aware of the discursive nature-culture rift, as the topical case in point, we simply do not have to take it as a natural given anymore. Elsewhere (Olausson, 2023; Olausson, forthcoming), I have suggested that in order to facilitate denaturalization of the nature-culture dualism, its dissolution could be integrated on a deeper and more intuitive, even spiritual, level. This leads to the second angle from which voices sceptical of the realization of nonduality could be met. The answer is: clear your head of concepts, because they obstruct access to the wordless web of life! By letting naming be, by not defining an object with a specific linguistic sign, we can open to an experience of union that transgresses seeming boundaries. Intercultural studies of discourse have shown that the very absence of human communication and naming is crucial for the experience of connection with nature. Carbaugh and Boromisza-Habashi (2011, p. 114), for example, describe this state of nonduality as "an expressive coexistence with nature, albeit one of an unnamable kind." Further, according to Milstein (2008), any attempt to verbally reproduce such deeply meaningful "humanature" experiences actually becomes a verbal encapsulation of the experience, which leads to a separation from the nature aspect we are trying to conceptualize.
Hence, a genuine experience of interconnectedness between different lifeforms cannot be obtained through the crude communication tools we have access to through language, and when we try to conceptualize experiences of interbeing, the effect is often the opposite, namely a reproduction of duality. But when communing wordlessly, we simply let go of the anthropocentric identity and instead embrace an identity on the pre-conceptual level. The requirement for this sense of interconnection to happen is "to preserve the silence within, amid all the noise. To remain open and quiet, a moist humus in the fertile darkness where the rain falls and the grain ripens," as eloquently put by Hammarskjöld (1964, p. 70). Emptying the mind of the almost obsessive stream of thoughts takes some practicing. Music, art, poetry, and literature are all excellent means to practice the stillness within, as are dwelling in nature, yoga, and meditation. "Yoga" (which is a complete philosophy) literally means "to unite," and when tuning into the inner silence, we unite with the shared essence of all beings. 1 This sense of flow and timelessness thus occurs when we are fully immersed in the present moment, feeling a sense of focused concentration, enjoyment, and a loss of awareness of the separate self. In sum, the moment we know that "both the perceiving being and the perceived being are of the same stuff" (Abram, 1997, p. 67, italics in original), duality spontaneously and effortlessly collapses. The interconnectedness of everything becomes evident through all-encompassing experience, completely different from conceptual understanding, and when duality dissolves, there will be no "others", be they humans, trees, animals, or other lifeforms, to fight, exploit, destroy, or even care for, and deep sustainability will come naturally.
Genome-Wide Analysis and Profile of UDP-Glycosyltransferases Family in Alfalfa (Medicago sativa L.) under Drought Stress Drought stress is one of the major constraints that decreases global crop productivity. Alfalfa, planted mainly in arid and semi-arid areas, is of crucial importance in sustaining the agricultural system. The family 1 UDP-glycosyltransferases (UGTs) are indispensable because they take part in the regulation of plant growth and stress resistance. However, a comprehensive insight into the participation of the UGT family in the adaptation of alfalfa to drought environments is lacking. In the present study, a genome-wide analysis and profiling of the UGTs in alfalfa were carried out. A total of 409 UGT genes in alfalfa (MsUGT) were identified, and they cluster into 13 groups. The expression patterns of the MsUGT genes were analyzed using RNA-seq data from six tissues and under different stresses. Quantitative real-time PCR verification of selected genes suggested distinct roles of the MsUGT genes under different drought stresses and abscisic acid (ABA) treatment. Furthermore, the functions of MsUGT003 and MsUGT024, which were upregulated under drought stress and ABA treatment, were characterized by heterologous expression in yeast. Taken together, this study comprehensively analyzed the UGT gene family in alfalfa for the first time and provides useful information for improving drought tolerance and for molecular breeding of alfalfa. Introduction Drought is one of the main environmental stresses affecting plant growth and development, causing significant reductions in crop yield and quality. The threat of drought to agricultural systems is aggravated by the reduction of fresh water resources and increasing food demand [1]. A decrease in water availability has a deleterious effect on plant growth, because about 80-95% of plant fresh weight consists of water [2]. Plants have evolved the ability to alter biological processes to avoid the harm of stress: under drought conditions, plant stress-protectant metabolites increase and the antioxidant system is activated to maintain redox homeostasis [3]. Activated drought stress pathways also include phytohormones, such as abscisic acid (ABA), which serves as a first stress signal to promote stomatal closure, induce osmotic adjustment, and alter gene expression to accommodate conditions of water deficit [4]. Glycosylation is a pronounced and universal modification found in all living systems, which mainly affects the transport, stability, storage, reactivity, and bioactivity of the sugar acceptors [5]. Glycosyltransferases (GTs), an unusually large enzyme family, are classified into 114 families (CAZy, http://www.cazy.org, accessed on 20 April 2021) depending on similarities of amino acids, substrate specificity, catalytic functions, and the existence of conserved sequence motifs. Among them, the largest glycosyltransferase family in plant species, GT family 1, also named uridine diphosphate glycosyltransferase (UGT), catalyzes the covalent addition of sugars from nucleotide UDP-sugar donors to operative groups such as carboxyl, hydroxyl, and amine groups on a wide variety of lipophilic molecules [6]. The Identification of UGT Genes in Alfalfa The recent completion of a high-quality, chromosome-level assembly of the Zhongmu-4 genome, which combines both Illumina and PacBio sequencing data, enabled an in-depth analysis of UGT genes in alfalfa [30].
According to BLAST searches and gene annotation against the Pfam and NCBI databases, putative UGT genes were obtained. After removing redundant genes or those lacking the PSPG box, a total of 409 genes were identified as UGT genes in alfalfa. For convenience, these genes were named MsUGT001 to MsUGT409 according to the physical distribution of the genes on the chromosomes. Most of the encoded proteins ranged from 300 to 500 amino acids, except for a few above 800 and below 200 amino acids. The theoretical isoelectric point (pI) ranged from 4.77 (MsUGT183) to 9.91 (MsUGT133), with an average of 5.80. The molecular weight (Mw) varied between 14,478.83 (MsUGT133) and 248,800.02 Da (MsUGT401), with an average of 57,118.89 Da. Prediction of the cellular localization of the 409 MsUGT genes showed that 316 and 173 genes were localized in the cytoplasm and plasma membrane, respectively. Besides, 81 genes were predicted to be localized in both the plasma membrane and the cytoplasm, and 20, 21, 1, and 4 genes were distributed in the chloroplast, nucleus, extracellular space, lysosome, and mitochondrion, respectively (Table S1). Phylogenetic Analysis of MsUGTs Based on the 17 A. thaliana and 14 M. truncatula UGT sequences, a phylogenetic analysis of UGT genes in alfalfa was conducted (Figure 1). The phylogenetic trees of UGT members were clustered into 17 groups, named A-N (based on the groups identified in A. thaliana) and O, P, and R, which were newly found in M. truncatula and alfalfa. The MsUGTs clustered into 13 groups, lacking four conserved A. thaliana phylogenetic groups (A, G, J, and N), while groups O, P, and R were found in alfalfa. The number of UGT members in each group varied: the largest group was I, which contained 134 gene members, and the smallest two groups were C and R, both of which contained only 2 members. Groups L, K, H, M, E, F, B, and D had 43, 10, 13, 9, 62, 6, 4, and 100 members, respectively. In addition, the two newly identified groups O and P contained 10 and 14 members, respectively. Genomic Localization and Synteny Analysis of MsUGT Genes The genetic localization on the chromosomes was mapped based on the newly released Zhongmu-4 genome annotation information. Three hundred and eighty-one of the 409 MsUGT genes were located on 32 chromosomes; another 28 MsUGT genes were not assigned to any chromosome, being anchored on scaffolds. There were high densities of MsUGT genes clustered at particular sites on chromosomes 6 and 7 (Figure 2). The number of MsUGT genes varied from a minimum of 15 on chromosome 2 to a maximum of 118 on chromosome 6. Within them, chr2_1, chr2_2, chr2_3, and chr2_4 had 3, 7, 3, and 2 genes, respectively, while chr6_1, chr6_2, chr6_3, and chr6_4 had 33, 25, 30, and 30 genes, respectively. In addition, there were 22, 25, 16, 58, 81, and 61 UGT genes located on chromosomes 1, 3, 4, 5, 7, and 8, respectively.
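As an aside to the identification step above: the theoretical pI and Mw values reported for each MsUGT protein can be computed programmatically. A minimal sketch using Biopython, assuming a FASTA file of the candidate protein sequences (the filename is hypothetical):

```python
from Bio import SeqIO
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Hypothetical FASTA of candidate MsUGT protein sequences.
for record in SeqIO.parse("MsUGT_proteins.fasta", "fasta"):
    seq = str(record.seq).replace("*", "")  # drop stop-codon symbols
    if any(aa not in "ACDEFGHIKLMNPQRSTVWY" for aa in seq):
        continue  # ProtParam cannot handle ambiguous residues such as X
    analysis = ProteinAnalysis(seq)
    print(record.id,
          len(seq),                               # protein length (aa)
          round(analysis.isoelectric_point(), 2),  # theoretical pI
          round(analysis.molecular_weight(), 2))   # Mw in Daltons
```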
Gene duplication is regarded as one of the fundamental driving forces that contribute to genomic evolution as well as to genetic systems [31]. Therefore, we performed a collinearity analysis to identify gene duplication events among the UGT genes in alfalfa. Seven hundred and forty-six pairs of genes were identified as having duplicated segments (Figure 3); among them, 122 pairs showed tandem duplication events (Table S2). Interestingly, UGT genes mainly from group L did not have duplication events. Conserved Motifs and Gene Structure of MsUGT Genes It is well known that conserved protein motifs are essential to protein function and that exon-intron structure is crucial to gene regulation [32]. Therefore, the distributions of conserved motifs and exon-intron structures were investigated systematically in the alfalfa UGT proteins. A total of 20 motifs were verified in the 409 UGT proteins. Motif 1, which is the PSPG domain, and motifs 2 and 3 were observed in all of the UGT proteins. The distribution of the 20 motifs in each UGT member is shown in Figure S1. As expected, members in the same subfamily had similar motifs; for example, group I had motifs 4, 8, and 18, while other groups did not. Motif 20 existed only in groups E, F, and D. Besides, motif 3 was at the beginning and motifs 11 and 7 were at the tail of most of the sequences. Further, the exon-intron structures of the MsUGT genes were displayed to elucidate the structural diversity of these genes (Figure S2). Of the 409 identified genes, 269 (65.8%) contained introns and 165 (40.3%) contained UTRs. The number of introns in these genes varied from 1 to 27; 168 genes contained only 1 intron and 8 genes contained more than 10 introns. Cis-Regulatory Elements Analysis in the MsUGT Gene Promoters The transcriptional regulation of the MsUGT genes was analyzed by predicting potential cis-regulatory elements in the upstream promoter regions (2000 bp) using the online service PlantCARE.
A total of 22 cis-elements were recorded, involving abiotic stress, hormones, light response, and developmental regulation (Table S3). A total of 800 ABRE cis-elements involved in ABA responsiveness and 230 TC-rich repeat cis-elements related to defense and stress response were identified (Figure S3). Besides, 258 MsUGT genes contained more than 10 cis-regulatory elements in their promoter regions, indicating that the MsUGT genes are likely to respond to various stresses and to take part in the regulation of secondary metabolite synthesis. Expression Pattern of MsUGT Genes in Six Different Tissues To assess the expression pattern of MsUGT genes in different tissues, expression datasets of flower, nodule, leaf, root, elongating stem internodes, and post-elongation stem internodes of alfalfa were downloaded, based on a previous study. The RNA-seq datasets showed that 134 MsUGT genes were expressed in all six tissues and 407 genes were expressed in at least one tissue (Figure 4). Among these genes, 99, 60, 22, 28, 24, and 10 MsUGT genes were highly expressed (FPKM > 10) in flower, leaf, root, nodule, elongating stem internodes, and post-elongation stem internodes, respectively, indicating that diversified glycosyltransferase processes function in different tissues. Besides, there was no link between expression patterns and phylogenetic groups, suggesting that the expression pattern of each MsUGT gene is unique. Expression Pattern Analysis of MsUGT Genes under Abiotic Stresses and ABA Treatments The expression levels of the MsUGT genes under abiotic stresses and ABA treatments were analyzed using RNA-seq data downloaded from NCBI (Figure S4). The results showed that 384 (93.89%) MsUGT genes were expressed under at least one stress condition. Relative Water Content of Leaves under Drought Stress and ABA Treatments We measured the relative water content (RWC) of leaves under ABA treatment and PEG-induced drought stress, respectively. The RWC of leaves showed no significant differences among 0 h, 24 h, and 48 h under ABA treatment, whereas the RWC of leaves significantly decreased 48 h after exposure to 15% and 20% PEG, respectively. The RWC of leaves showed no significant difference 24 h after the 15% PEG treatment. Therefore, we defined 15% PEG and 20% PEG after 48 h as mild drought (MD) and severe drought (SD), respectively (Figure 5).
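The RWC values behind Figure 5 come from three weighings per leaf sample; the formula is given in the Materials and Methods section below. A minimal sketch of the computation in Python, with invented example weights (not measurements from the study):

```python
def relative_water_content(fw: float, tw: float, dw: float) -> float:
    """RWC (%) = (FW - DW) / (TW - DW) * 100, from fresh, turgid, and dry weights."""
    if tw <= dw:
        raise ValueError("turgid weight must exceed dry weight")
    return (fw - dw) / (tw - dw) * 100.0

# Invented example weights in grams (not measured values from the study):
print(round(relative_water_content(fw=0.42, tw=0.55, dw=0.11), 1))  # -> 70.5
```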
qRT-PCR Analysis of MsUGT Genes under Drought Stress and ABA Treatments Twelve MsUGT genes from different subfamilies (MsUGT003, MsUGT024, MsUGT028, MsUGT045, MsUGT091, MsUGT100, MsUGT113, MsUGT279, MsUGT280, MsUGT305, MsUGT359, MsUGT386), which potentially respond to drought stress and ABA, were chosen for qRT-PCR analysis to confirm their expression profiles under different levels of drought stress and exogenous ABA treatment. The results showed that the expression levels of the MsUGT genes varied greatly in shoots and roots under the different drought stresses and ABA treatments (Figure 6). The expression of four genes (MsUGT024, MsUGT045, MsUGT100, and MsUGT305) was upregulated in both shoot and root upon exposure to mild and severe drought stress, indicating positive regulation. In addition, MsUGT028 was upregulated in the shoot and downregulated in the root, while MsUGT113 and MsUGT279 were upregulated in the root and downregulated in the shoot when exposed to mild and severe drought stress. For ABA treatment, the expression patterns of these genes varied greatly: MsUGT003 and MsUGT305 were upregulated in shoot and root under ABA treatment, whereas MsUGT045, MsUGT091, MsUGT279, MsUGT359, and MsUGT386 were downregulated. Besides, four genes (MsUGT024, MsUGT028, MsUGT100, MsUGT113) were upregulated in the root and downregulated in the shoot, while only one gene (MsUGT280) was upregulated in the shoot and downregulated in the root when exposed to ABA treatment. Figure 6. qRT-PCR results of the relative expression of twelve selected MsUGT genes in response to drought stress and ABA treatment in shoot and root. CK, MD, SD, and ABA represent the control condition, mild and severe drought, and ABA treatments, respectively. Asterisks indicate significance compared with CK; * represents significant (p < 0.05) and ** highly significant (p < 0.01) differences. The error bars indicate the standard errors of three biological replicates. MsUGT003 and MsUGT024 in Response to Drought Tolerance and ABA Treatment in Yeast MsUGT003 and MsUGT024 were significantly upregulated under different levels of drought stress and ABA treatment. Herein, we assumed that these two representative genes contribute to coping with drought stress and ABA signaling in alfalfa. The functions of MsUGT003 and MsUGT024 in response to drought and ABA treatments were investigated in transformed yeasts using the pYES2-MsUGT003 and pYES2-MsUGT024 constructs (Figure 7). The results indicated that there was no difference between empty-vector lines and transformed lines except at the 10^6-fold dilution under the control condition.
The growth of the MsUGT003-transformed yeast and the empty vector line was significantly inhibited by the 250 µM ABA treatment, whereas the MsUGT024-transformed yeast continued growing at the 10^4- and 10^5-fold dilutions. On the other hand, under the 30% PEG condition, the MsUGT024-transformed yeasts were more sensitive to drought than the empty vector yeasts, while the MsUGT003-transformed yeasts showed resistance to drought stress, especially at the 10^6-fold dilution.

Multiple Sequence Alignment and Phylogenetic Tree Construction
Multiple alignments of the identified amino acid sequences of M. sativa, together with 17 A. thaliana and 14 M. truncatula UGTs, were conducted using the Clustal X program with the default settings. The phylogenetic tree was built in MEGA 7 using the Neighbor-Joining method with the following settings: bootstrap values were set to 1000 replicates, distances were computed with the p-distance method, and gaps among the amino acid sequences were handled with the pairwise deletion option [35]. The phylogenetic tree was displayed with the online program iTOL (https://itol.embl.de/, accessed on 25 May 2021) [36].

Chromosomal Locations of the MsUGT Genes and Gene Duplication Analysis
The locations of the MsUGT genes on the chromosomes were displayed with the online program MG2C (http://mg2c.iask.in/mg2c_v2.1/, accessed on 25 May 2021), based on the genome annotation files of M. sativa [37]. Duplication events of MsUGT genes were characterized with MCScanX and drawn with TBtools software [38].

Gene Structure and Motif Composition of the MsUGTs
The MsUGT gene structures were displayed with the online tool Gene Structure Display Server (http://gsds.gao-lab.org/, accessed on 3 June 2021), using the coding sequences and the genome sequences [39].
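The MEGA 7 run described above is a GUI workflow; as a minimal scripted analogue of the same neighbor-joining construction (a sketch only: the alignment file name is hypothetical, an identity-based distance stands in for MEGA's p-distance, and the 1000-replicate bootstrap is omitted), Biopython could be used as follows:

```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical aligned FASTA of UGT protein sequences (e.g., Clustal X output)
alignment = AlignIO.read("ugt_aligned.fasta", "fasta")

# 'identity' distance is a simple stand-in for MEGA's p-distance
dm = DistanceCalculator("identity").get_distance(alignment)

# Neighbor-joining tree; bootstrap resampling is omitted in this sketch
tree = DistanceTreeConstructor().nj(dm)
Phylo.write(tree, "ugt_nj.nwk", "newick")  # Newick output can then be styled in iTOL
```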
The consensus motifs of the MsUGTs were identified with the online MEME server (http://meme-suit.org/, accessed on 3 June 2021) [40] using the deduced amino acid sequences, with the following settings: the site distribution was set to any number of repetitions and the number of motifs to 20; the minimum motif sites and width were 5 and 6, respectively; and the maximum motif sites and width were both set to 100.

Cis-Regulatory Element Analysis
The 2000 bp sequences upstream of the MsUGT genes (promoter regions) were extracted using TBtools. Cis-regulatory elements in these 2000 bp regions were identified using the online PlantCARE service (http://bioinformatics.psb.ugent.be/webtools/plantcare/html/, accessed on 3 June 2021) [41].

Plant Materials, Relative Water Content, Drought and ABA Treatment
Seeds of M. sativa L. cv. Gongnong No. 1 were surface-sterilized with sodium hypochlorite for 3 min and rinsed with sterile water three times; the seeds were then placed in Petri dishes on moistened filter paper for germination. After 3 days, uniform seedlings were transferred into hydroponic units containing 1/2 MS (Murashige & Skoog) medium in a greenhouse. The greenhouse conditions were a 16 h light (25 °C)/8 h dark (20 °C) cycle and 75% relative humidity. Drought stress was applied to one-month-old seedlings using polyethylene glycol 6000 (PEG-6000): mild drought (MD) and severe drought (SD) conditions were created with 15% and 20% PEG-6000, respectively, and the control condition (CK) was created with sterile water. The ABA treatment was applied by spraying 100 µM ABA solution onto the whole plant. Three young leaves were weighed immediately to obtain the fresh weight (FW) at 0 h, 24 h and 48 h under the MD, SD and ABA treatments, respectively. Turgid weights (TW) were then measured after soaking the leaves in distilled water for 5 h, and the leaves were subsequently dried in an oven at 75 °C for 24 h to obtain the dry weights (DW). Relative water content was calculated as RWC (%) = (FW − DW)/(TW − DW) × 100. Young leaves and roots were sampled before the treatments as controls and 48 h after the treatments, flash frozen in liquid nitrogen, and thereafter stored at −80 °C for RNA preparation. All experiments were conducted with three biological replicates.

RNA-seq and qRT-PCR Analysis
The RNA-seq data for the MsUGT genes were accessed from NCBI (http://www.ncbi.nlm.nih.gov/sra, accessed on 10 July 2021) to investigate the expression patterns in different tissues (flower, nodule, root, leaf, elongating stem internodes and post-elongation stem internodes; SRP055547) and under different abiotic stresses (drought stress (SRR16068779-83, SRR16068789-90), ABA (SRR7166039-40, SRR71660320-21), salt stress (SRR14999928-33), low temperature (SRR9888362-67) and high temperature (SRR10166266-69, SRR10166274-75)). The nucleotide sequences of all MsUGT genes were blasted against the transcriptome datasets, and the expression values and heatmaps of the MsUGT genes in the different tissues and under the different stresses were generated with TBtools. Twelve genes from different subfamilies with high FPKM values under drought stress and ABA were selected for quantitative real-time PCR (qRT-PCR) experiments. RNA was extracted from shoot and root tissues after PEG-6000 or ABA treatment in accordance with the instructions provided with the RNAiso reagent (Takara, Dalian, China). First-strand cDNA was reverse-transcribed from the extracted RNA, after removal of genomic DNA, using the TaKaRa reaction kit.
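Before turning to the qPCR itself, the promoter-extraction step from the cis-regulatory element analysis above (performed here with TBtools) reduces to a simple coordinate operation; the sketch below is purely illustrative, with hypothetical file names and a simplified GFF3 attribute parse:

```python
from Bio import SeqIO  # Biopython

genome = SeqIO.to_dict(SeqIO.parse("alfalfa_genome.fasta", "fasta"))

promoters = {}
with open("msugt_genes.gff3") as gff:
    for line in gff:
        if line.startswith("#"):
            continue
        chrom, _, feature, start, end, _, strand, _, attrs = line.rstrip("\n").split("\t")
        if feature != "gene":
            continue
        gene_id = attrs.split("ID=")[1].split(";")[0]  # naive attribute parse
        seq = genome[chrom].seq
        start, end = int(start), int(end)
        if strand == "+":
            # 2000 bp immediately upstream of the gene start (GFF is 1-based)
            promoters[gene_id] = seq[max(0, start - 2001):start - 1]
        else:
            # On the minus strand, "upstream" lies after the gene end in
            # genome coordinates, read on the reverse complement
            promoters[gene_id] = seq[end:end + 2000].reverse_complement()
```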
The qPCR was conducted using a SYBR Green qPCR kit (Sangon, Shanghai, China), in accordance with the manufacturer's instructions, on a CFX96 Real-Time PCR Detection System (Bio-Rad, Los Angeles, CA, USA). qPCR was performed in a 10 µL system containing 5 µL of 2× SG Fast qPCR Master Mix, 0.2 µL each of the forward and reverse primers (10 µM), 1 µL of DNA buffer, 1 µL of cDNA, and 2.6 µL of double-distilled water. The relative expression level of each MsUGT gene was determined using the 2^−ΔΔCt method [42]. The primers were designed using SnapGene software, with melting temperatures between 58 and 65 °C, and were synthesized by Sangon Biological Engineering Technology (Shanghai, China). Each biological replicate was supported by three technical replicates.

Heterologous Expression Validation in Yeast
The pYES2−MsUGT003 and pYES2−MsUGT024 constructs were built according to a previous study [19]. Briefly, the full-length coding sequences of MsUGT003 and MsUGT024 were amplified from the reverse-transcribed RNA and cloned using a ClonExpress® MultiS One Step Cloning Kit (Vazyme Biotech Co., Ltd., Nanjing, China) according to the manufacturer's instructions. The two cloned genes were then inserted into the pYES2 expression vector using specific primers (Table 1). After sequence validation, the empty pYES2 plasmid and the recombinant pYES2−MsUGT003 and pYES2−MsUGT024 plasmids were transformed into the Saccharomyces cerevisiae strain INVSc1. The yeasts were cultivated in liquid synthetic complete medium lacking uracil (SC−Ura) containing 2% galactose and were then collected for the drought and ABA treatments. The cells were resuspended in 30% PEG-6000 or 250 µM ABA, and the prepared yeast cultures were serially diluted 10-fold and grown on solid medium containing glucose for 2-3 days to assess the growth of the transformed lines. Table 1. Primers used in this study.

Discussion
Plant UGTs form the largest glycosyltransferase family; they regulate glucose metabolism and homeostasis and participate in detoxification [43]. They play essential roles in plant growth and development and in coping with environmental changes by regulating glucose metabolism, homeostasis, and secondary metabolites [32,43]. The UGT multigene family has been profiled in many plant species, including A. thaliana [13], T. aestivum [16], Z. mays [14], Linum usitatissimum [44], and legumes [18], including M. truncatula, M. albus, T. pratense, Lotus japonicus, Glycine max and Phaseolus vulgaris. However, the UGT family in alfalfa had not been comprehensively analyzed so far. In this study, a systematic analysis of the alfalfa UGT gene family was conducted, covering phylogenetic relationships, gene locations, conserved motifs, intron/exon positions, gene duplication and gene expression. The recently published genome sequence of alfalfa provided an opportunity to investigate the diversity of the alfalfa UGT multigene family in great detail. In the present study, we identified 409 MsUGT genes. It is worth noting that this number is larger than in any plant studied so far, such as A. thaliana (120) [13], Gossypium hirsutum L. (274) [45], G. max (242) [46], M. albus (189) [19] and M. truncatula (243) [18]. This is probably due to the 32-chromosome genome assembly of alfalfa and its large genome size (3068 Mb). Phylogenetic trees are commonly constructed to compare gene family members and identify their similarities and differences [47]. The phylogenetic tree showed that the 409 MsUGTs clustered into 13 groups.
The number of UGT groups in different plant species varied widely: A. thaliana, G. hirsutum, M. truncatula and Cajanus cajan UGTs clustered into 14, 10, 11 and 15 groups, respectively [13,18,45,47]. Alfalfa UGTs lacked the conserved A, G, J and N groups; however, groups O and P, which were newly identified in T. aestivum [16], had 10 and 14 members, respectively, and another newly identified group, R, from C. sinensis [48] had 2 members in alfalfa. Previous studies indicated that group E contained the most UGT members in most plant species. Our research showed that group E was only the third largest group, containing 62 genes, whereas group I has expanded to become the largest group, containing 134 genes and comprising 32.8% of the putative UGT genes in alfalfa. MsUGT genes belonging to group I cluster on chromosomes 6 and 7; by contrast, there are no gene clusters in group E in alfalfa. The expansion of group I in alfalfa may have contributed to its evolutionary adaptation to different stress conditions. As previously reported, UGTs belonging to group I, such as UGT83A1 in O. sativa, have been shown to be induced by abiotic stresses and to catalyze the glycosylation of flavonoids to improve plant tolerance [49]. The analysis of chromosome location indicated that the MsUGT genes were unevenly distributed across the 32 chromosomes and mainly clustered on chr6_1, chr6_2, chr6_3, chr6_4, chr7_1 and chr7_2. Gene duplication contributes to evolutionary novelty and genome complexity by favoring the accumulation of new molecular activities [50,51]. There were 746 pairs of segmental duplications in the alfalfa UGT gene family, suggesting that gene duplication played an essential role in the active expansion and evolution of the UGT family in alfalfa. The intron mapping of the 409 MsUGT genes showed that 65.8% of the members contained introns, which is higher than the proportion (42%) in A. thaliana UGT genes [13] and close to that (60%) in L. usitatissimum [44], indicating evolutionary diversity of the alfalfa UGT genes. All MsUGT sequences possess motif 1, which contains the UGT-conserved PSPG box. In addition, group E contains 16 motifs and the largest group, I, contains 18 motifs, suggesting that the expansion of motif numbers may help improve the stress tolerance of alfalfa. Moreover, most gene members belonging to the same subfamily possess comparable motifs and share similar exon-intron patterns in terms of intron numbers and lengths. These results may provide useful information on the evolution and function of the MsUGTs. Plant UGTs, as enzymes for glycosylation, function in many processes involved in plant growth and abiotic stress. To further understand the function of UGTs in alfalfa, the expression patterns in different tissues and under abiotic stresses were analyzed based on publicly available data. The results revealed that 407 genes (99%) were expressed in at least one tissue. Similar patterns were found in Z. mays and L. usitatissimum, wherein 82% and 73% of UGT genes, respectively, were expressed in at least one tissue [14,44]. In addition, 384 MsUGT genes (94%) were expressed under at least one stress. UGTs participate in the glycosylation of substrates, affecting their water solubility, biological activity, subcellular localization and transport characteristics, thereby maintaining metabolic balance in plant cells to improve drought resistance [52].
In alfalfa, a genome-wide association study showed that a UGT gene was associated with forage quality under conditions of water deficit [53]. In this study, the roles of twelve genes highly expressed under drought stress and ABA treatment were examined by qRT-PCR. The expression of these MsUGT genes in shoots and roots varied when exposed to PEG and ABA treatments, indicating that these representative MsUGT genes may have wide-ranging functions in drought stress and ABA signaling. Moreover, we cloned MsUGT003 and MsUGT024, transformed them into yeasts, and confirmed their functions under ABA and drought treatments, respectively. The results showed that the MsUGT003-transformed yeasts were more tolerant to drought stress, which was consistent with the qRT-PCR results. As previously reported, UGT76E11, which belongs to group E in A. thaliana, modulates flavonoid metabolism and enhances the scavenging capacity for ROS to improve drought resistance [54]; here, the MsUGT003-transformed yeast gained more tolerance to the drought condition, probably due to positive regulation by MsUGT003 under drought stress. In addition, the MsUGT024-transformed yeasts were more tolerant to the ABA treatment. ABA accumulates when plants are exposed to drought and plays an important role in reducing water loss by transpiration under water stress. UGTs glycosylate ABA to ABA-glucose ester (ABA-GE), a storage end product of ABA [55]. When drought stress ceases, the ABA concentration rapidly returns to the normal level through UGT-mediated glycosylation to ABA-GE [24]. The ABA tolerance of the MsUGT024-transformed yeast therefore indicates that MsUGT024 probably plays an important role in ABA metabolism in alfalfa. By contrast, the MsUGT003-transformed yeast appeared less tolerant to ABA treatment, and the MsUGT024-transformed yeast showed less tolerance to drought stress. Therefore, the interplay between UGTs and ABA, as well as the specific functions of UGTs in alfalfa under drought stress, needs further investigation.

Conclusions
In the present study, the UGT gene family of alfalfa was analyzed systematically and comprehensively. In total, 409 UGT genes were identified, and their phylogenetic relationships, chromosomal locations, duplication events, exon-intron structures, conserved motifs and cis-regulatory elements were evaluated to gain better insight into the role of the UGT gene family in alfalfa. The RNA-seq analysis confirmed that MsUGT genes are expressed in different tissues and under different abiotic stresses, and qRT-PCR further confirmed the digital expression results under the drought and ABA treatments. Heterologous expression in yeast indicated that MsUGT003 and MsUGT024 respond to drought stress and ABA signaling, respectively. In summary, our study lays groundwork for exploring the molecular functions of MsUGT genes in drought stress and for applying molecular approaches in alfalfa breeding, although a series of further experiments is required to confirm these functions.
Generic Relationships between Field Uses and Their Geographical Characteristics in Mountain-Area Dairy Cattle Farms: In mountain farms, challenges posed by the degree of land slope, altitude and harsh climate further compound multiple other possible constraints, particularly in relation to the distance of fields from the farmstead. This study focused on how mountain-area dairy farmers factor the geographical characteristics of their fields into their field-use decisions. To that end, we surveyed 72 farmers rearing the traditional Salers breed of cattle and 28 specialised dairy-system farmers in the Massif central region, France. Information was collected on the uses and geographical characteristics of all grassland fields (n = 2341) throughout the entire outdoor grazing season, without identifying farmers' rationales for their field-use decisions. Field-use classes were constructed for the traditional Salers system per group of fields (grazed-only, cut-only, grazed-and-cut) and then used to classify the fields of the specialised dairy system. The geographical characteristics, which were associated afterwards, differed significantly between the field groups and between the field-use classes. Grazed-only fields were found to be more sloping, and cut-only fields were smaller and further from the farmstead. Distance-area combinations differed according to field use (animal category, earliness of first cut, grazing and cutting sequence) and were decisive for all field-use classes. This study allowed the identification of generic relationships between field uses and their geographical characteristics in mountain-area dairy cattle farms.

Introduction
Field geography largely dictates the activity options and organisations adopted by livestock farmers, which means that field geography is key to the business of livestock farming. Different field geographies (which vary in size, fragmentation, dispersion, altitude, slope, type of soil, and more) offer different sets of perspectives and possibilities. The way livestock farmers adapt to the specific features of their fields in order to accommodate their livestock activities has been a focus of research over the last few decades, chiefly in relation to more complex landscape settings, such as hedgerow network zones or mountain uplands [1-3]. A majority of papers have focused on mountain-area zones, where challenges posed by the degree of land slope, altitude and harsh climate further compound the many other possible constraints. Much of this research has been driven by French teams, which can be explained by the large share of mountain-area farms in France (around 17% of French farmland), particularly in the Massif central (56% mountain-upland farms accounting for 62% of the utilised agricultural area), where cattle farming is predominant [4]. Furthermore, challenges related to distance increasingly compound the other farm-work constraints. These challenges are largely due to the expansion of field patterns: the average farmland area tripled in size between 1970 and 2010, reaching 57 ha (or 48 ha excluding collective areas) in mountain-area farms in 2010 [4]. Analysis of field utilisation has often been addressed at the field level, considered the elementary unit (or mesh) of analysis [1,3], and sometimes aggregated up to the farm level [2,5].
The term 'field' generally equates to the functional field, i.e., a unit of land that livestock farmers use for the same purpose throughout a season, resulting from several neighbouring cadastral parcels being merged [6-8] or from one cadastral parcel being divided up [2,9]. Direct surveying is often the preferred approach for capturing field features and uses and the rationales given by the farmers, but the method is time-intensive, especially if the objective is to cover every factor exhaustively (e.g., all fields in a region and all uses across a whole season) and/or to achieve a representative picture of a whole population (e.g., a large sample). Few papers have reported surveys undertaken across a large sample of livestock farms. Research teams sometimes shorten the time required for data collection by conducting a small number of surveys and supplementing these data with expert experience [10] or data from farm networks [5], or by narrowing the scope to cover a subset of the field parcels [11]. Whatever the method implemented or objective pursued, all of the studies on this topic report that livestock farmers use field traits to gauge the most appropriate use of a field given the livestock system chosen and the objectives set. As field patterns become increasingly sprawled and scattered, livestock farmers are moving to optimise how they run their farms and to develop purpose-led strategies, in which decisions on how best to use fields are pivotal [12]. These strategies integrate the specific geographical traits of the fields, along with other factors such as herd characteristics, herd management decisions or labour availability [1,6,9,10,13]. These various publications converge to show that livestock farmers develop their own logics for using fields according to their geographical characteristics. The methods deployed tend to favour an approach based on understanding each system in order to bring out a general principle. However, is it possible to identify a general principle through statistical analysis of the relationships observed between the use of fields and their geography? This question prompted us to posit two hypotheses: (i) field-use logics have common foundations, independently of the livestock system or the farmer implementing it, and (ii) it is possible to bring out these common foundations through the analysis of a large amount of data, without analysing each system individually. Our objective here was to show and characterise the interrelationship of field use and field geography via a farmer-based approach, exhaustive at the level of each farm and representative of the population of dairy cattle farmers in a mountain-upland area. The expected outcome was evidence of a set of field-use logics according to geographical characteristics that are generic to mountain-upland dairy cattle systems.

Data Collection
The information was collected through a survey of 100 grass-fed dairy-cattle farmers in the mountain uplands of the Cantal and Puy-de-Dôme regions, France. Two survey campaigns were conducted, in 2005 (n = 72 surveys) and 2009 (n = 28 surveys), focusing on traditional Salers-system farmers and specialised dairy farmers, respectively. All surveys were conducted on the farms by two interviewers, in a single pass, using a questionnaire.
The breeders were recruited in 2005 from among the 90 breeders listed by the Tradition Salers Association, with the aim of analysing the relationships between field uses and their geographical characteristics. The breeders surveyed in 2009 were selected by the Cantal and Puy-de-Dôme milk control authorities, with the aim of analysing the relationships between calving periods and forage management, using a questionnaire designed to collect the same information on fields and farms as in 2005. General farm and herd-system characteristics are presented in Table 1. The across-farm variability stems partly from specific features of the traditional Salers system, with cows simultaneously producing milk and 10-month weanling calves [14], and cows either milked throughout lactation or switched to suckling after a few months in milk. Milk output from Salers cows (2223 kg/cow/year [15]) is lower than in dairy breeds, largely because a share of the milk is systematically sucked by the calf, which has to be present alongside the dam for it to let down milk. Details of the uses made of each field (e.g., dates of cuts, dates of animal lots turned into/out of the field, animal types and herd counts) were recorded using grass-harvest and grazing-period diaries. The geographical characteristics of each field were collected and described via four variables. Surface area (in ha) was taken from the administrative records filed by the farmers, and the other geographical criteria were detailed based on the farmers' statements. Distance from the farmstead (in km) corresponds to the journey travelled by road to the field. Slope was qualified by its intensity as perceived by the farmer (shallow, average, or steep) and by its proportion across the field. Altitude (in m.a.s.l.) corresponds to the mean altitude of the field.

Data Analysis
The data were analysed at the grass-field scale, defined as an area of land used for the same purpose throughout the grazing season [2]. The use made of each field was described, from turn-out to pasture in spring until return to stall in autumn, via eight variables. The starting and ending use-dates served to capture the time-in-use (in days) and its position in the season. The date of first use (cut or graze) was characterised in relation to a theoretical date of the beginning of ear emergence (TDBE), calculated on the premise that Massif central permanent grassland starts heading (ear emergence) at 120 calendar days at an altitude of 400 m.a.s.l., with a 6-day lag for every further 100 m.a.s.l. [16]. The date of the first cut and the number of cuts were collected to complete the descriptive data on grass-resource harvesting. The animal subcategory served to account for the animals that used the field during the grazing season: milked cows, suckler cows (un-milked late-lactation cows) or dry cows, and calves (0-1 year), young heifers (1-2 years) and old heifers (2-3 years). The grazing intensity (in LU × day/ha, total or per animal category) was calculated from the field area and, for each grazing period, the duration (number of days between the entry and exit dates), herd size and animal category. The forage supply (in kg DM/ha) was calculated as the total forage distributed to animals on the field during the grazing period. The first-round survey campaign was previously analysed by Garcia-Launay et al.
(2012) [9], who used principal component analysis (PCA) and hierarchical agglomerative clustering (HAC) to stratify the fields (n = 1586) into three groups according to use-type, i.e., a group of grazed-only fields (six classes), a group of grazed-and-cut fields (six classes), and a group of cut-only fields (three classes). Thereafter, we used the aggregation protocol described in Perrot (1990) [17]. For this, the fields from the second-round survey campaign (n = 756) were separated by multi-level sorting (in Excel) according to their use profile, and were then aggregated into the same clusters as determined previously in Garcia-Launay et al. (2012) [9]. An additional class was created for the fields grazed by dry cows, which had not been described in the first-round survey campaign. The three classes of cut-only fields identified in 2012 were rearranged here to factor in the earlier first grass harvest and the greater number of grass harvests observed in the second-round survey campaign. Each field class was then related to the corresponding geographical characteristics. ANOVA was performed using XLSTAT software to determine significant differences between the field groups and between the field clusters. When the ANOVA result was significant, pairwise means were compared using the Tukey test, with the significance threshold set at 0.05. The proportion-of-slope variable, being angular, was arcsine-square-root-transformed to satisfy the hypotheses of normal distribution and homogeneity of variances [18]. For the same purpose, distance from the farmstead and surface area were natural-log-transformed.

Results
The total survey dataset counted 2341 fields: 1148 (49%) were grazed-only, 962 (41%) were grazed-and-cut, and 231 (10%) were cut-only (Table 2). The grazed-only fields were used earlier in the season and for longer periods, whereas the cut-only fields were used later in the season and for shorter periods. Grazed-and-cut fields were intermediate between grazed-only and cut-only on all use-type criteria. Grazed-only fields were the steepest and highest fields, grazed-and-cut fields were the lowest fields, and cut-only fields were the smallest and furthest from the farmstead (Table 2 and Figure 1). Table 2. Field uses and geographical characteristics (all fields and stratified by field group) (Mean (SD)).

Grazed-Only Fields
The animal category structured the seven classes of grazed-only fields (Ca, calf; He1, young heifer 1-2 years; He2, old heifer 2-3 years; MC, milked cow; SC, suckler cow; DC, dry cow; DivG, diversified grazing) (Table 3). The diversified grazing class (DivG) and the milked cow class (MC) accounted for the largest shares of fields (32% and 25% of the total, respectively), whereas the dry cow class (DC) accounted for the smallest (3% of the total). All of the fields were used almost exclusively by a single category of animal (71%-94% of total grazing), except for the DivG fields, which were grazed by all categories but mainly by milked cows and heifers. The calf class (Ca) was the most intensively grazed and received the most distributed forage, whereas the DivG and DC fields were the least intensively grazed and received the least forage. Grazing started very early in the season for all field classes (from 20 to 27 days ahead of the TDBE). Table 3. Field uses and geographical characteristics of the grazed-only field classes (n = 1148) (Mean (SD)).
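For concreteness, the derived variables and the angular transformation described in the Data Analysis section above can be written out as follows (a minimal sketch with our own function names, not the authors' spreadsheet implementation; for example, a field at 900 m.a.s.l. has a TDBE of day 150, i.e., around the end of May):

```python
import math

def tdbe_day_of_year(altitude_m: float) -> float:
    """Theoretical date of beginning of ear emergence (TDBE):
    day 120 at 400 m.a.s.l., plus 6 days per additional 100 m."""
    return 120 + 6 * (altitude_m - 400) / 100

def grazing_intensity(lu: float, days: float, area_ha: float) -> float:
    """Grazing intensity in LU x day / ha for one grazing period."""
    return lu * days / area_ha

def arcsine_sqrt(proportion: float) -> float:
    """Angular transformation used for the proportion-of-slope variable."""
    return math.asin(math.sqrt(proportion))

# Example: a 1.5 ha field at 900 m.a.s.l. grazed by 12 LU for 20 days
print(tdbe_day_of_year(900))           # 150.0 -> roughly 30 May
print(grazing_intensity(12, 20, 1.5))  # 160 LU.day/ha
print(arcsine_sqrt(0.35))              # transformed slope proportion
```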
All of the grazed-only field classes were at a similar altitude and shared a similar proportion of sloping ground (Table 3). However, the grazed-only field classes were differentiated by various distance-area combinations (Figure 2 and Table 3). Fields allocated to young or not-immediately-productive animals (Ca, He1, He2, DC), which do not require large amounts of grazeable grass, were smaller and tended to be closer to the farmstead the greater the need for supervision. Ca fields were thus closest to the farmstead, while He2 fields were furthest away. Conversely, fields allocated to adult or immediately-productive cows (MC, SC), which require sizeable amounts of grass to graze, were bigger and tended to be closer to the farmstead the greater the need for hands-on intervention. The MC fields were thus kept close to the farmstead to facilitate the twice-daily milking work, whereas SC fields were further away. DivG fields had the same geographical profile as the grazed-only group as a whole, although on average they were bigger, more steeply sloped, and further from the farmstead.

Cut-Only Fields
The earliness of the first cut structured the three classes of cut-only fields (Table 4). The intermediate-cut class (iC) was the biggest of the three (57% of the total). Late-cut fields (lC) were used relatively late and were cut only once, as opposed to the early-cut fields (eC), which were first cut early in the season and cut several times; iC fields were logically intermediate between lC and eC. Field area again emerged as a discriminating geographical factor for the use of cut-only fields (Figure 3 and Table 4), even though these fields had already been identified as smaller than grazed-only or grazed-and-cut fields. The eC fields were the largest and also presented favourable factors for more intensive use, with an intermediate distance from the farmstead and a slightly lower altitude. The iC fields were furthest from the farmstead but were nevertheless cut several times. The lC fields were closer to the farmstead but were handicapped by a higher altitude.

Grazed-and-Cut Fields
The use sequence (i.e., the order and number of grazings or cuts and the period of use in the season, in terms of earliness and duration) structured the six classes of grazed-and-cut fields (Table 5). The grazed then cut then grazed (GCG) and cut then cut then grazed (CCG) classes had the biggest counts (27% and 21% of the total, respectively), and the grazed then cut (GC) class had the lowest count (3% of the total). The fields that started with grazing (GC and GCG) were used from a very early date (37 and 38 days ahead of the TDBE) and had the most extreme (the shortest and longest, respectively) durations of use. Among the fields that started with a cut (early cut then grazed (eCG), late cut then grazed (lCG), and cut then cut then grazed (CCG)), the time of first use was latest for lCG fields and the number of cuts was highest for CCG fields. All grazed-and-cut fields were mainly grazed by milked cows (36%-50%), with GCG fields the most intensively grazed and GC fields the least intensively grazed. Diversified-sequence fields (DivS) were intermediate on all use-type criteria (grazing and cutting). Geographical characteristics differed according to the type of first use (Figure 4 and Table 5).
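As a toy illustration of this sequence coding (our own sketch; the authors derived the classes by multi-level sorting and aggregation, not by code), a field's ordered diary of grazing periods (G) and cuts (C) collapses into labels such as GCG or CCG:

```python
def sequence_label(events):
    """Collapse an ordered diary of field events ('G' = grazing period,
    'C' = cut) into a use-sequence label such as 'GCG' or 'CCG'."""
    return "".join(events)

def field_group(events):
    """Assign the field to one of the three field groups."""
    kinds = set(events)
    if kinds == {"G"}:
        return "grazed-only"
    if kinds == {"C"}:
        return "cut-only"
    return "grazed-and-cut"

print(sequence_label(["G", "C", "G"]), field_group(["G", "C", "G"]))  # GCG grazed-and-cut
print(sequence_label(["C", "C", "G"]), field_group(["C", "C", "G"]))  # CCG grazed-and-cut
```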
Fields first used for grazing (GC and GCG) were closer to the farmstead, and were larger and steeper where grazing use was greater (GCG); they thus shared some characteristics with MC fields. The fields that were cut first (CCG, eCG, lCG) tended to be smaller, shallower-sloped, and further from the farmstead, thus sharing some characteristics with cut-only fields. eCG and lCG fields differed significantly only in altitude, with a gap (156 m.a.s.l.) large enough to offset the start of spring plant emergence (Table 5). The DivS fields, which were smaller and further from the farmstead and therefore posed more constraints for both grazing and mowing, were mostly left for adult cows (suckler and milked) to graze. These fields, predominantly used in spring, corresponded to multi-option fields that could either be mobilised to increase the provision of grazeable grass and leave other grazed fields more time for regrowth if weather conditions were not right for grass growth, or be set aside for cutting if conditions were right. Table 5. Field uses and geographical characteristics of the grazed-and-cut field classes (n = 962) (Mean (SD)). SD, standard deviation; LU, livestock unit; DM, dry matter. Different letters within a row indicate significant differences at P < 0.05: *** P ≤ 0.001; ** P ≤ 0.01; * P < 0.05; ns, non-significant. 1 GC, grazed then cut field; GCG, grazed then cut then grazed field; eCG, early cut then grazed field; lCG, late cut then grazed field; CCG, cut then cut then grazed field; DivS, diversified sequences; 2 TDBE, theoretical date of the beginning of ear emergence.

Figure 5 plots the field classes according to distance, area and grazing intensity. The left of the figure features the small fields allocated to the young and non-productive animals, calves and dry cows, with the closest fields kept for calves and given regular forage supplements and attentive supervision. The middle of the figure features heifer fields that were appropriately average-sized for growing animals, located a short distance from the farmstead for the young heifers that still need a fair amount of surveillance, or at a longer distance for the older heifers that are more low-maintenance. The right of the figure features the big fields allocated to productive animals: very close-by fields were kept for milked cows due to the milking constraint, fields further away were used for the suckler cows, and fields even further away were used for diversified grazing. Grazing intensity was highest on grazed-only fields, particularly those used for animals with growth or production requirements fed a high proportion of grazed grass. However, the large diameter of the calf fields illustrates their specific function as 'parking' fields, located very close to the cowshed and provisioned with 10 times more feed forage than all other fields. All of the grazed-and-cut fields were at an intermediate distance, with the smallest fields cut first and the biggest fields grazed first. Figure 5. Grazed-only field classes and grazed-and-cut field classes plotted according to distance, area and grazing intensity.
Ca, calf; He1, young heifer 1-2 years; He2, old heifer 2-3 years; DC, dry cow; MC, milked cow; SC, suckler cow; DivG, diversified grazing; GC, grazed then cut field; GCG, grazed then cut then grazed field; eCG, early cut then grazed field; lCG, late cut then grazed field; CCG, cut then cut then grazed field; DivS, diversified sequences field.

Discussion
The 1990s marked a surge in research to understand the logics employed by livestock farmers to rationalise their field-use strategies, especially in 'unfavourable' settings in terms of natural environment and/or structural factors. Here we continue this line of research and pursue the novel approach initially developed by Garcia-Launay et al. (2012) [9]. We approached the analysis at the level of the field and its uses throughout the entire grazing season, at the scale of a population of working farm operations, without integrating the individual logic applied by each livestock farmer, before going on to associate the geographical characteristics of the fields. Moreover, we mobilised a large sample (100 farmers, 2341 fields) in order to ascertain a common logic applied by a population of farmers. Most of the extant research has addressed field-use practices as part of the farmer's wider livestock-system strategy, where field geography is just one of many components of the system. Although these approaches differ from ours, and despite the diversity of study locations and objectives, studies of field use-field geography relationships have produced findings that are consistent and coherent with ours, particularly regarding distance, area and slope. Morlon and Benoit (1990) [13] characterised field uses on farms in north-eastern France (the Lorraine region) via a hierarchical ranking of their geographical constraints and, in addition, highlighted correlational fits between field uses and sets of field constraints, e.g., distance and area for dairy production, or slope and area for feed-crop harvesting. Thénail and Baudry (2004) [2] and Marie and Delahaye (2009) [3] studied hedgerow network (bocage) areas in France, Spain, and the UK, and observed concentric field uses according to distance-to-farmstead thresholds, with the closest fields used for cows in milk, and fields used for feed harvests and for grazing other types of animals located further away, which is in line with our results. Likewise, Brunschwig et al. (2006) [6], in relation to Massif central farms, showed field uses patterned by the type of animal (e.g., dairy cow or heifer) in a way that was bounded by distance thresholds and moved with centrifugal or centripetal forces over the course of the grazing season. Our results, which show the importance of the closeness or remoteness of fields according to their use, are consistent with those of these three studies. However, we did not identify any distance thresholds or centrifugal or centripetal trends over the season. Both Brunschwig et al. (2006) and Marie and Delahaye (2009) [3,6] also highlighted the prominent role of various distance-area combinations in field uses, as we did. Andrieu et al. (2007) [5] identified both slope and distance to farmstead as determinant factors of field use in Auvergne-region mountain landscapes (shallower-sloped fields used for cutting, closer-to-cowshed fields used for grazing dairy cows), whereas surface area was only identified as a determinant factor for fields used to harvest grass silage.
Our results are in line with these authors' observations, but allow a much more precise treatment of these elements. In Aubrac-region farms further south, Martin et al. (2009) [10] identified distance (from the barn or from the closest fields) as the single biggest determining factor (due to the possibility of returning the herd daily to the barn or moving it to another field without transport), followed by slope (possibility of mechanisation) and the distance-area combination (possibility of feeding 5 LU/ha for 3 days if the field is remote from the farmstead). Our results are consistent with these authors' analyses, which are based on expert opinion, but ours are more generic in that they were obtained from direct interviews with many farmers. Various types of grassland-use sequences (harvest(s) and grazing) over the course of a campaign (up to five or six) emerged in work by Dubeuf et al. (1995) [19] and Rapey et al. (2008) [7], which converges with the diverse patterns of field use identified here (six classes of grazed-and-cut fields). Dubeuf et al. (1995) [19] also reported, as we did, that cut-only fields were cut relatively late, that proximity (a 2 km perimeter) was an important factor for grazed-and-cut fields with an early first use (cutting or early spring grazing), that more distant and sloping fields were used for grazing heifers or dry cows, and that big fields were used for dairy cows. Marie and Delahaye (2009) [3] noted, however, that dairy-cow fields were always nearby, which agrees with our results, but found that such fields were not always big, which contrasts with ours. The geographical criteria employed here at the field level (slope, distance, area, altitude) are the same as those employed in studies conducted at wider scales (local community, region, province) to model trajectories of adaptation around territorial issues. Such modelling has addressed, for example, agricultural land abandonment and reforestation in Italian mountain areas [20,21], climate and socio-economic scenarios in the French Alps [22], land fragmentation across dairy farms in Spain [23], strategic land-use allocation between livestock zones and urban zones in the Netherlands [24], and permanent grassland landscapes in Portugal [25]. Authors often add caveats limiting the generalisability of their results, citing a domain of validity tied to a given type of activity and/or a given territorial community [6,11,13], or an underpowered volume of data [10]. Here, our study collected a very large volume of field data and built a set of field-use classes showing significant differences, including for the associated geographical variables, which underlines a robust set of results generic to mountain-upland dairy cattle systems in the Massif central. However, this generalisability still cannot extend to upland regions where grazing is organised around moving the herd up to high summer pastures, like the Alps [19] or the Pyrenees [26]. The results can be extended to mixed-purpose farms (dairy and suckling systems), which would encompass 46% of the farmers surveyed, but are not readily extendable to pure suckling-system operations. Dairy farming has a narrower set of field-use options than suckling farming, particularly for cows in milk [10], and suckling-herd fields are managed in blocks (sets of several proximate fields that can be rotated to offer adequate pasturage) so as to economise on the frequency and duration of herd movements [6].
The relativity inherent to the field pattern, to its localisation, and/or to the livestock farmer is often cited as a limitation to the generalisability of study findings. Priority rules governing the identified geographical constraints can effectively shift when a stronger constraint emerges. For example, a high density of small fields bounded by hedgerows may prompt breeders to consider the linear extent of hedgerows more important than field area and distance to farmstead [2]; a lack of grass on near-to-farmstead fields may lead them to make silage on fields that are further away [13]; and a low mechanisable area may push back the grazing-distance thresholds for dairy cows in an effort to preserve cuttable land [5]. Le Ber and Benoit (1998) [27] demonstrated distance relativity in a heavily wooded environment by differentiating the distance to the farmstead (for example, for grazing cows in milk) from the distance to the forest, as pastures are best located with forest cover to provide shelter from wind and sun, whereas crop fields should be established further away from forests. Some breeders may also reprioritise geographical criteria in response to challenging land-use constraints, typically taking advantage of the fields' altitudinal staggering to profit from staggered grass growth and thus distribute less concentrated feed [8]. However, Marie and Delahaye (2009) [3] concluded that the same basic principles of rationality (in field uses according to the distance-area combination) were common to dairy farms in all five hedgerow network areas studied in France, England, and Spain, independently of farm size, level of intensification or fragmentation. Lastly, collecting information based on expert input or farmers' statements also introduces a degree of subjectivity into the findings [7,10]. The assessment of the level of slope and of the land-use potential of sloping fields is hugely dependent on the farmer's own subjectivity, but also on the equipment at their disposal [8]. Here, our design, surveying a large number of farms and a large number of fields, enabled us to smooth out local or human particularities and to objectively capture field use-field geography relationships at the scale of a large population of grass-fed dairy cattle (or mixed dairy-suckler system) farms in a mountain-upland area. However, our study did not integrate all of the factors and features that can potentially influence livestock farmers' decisions. Other criteria have been identified as likely to influence field-use potential, such as the presence of obstacles (rocks, trees, streams, etc.) or the absence of a watering point [28,29], field shape [1,2,28], and accessibility and soil water-holding capacity [5,6,10,13,26,27]. Work overload, the arduousness of farm work, or the desire for a better work-life balance can all prompt livestock farmers to simplify their herd management practices (in terms of diet, reproduction strategy, milking schedules, etc.) or to rationalise their work schedules (to align with more normal working-week hours) [12,30]. Some of these choices could shift the field-use rules, for instance making it possible for milked cows to graze more distant fields as part of a system that is less milking-driven (grouped calvings over a short period of time and once-a-day milking in late lactation) [31].
The growing demographic of women heading livestock operations (in 2016, 27% of farm managers or co-managers and associates were women, against 8% in 1970 [32]) and of farmers' spouses working off-farm (50% of cases in 2010 against 40% in 1997 [4,33]) is driving a decisive shift to 'normalise' on-farm work hours and simplify task organisation accordingly [30]. Looking to a wider scope, the bundles of ecosystem services provisioned by livestock farmers, the plurality of users and uses of pastoral farming spaces, dealing with nuisance to neighbours, adhering to codes of environmental protection practice, accommodating footpaths and rights of way, rural tourism (hiking trails, farm stays) and even traditional local practices can also shape field-use choices [34-37].

Conclusions
This study captured objectified field use-field geography relationships based on significant differences between the identified field-use classes (seven grazed-only field classes, six grazed-and-cut field classes, and three cut-only field classes) and between the associated geographical characteristics. We adopted a novel approach based on extensive field-research data (100 farmers, 2341 fields), working at the level of the field and its uses throughout the entire grazing season, at the scale of a population of working farm operations, without integrating the individual logic applied by each livestock farmer, and then associating the geographical characteristics of the fields. We identified slope, area and distance to the farmstead as the geographical factors most determinant in cattle farmers' decisions on how to use their fields, along with various distance-area combinations associated with specific field uses. Furthermore, we ascertained the logics underpinning the way farmers associate a field use with a set of geographical field characteristics, even though our survey did not explicitly question the farmers on this point. Our approach and design have thus produced robust results that can be considered generic at the scale of dairy or mixed-purpose (dairy and suckling system) cattle farms in the pasture-based uplands of the Massif central (France), and even in the mid-mountain area more broadly.
Author Contributions: Conceptualisation, methodology, validation, writing-original draft preparation: C.S. and G.B. Both authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Ethical review and approval were waived for this study because the cow herds of the farmers interviewed were not sampled, measured or manipulated in any way during the two studies covered by this publication.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Effect of Heat Input on Microstructure and Corrosion Resistance of X80 Laser Welded Joints: Using fiber laser welding technology, X80 pipeline steel welded joints with different welding heat inputs were obtained, and their microstructure, mechanical properties, and corrosion resistance (in NACEA solution saturated with hydrogen sulfide) were studied. The findings indicated that, with increasing heat input, the proportion of ferrite and the strength, elongation, and corrosion resistance increased within a certain range, while the combined proportion of martensite and bainite and the hardness decreased. The heat input has a greater effect on the microstructure of the weld metal (WM) and the coarse-grained heat-affected zone (CGHAZ), while that of the fine-grained heat-affected zone (FGHAZ) is basically unchanged. Obvious differences were also found in the corrosion resistance of the different regions of the welded joints, among which the FGHAZ has the strongest corrosion resistance, followed by the WM and the CGHAZ. The heat input mainly affects corrosion resistance through the microstructure type of the welded joint; therefore, based on this relationship, the heat input was modeled as a function of R_ct and i_corr. In addition, the corrosion product film produced by long-term immersion of the welded joint in the H2S-saturated NACEA solution can hinder the development of corrosion and enhance the corrosion resistance to a certain extent.

Introduction
The development of pipeline steel with high strength and toughness, driven by the increasing worldwide demand for oil and natural gas, has attracted considerable attention. As one of the high-strength low-alloy steels, X80 pipeline steel offers high strength and toughness and has been widely used in pipeline engineering around the world [1,2]. Welding is the most commonly used and effective way to make cylinder-shaped pipe in factories and to join onshore pipes together on-site. The conventional welding methods used for pipeline steels, which have been studied abundantly [3], are mainly submerged arc, gas metal arc, shielded metal arc, and electrical resistance welding. Laser welding is a comparatively new, high-efficiency and energy-saving process that has been in use since the early 1960s [4]. Laser welding, and hybrid processes such as fiber laser-MAG hybrid welding, have been applied to pipeline steels and show great application prospects owing to advantages such as flexibility, high energy density, and efficiency [5,6]. The heat input of laser welding can significantly affect the microstructure and mechanical properties of low-alloy high-strength steels [7-9], but relatively few reports have addressed laser welding of X80 pipeline steel. Corrosion has been one of the most important failure modes for pipelines, and the welded joint is more susceptible to corrosion because of the gradients in chemical composition, metallurgical microstructure, and residual stress among the weld metal (WM), fusion line, heat-affected zone (HAZ) and base metal (BM). The corrosion damage of pipeline steels in or near welded joints has been a research focus for decades [10,11]. Zhu and Xu found that the WM and BM were the most cathodic and anodic regions, respectively, through an assessment of the galvanic corrosion interactions of the different regions in a CO2-containing environment [12].
Wang and Liu found that microstructures of granular bainite mixed with ferrite, and of acicular ferrite, showed the lowest and highest corrosion resistance, respectively, in the X80 pipeline steel HAZ [13]. Ahmad et al. showed that grooving corrosion and rapid thinning are largely accelerated by welding defects, and that the corrosion rate can be reduced by alloying additions in the WM, such as Mo, Cr, V, and Mn [14]. Sajjad et al. found that proper heat treatment can promote the corrosion resistance of the HAZ and WM in a high-pH solution, which can be attributed to the formation of uniformly distributed polygonal ferrite and a decrease in the volume fraction of bainite [15]. Most previous corrosion tests were conducted in soil, aqueous solutions, sodium chloride solution, and seawater. In recent years, hydrogen sulfide corrosion inside pipelines has attracted much attention, as it can cause stress corrosion and hydrogen-induced cracking, seriously endangering pipeline service life [16,17]. Research on hydrogen sulfide corrosion has mainly focused on the base metal or traditional welded joints, and studies on laser welded joints of X80 pipeline steel are scant. Therefore, in this study, X80 pipeline steel welded joints with different welding heat inputs were obtained by laser welding, and their microstructure, mechanical properties, and corrosion resistance (in an H2S environment) were investigated. The effects of heat input and microstructure on corrosion resistance were also explored.

Materials and Solutions
The chemical composition of the X80 pipeline steel used in this study was (wt.%) 0.046C-0.305Si-1.76Mn-0.058Al-0.079Nb-0.008V-0.225Ni-0.023Cr-0.226Mo-0.015Ti-0.215Cu-0.00025B, with low concentrations of S (0.007%) and P (0.001%). The CE_pcm, adopted by the American Petroleum Institute, is used to specify the carbon equivalent (CE) limit for high-strength pipeline steel when the carbon mass fraction is less than 0.12%. The CE_pcm of the X80 pipeline steel used in this study was 0.17693%. The test solution was NACEA solution (Tianjin Kermel Chemical Reagent Co., Ltd., Tianjin, China), containing 5 wt.% NaCl and 0.5 wt.% CH3COOH, saturated with H2S, at pH 2.8. The temperature was maintained at 50 °C during the tests. Prior to testing, the solution was purged with N2 for 2 h, and the H2S flow was maintained for the test duration.

The Laser Welding Process
The samples were X80 pipeline steel plates of 200 mm × 100 mm × 26.4 mm. All sample surfaces were sanded, then cleaned with ethanol and dried to ensure identical experimental conditions and uniform surface preparation. Welding was carried out with a fiber laser (IPG-YLS-10000, IPG Photonics Corporation, Oxford, MA, USA) with a maximum power of 10 kW, an emission wavelength of 1070 nm, and a focused spot diameter of 0.2 mm. The heat input was varied by controlling the welding speed, under the premise of ensuring acceptable penetration and forming. A set of experiments was designed with five levels of welding heat input (2.86 to 6.67 kJ/cm) at a fixed power of 10 kW. To reduce experimental error, each level was repeated three times. The experimental parameters are shown in Table 1. During welding, 99.99% argon shielding gas with a flow rate of 15 L/min was used. The angle between the gas flow direction and the vertical middle line was 45°, and the five heat-input samples were processed in sequence perpendicular to the welding direction. A welding diagram is shown in Figure 1.
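As a numerical cross-check (our own sketch, not part of the published procedure): the quoted CE_pcm is reproduced by the Ito-Bessyo Pcm formula, and with the nominal heat input taken as E = P/v, the reported 2.86-6.67 kJ/cm range at 10 kW implies welding speeds of roughly 3.5 down to 1.5 cm/s:

```python
def pcm(C, Si, Mn, Cu, Cr, Ni, Mo, V, B):
    """Ito-Bessyo carbon equivalent (Pcm); all inputs in wt.%."""
    return C + Si/30 + (Mn + Cu + Cr)/20 + Ni/60 + Mo/15 + V/10 + 5*B

# Composition reported for the X80 steel in this study
print(round(pcm(C=0.046, Si=0.305, Mn=1.76, Cu=0.215, Cr=0.023,
                Ni=0.225, Mo=0.226, V=0.008, B=0.00025), 5))  # 0.17693

def heat_input_kj_per_cm(power_kw, speed_cm_per_s):
    """Nominal laser heat input E = P / v (process losses neglected)."""
    return power_kw / speed_cm_per_s

# At the fixed 10 kW power, each heat-input level maps to a welding speed
for E in (2.86, 6.67):
    v = 10 / E
    print(f"E = {E} kJ/cm -> v = {v:.2f} cm/s")
```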
Microstructure Observations and Mechanical Property Testing
After the welded specimens were etched with 4% nitric acid (4 mL nitric acid and 96 mL ethanol, Tianjin Kermel Chemical Reagent Co., Ltd., Tianjin, China), the microstructure was observed by optical microscope (OM, Shenzhen Senmeirui Technology Co., Ltd., Shenzhen, China) and scanning electron microscope (FSEM, JEOL-7800F, JEOL Ltd., Tokyo, Japan). To better capture the variation law of the microstructure, samples S1, S3, and S5 were selected for analysis, and the same selection applies to the subsequent experiments. Image Pro Plus 6.0 software (Media Cybernetics, Rockville, MD, USA) was used to count the proportion of constituents in each area of the welded joints with different welding heat inputs, according to the micromorphological characteristics of the different microstructures. Microhardness measurements (HXD-1000TMC, Xian Weixin Testing Equipment Co., Ltd., Xi'an, China) were performed under a loading force of 200 g and a loading time of 15 s; five locations were measured for each sample and averaged, and each sample was replicated three times. Tensile tests were performed in accordance with American Society for Testing and Materials (ASTM) E 8M-04 standards. The experiments were carried out on an electronic universal testing machine (Zwick-Z250, ZwickRoell GmbH & Co. KG, Ulm, Germany) with a sample size of 100 mm × 15 mm × 3 mm, and each test was repeated three times to ensure the accuracy of the results.
Electrochemical Measurements
Electrochemical polarization curves and electrochemical impedance spectroscopy are of great significance in metal corrosion research and have been widely applied. From their measurement results, information such as the corrosion rate, corrosion kinetics, and corrosion mechanism of metals can be analyzed and discussed. Therefore, in this paper, electrochemical techniques were used to study the corrosion resistance of the welded joints.
The test samples for the electrochemical experiments, 5 mm × 5 mm × 2 mm, were cut from the base metal (BM), weld metal (WM), coarse-grained heat-affected zone (CGHAZ), and fine-grained heat-affected zone (FGHAZ) of the welded joints, respectively. Prior to the electrochemical tests, the samples were ground up to 800 grit SiC paper, then soldered to copper wires, mounted in silica gel, rinsed with deionized water, degreased in acetone, cleaned ultrasonically in ethyl alcohol for 15 min, and air-dried. A three-electrode electrochemical cell driven by a Gamry Interface 1000 potentiostat (Gamry Instruments Consulting Co., Ltd., Shanghai, China) was employed, with the studied material as the working electrode, a platinum plate as the counter electrode, and a saturated calomel electrode (SCE, +0.241 V vs. SHE; Gamry Instruments Consulting Co., Ltd., Shanghai, China) as the reference electrode. The electrochemical experiments were started only after the open circuit potential (OCP) had become almost constant, to ensure stability and validity. Electrochemical impedance spectroscopy (EIS) tests were performed at the OCP from 0.01 Hz to 100 kHz with an amplitude of 10 mV. AC impedance spectra were measured after immersion for 0 h and 96 h, and the potentiodynamic polarization curves were obtained at a 0.5 mV/s sweep rate.
Microstructure Evolution
As shown in Figure 2, the cross sections of the welded joints presented similar sound profiles with a typical "goblet" shape, without cracks and of acceptable appearance, indicating the feasibility of the welding processes at all five welding speeds. The fusion line and the boundaries between the heat-affected zone (HAZ) and the BM (marked by dashed lines) can be observed clearly. The welding penetration and width increased linearly with increasing heat input, while their ratio decreased from about 1.9 to 1.3 (Figure 3).
In addition, a slight undercut was detected in the weld bead; its extent decreased or even disappeared as the heat input increased. However, when the heat input increased to 6.67 kJ/cm, a hole was detected in the center of the weld bead, which was attributed to metal evaporation and gravitational effects. The microstructures of the welded joints with three different heat inputs are shown in Figures 4-6. It can be seen that the microstructures undergo a series of transformations from the weld metal to the BM.
The microstructure of the as-received BM was polygonal ferrite (PF) with fine equiaxed grains, together with granular bainite (GB) decorated by martensite (M) plates and retained austenite (so-called M/A islands) as the secondary phase. These microstructure characteristics were formed by thermomechanically controlled processing (TMCP) [18]. The uniform distribution of the fine, mild PF and the hard GB gives the pipeline steel better performance, in particular high deformability.
In the WM, the types and sizes of the microstructure vary widely with heat input. With increasing heat input, the microstructure transforms from low-carbon martensite with a lath spacing of less than 1 µm to fine-grained acicular ferrite, and the bainite (B) changes from lath-like to granular. In addition, the M/A constituents change from chains to islands and become evenly distributed around the ferrite (F) grains.
In the CGHAZ, a similar granular bainite was obtained at all heat inputs, and coarse prior austenite grain boundaries (PAGB, marked by dashed lines) decorated by M/A islands were observed clearly. The bainitic ferrite matrix was lath-shaped, with the M/A islands dispersed along the boundaries of the ferrite laths. The ferrite lath width increased and the M/A islands changed from needle-like to granular as the heat input increased. In the FGHAZ, the type of microstructure did not change with heat input, consisting mainly of massive ferrite and granular bainite interspersed with M/A islands.
The microstructure ratios of the base metal and of each area of the welded joint at the different heat inputs were counted, and the results are shown in Figure 7. The proportions of ferrite and bainite in the base metal are 91.7% and 8.3%, respectively. Ferrite is the largest microstructural constituent in every region of the joint; at a heat input of 6.67 kJ/cm, the proportion of ferrite in the FGHAZ is as high as 87.9%. With increasing heat input, the proportion of ferrite in each region increases and the combined proportion of martensite and bainite decreases; the proportion of bainite in the WM and CGHAZ increases while that of martensite decreases, and the proportion of bainite in the FGHAZ decreases. The microstructure ratios also differ considerably between regions, with the largest proportion of ferrite in the FGHAZ and the smallest in the WM, while the combined proportion of martensite and bainite shows the opposite trend.
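The phase proportions in Figure 7 were obtained with Image Pro Plus. Purely as an illustration of that kind of area-fraction counting, the sketch below computes constituent fractions from a labeled phase mask; the integer labels and the synthetic mask are assumptions, not the paper's segmented micrographs.

```python
import numpy as np

# Hypothetical labeled phase mask: each pixel carries an integer phase label,
# as would result from manual/threshold segmentation of a micrograph.
# 0 = ferrite, 1 = bainite, 2 = martensite (label scheme is an assumption).
rng = np.random.default_rng(0)
mask = rng.choice([0, 1, 2], size=(512, 512), p=[0.85, 0.10, 0.05])

labels = {0: "ferrite", 1: "bainite", 2: "martensite"}
counts = np.bincount(mask.ravel(), minlength=len(labels))
fractions = counts / mask.size  # area fraction of each constituent

for k, name in labels.items():
    print(f"{name:10s}: {100 * fractions[k]:5.1f} %")
# The sum of the martensite and bainite fractions is the quantity tracked
# against heat input in Figure 7.
print(f"M + B     : {100 * (fractions[1] + fractions[2]):5.1f} %")
```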
Mechanical Properties of Welded Joint
The nominal stress-strain curves of the X80 pipeline steel and its laser welded joints at different welding rates are presented in Figure 8, and the resulting ultimate tensile strength, yield strength, and elongation are listed in Table 2. Almost no difference can be seen in the elastic region, while obvious strain-hardening behavior is observed for all samples. The laser welding thermal cycle is beneficial to the engineering strength, including the yield strength and the tensile strength, but detrimental to the ductility. Both the strength and the elongation increased as the heat input increased.
The microstructure variations occurring during welding are clearly and symmetrically reflected in the microhardness distribution; Figure 9 demonstrates some typical results. The hardness decreases gradually from the weld to the base metal, and the hardness in the weld area decreases gradually with increasing heat input.
Figure 10 shows the potentiodynamic polarization curves of each region of the welded joint with different welding heat inputs in NACEA solution saturated with hydrogen sulfide. The results showed that the anodic and cathodic processes were controlled by charge transfer, and no passivation occurred. The Tafel extrapolation method was used to obtain the corrosion potential (E_corr) and corrosion current density (i_corr); the fitting results are shown in Table 3 and Figure 11. The more positive the value of E_corr, the smaller the corrosion driving force; and the smaller the value of i_corr, the lower the corrosion rate of the material [19][20][21][22].
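Tafel extrapolation fits straight lines to the linear (Tafel) regions of the polarization curve in E-log|i| coordinates and reads E_corr and i_corr from their intersection. A minimal sketch of that procedure on synthetic Butler-Volmer data is given below; the kinetic parameters are invented for illustration, not fitted values from Table 3.

```python
import numpy as np

# Synthetic polarization curve from the Butler-Volmer relation.
# All kinetic parameters below are illustrative assumptions.
E_corr_true, i_corr_true = -0.650, 2.0e-5  # V, A/cm^2
beta_a, beta_c = 0.060, 0.120              # Tafel slopes, V/decade

E = np.linspace(E_corr_true - 0.25, E_corr_true + 0.25, 500)
eta = E - E_corr_true
i = i_corr_true * (10 ** (eta / beta_a) - 10 ** (-eta / beta_c))

# Fit log10|i| vs E in the Tafel regions (|eta| > 100 mV here).
anodic = eta > 0.10
cathodic = eta < -0.10
ba, aa = np.polyfit(E[anodic], np.log10(np.abs(i[anodic])), 1)
bc, ac = np.polyfit(E[cathodic], np.log10(np.abs(i[cathodic])), 1)

# Intersection of the two Tafel lines gives E_corr and i_corr.
E_corr = (ac - aa) / (ba - bc)
i_corr = 10 ** (aa + ba * E_corr)
print(f"E_corr = {E_corr:.3f} V, i_corr = {i_corr:.2e} A/cm^2")
```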
With increasing heat input, the E_corr of each region of the welded joint shifted in the positive direction; the E_corr of the WM was most affected by the welding heat input, rising from -677.6 mV to -627.8 mV, followed by the FGHAZ and CGHAZ. The i_corr of each region also trended downward with increasing heat input, and the i_corr of the FGHAZ was smaller than that of the base metal at a heat input of 6.67 kJ/cm. The E_corr of the different regions followed the same pattern at all heat inputs: the E_corr of the FGHAZ was the most positive, and that of the CGHAZ the most negative. An increase in heat input thus reduced the corrosion driving force and corrosion rate in all regions of the welded joint, similar to the findings of Huang et al. [21].
Figure 12 shows the electrochemical impedance spectroscopy results for each sample in a hydrogen sulfide-saturated NACEA solution. The Nyquist plots consist of capacitive reactance arcs in the first quadrant and inductive reactance arcs in the fourth quadrant, indicating that pitting corrosion occurs at this stage [23]. An R_s(C_dl R_ct(L R_L)) equivalent circuit was used to fit the results, in which R_s is the solution resistance, C_dl the double-layer capacitance, R_ct the charge transfer resistance, L the inductance, and R_L the inductive resistance. The fitting results are shown in Table 4 and Figure 13. With increasing heat input, the charge transfer resistance R_ct in each region of the welded joint trended upward; the R_ct of the FGHAZ increased from 216.2 Ω·cm² to 412.5 Ω·cm², and the increases for the CGHAZ and WM also exceeded 100 Ω·cm². Moreover, at every heat input, the R_ct values of the joint regions were ordered CGHAZ < WM < FGHAZ. The larger the value of R_ct, the higher the resistance of the sample and the better the corrosion resistance [24], consistent with the polarization results.
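One common realization of the R_s(C_dl R_ct(L R_L)) circuit places C_dl, R_ct, and the series L-R_L branch in parallel behind R_s; the low-frequency inductive branch is what produces the fourth-quadrant loop. The sketch below evaluates that impedance over frequency; the element values are illustrative assumptions, not the fitted values of Table 4.

```python
import numpy as np

# Illustrative element values (assumptions, not Table 4 fits).
Rs, Rct = 5.0, 300.0        # ohm*cm^2
Cdl = 2.0e-4                # F/cm^2
L, RL = 800.0, 150.0        # H*cm^2, ohm*cm^2

f = np.logspace(-2, 5, 400)           # 0.01 Hz .. 100 kHz, as in the tests
w = 2 * np.pi * f

# Rs in series with the parallel combination of Cdl, Rct and (RL + jwL).
Y_parallel = 1j * w * Cdl + 1 / Rct + 1 / (RL + 1j * w * L)
Z = Rs + 1 / Y_parallel

# Low-frequency limit is Rs + Rct || RL: with an inductive loop present, the
# polarization resistance read off the Nyquist plot is smaller than Rct.
print("Z(f->0) =", Rs + 1 / (1 / Rct + 1 / RL))
# Points with Im(Z) > 0 fall in the fourth quadrant of the -Im(Z) Nyquist
# plot, i.e. the inductive loop associated with pitting.
print("inductive points:", int(np.sum(Z.imag > 0)))
```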
Figure 13. Fitting results of R_ct extracted from the impedance spectra of the different samples.
Figure 14 shows the electrochemical impedance spectroscopy results of the welded joints with different heat inputs after immersion in NACEA solution saturated with hydrogen sulfide for 96 h. The Nyquist diagrams were composed of capacitive reactance arcs in the first quadrant, without inductive reactance arcs. An R_s(Q_f(R_ct(C_dl R_f))) equivalent circuit was used to fit the results, in which R_s is the solution resistance, C_dl the double-layer capacitance, Q_f the constant phase element of the corrosion product film, R_ct the charge transfer resistance, and R_f the resistance of the corrosion product film. The fitting results are shown in Table 5 and Figure 15. The effect of heat input on R_f and R_ct was the same as before immersion, and the order of corrosion resistance among the regions did not change. However, R_ct was elevated for every sample compared with the unimmersed state, indicating an increase in corrosion resistance, which was attributed to the film of sulfide corrosion products that developed on the sample surfaces [25,26].
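The paper reports the R_ct trends directly; if one wants an approximate corrosion rate from them, the Stern-Geary relation i_corr = B/R_ct is the standard conversion. This step is not taken in the paper itself, and the Tafel slopes (hence B) below are assumptions; only the R_ct values are the FGHAZ fits quoted above.

```python
# Approximate corrosion current density from charge transfer resistance via
# the Stern-Geary relation: i_corr = B / R_ct, B = ba*bc / (2.303*(ba + bc)).
# Tafel slopes are assumed values; R_ct values are the FGHAZ fits quoted
# in the text (before immersion, lowest and highest heat input).

ba, bc = 0.060, 0.120                       # V/decade, assumed
B = ba * bc / (2.303 * (ba + bc))           # ~0.017 V

for label, rct in [("FGHAZ, 2.86 kJ/cm", 216.2), ("FGHAZ, 6.67 kJ/cm", 412.5)]:
    i_corr = B / rct                        # A/cm^2, since R_ct is in ohm*cm^2
    print(f"{label}: i_corr ~ {1e6 * i_corr:.0f} uA/cm^2")
# Larger R_ct -> smaller i_corr, matching the qualitative conclusion above.
```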
Effect of Heat Input on Microstructure
The defining feature of laser welding is rapid heating and cooling during the thermal cycle of the welded joint. Because the different regions lie at different distances from the heat source, they are affected by the thermal cycle to different degrees and follow different phase transformation paths, so the type, content, and size of the resulting microstructures differ [27][28][29]. During welding, austenitic transformation occurs due to the heat input, after which the joint cools in air at a rate that depends on the heat input; different cooling rates lead to different microstructures [30,31]. The peak temperature of the WM is above the liquidus, so the metal in this region undergoes heating, phase transition, melting, solidification, and solid-state transformation. When the heat input was small, the cooling rate was fast, the residence time at high temperature was very short, and the carbon atoms had no time to diffuse, so the structure transformed directly by shear into lath martensite [32]. As the heat input increased, the cooling rate decreased, the stability of the austenite decreased, and the carbon atoms had sufficient time to diffuse, resulting in an increase in ferrite and bainite.
The CGHAZ is closer to the weld, so its peak temperature during the welding thermal cycle is higher and the austenite grains grow sharply; subsequent rapid cooling then produces coarse martensite and ferrite, leaving an extremely inhomogeneous microstructure. With increasing heat input, the cooling rate decreased, the driving force for ferrite nucleation decreased, and lath-like bainite finally formed [33]. The FGHAZ experiences a lower peak temperature, equivalent to a normalizing heat treatment: after phase transformation and recrystallization, the austenite mostly transformed into ferrite. Since ferrite formation rejects carbon, the last austenite to transform had a higher carbon content and formed granular bainite, and the microstructure was uniformly refined [34].
Effect of Heat Input on Mechanical Properties
The heat input affects the mechanical properties mainly through the type and morphology of the microstructure. Generally, the toughness and plasticity of ferrite are higher than those of bainite and martensite, while martensite has the highest hardness, followed by bainite and ferrite [35][36][37]. Grain refinement also contributes substantially to the strength of the material [38,39]. The base metal consists mainly of ferrite, so its hardness is the lowest and its elongation the highest. With increasing heat input, the coarse martensite in the welded joint gradually decreases and disappears, the ferrite content increases, and the grains gradually become finer; therefore, with increasing heat input, the strength and the elongation increase. Compared with the other regions, the WM contains a higher proportion of martensite and bainite, so its hardness is the highest; as the heat input increases, the martensite and bainite contents decrease, and the hardness decreases as well [40]. The FGHAZ is mainly composed of ferrite and bainite, quite different from the martensitic structure of the CGHAZ, so its hardness is lower.
Effect of Heat Input on Corrosion Resistance
X80 pipeline steel is a low-carbon, low-alloy steel, and its corrosion resistance is strongly affected by the microstructure: the type, proportion, and grain size of the constituents all play a crucial role [41,42]. A high welding heat input increases the homogeneity of the welded joint structure, raises the ferrite content, and reduces the martensite and bainite contents, thereby improving corrosion resistance [42]. The different regions of the welded joint also have different corrosion resistance owing to their different microstructures [43]. The microstructural differences give each area a different E_corr, so a galvanic couple forms at the junctions between areas and the area with the more negative E_corr is preferentially corroded. The value of E_corr is related to the Kelvin potential: the larger the Kelvin potential, the larger the E_corr. The relationship between them is shown in Equation (1),
where W_ref represents the work function of the reference electrode, E_ref the half-cell potential of the reference electrode, F the Faraday constant, and ϕ the Kelvin potential. Ferrite has a higher Kelvin potential, followed by bainite and martensite [44]. The WM and CGHAZ contain more martensite and bainite, but the CGHAZ has the worst corrosion resistance, because its coarse grains and irregular structure aggravate the corrosion tendency [45][46][47]. The FGHAZ is mainly composed of ferrite and granular bainite; its E_corr tends to be positive, its grain size is fine, its structure is uniform, and its corrosion resistance is good.
To sum up, different welding heat inputs change the proportions of the microstructural constituents in each area of the welded joint, and these microstructural differences lead to differences in corrosion rate. The relationships between the ferrite proportion and the heat input, between i_corr and the ferrite proportion, and between R_ct and the ferrite proportion were fitted linearly to obtain their functional models; the fitting results are shown in Figure 16a-c. Combining these fits gives functional models of i_corr and R_ct as functions of the heat input, shown in Figure 16d,e. The magnitudes of the slopes in the two figures clearly show that the welding heat input has the greatest effect on the corrosion rate in the CGHAZ region, followed by the WM and FGHAZ. Similarly, a correlation model between the heat input and the combined proportion of martensite and bainite can be obtained.
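The models in Figure 16d,e follow from composing two linear fits: ferrite fraction as a function of heat input, and i_corr (or R_ct) as a function of ferrite fraction. A minimal sketch of that composition is given below; the data arrays are invented placeholders, not the measured values behind Figure 16.

```python
import numpy as np

# Placeholder data (assumptions, not the paper's measurements):
# heat input E (kJ/cm), ferrite area fraction f (%), and i_corr (uA/cm^2)
E = np.array([2.86, 3.70, 4.55, 5.56, 6.67])
f = np.array([55.0, 61.0, 66.0, 72.0, 78.0])
i_corr = np.array([95.0, 88.0, 80.0, 71.0, 63.0])

# Step 1: ferrite fraction vs heat input, f = a1*E + b1
a1, b1 = np.polyfit(E, f, 1)
# Step 2: corrosion current vs ferrite fraction, i = a2*f + b2
a2, b2 = np.polyfit(f, i_corr, 1)

# Composition gives i_corr directly as a linear function of heat input:
# i = a2*(a1*E + b1) + b2 = (a2*a1)*E + (a2*b1 + b2)
slope, intercept = a2 * a1, a2 * b1 + b2
print(f"i_corr(E) ~ {slope:.2f}*E + {intercept:.1f}  (uA/cm^2, E in kJ/cm)")
# The same two-step composition applied per region (WM, CGHAZ, FGHAZ)
# yields the slopes compared in Figure 16d,e.
```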
Figure 17 is a schematic diagram of the reactions of the welded joint immersed in H2S-saturated NACEA solution; the reactions occurring during immersion can be divided into anodic, cathodic, and other reactions [25,26].
Figure 17. Schematic of the formation process of the sulfide film on the X80 pipeline steel surface after immersion in NACEA solution with saturated H2S gas at 50 °C.
At the start of immersion (0 h), the anodic reaction dissolved the steel surface, producing Fe2+ that was released into the corrosion solution and reducing the thickness of the steel [41,48]. As the reaction progressed, the formation rate of the corrosion product mackinawite increased, owing to the increase of Fe2+ and S2− in the electrolyte. However, because the rate of this synthesis reaction was much smaller than that of the anodic reaction, the corrosion products did not cover the substrate surface and were loose and easily detached [17,49,50]. At the same time, H in the solution may also lead to hydrogen permeation, the process in which hydrogen atoms adsorbed on the outer surface of a metal (H_ads) enter the metal and become absorbed on the inner surface (H_abs). Some of the H_ads produced by the cathodic reaction is chemically or electrochemically bound, remains adsorbed on the metal surface, or leaves as H2; part of it enters the metal to become H_abs. Some H_abs accumulates inside the metal or in hydrogen traps such as vacancies, dislocations, grain boundaries, and phase interfaces, embrittling the metal and exacerbating corrosion [51,52]. The presence of Cl− in the solution also promoted the anodic reaction and the formation of pits, which is why the inductive arc appeared in Figure 12. During the reaction, these pits act as anodes, attracting a gradual accumulation of S2−, HS−, and Cl− [25].
As the immersion time lengthened, the synthesis of corrosion products accelerated, the products gradually accumulated, and two interfaces, the Fe/FeS interface and the FeS/solution interface, gradually formed [53,54]. The corrosion product at this stage is a mixture of mackinawite and cubic ferrous sulfide [25,50]. Some of the Fe2+ produced by the dissolution of Fe reacted with ferrous sulfide, hydrogen sulfide, HS−, and S2− to form a new sulfide film, relatively rich in iron and low in S, at the Fe/FeS interface.
At this stage, the corrosion products are relatively dense. In addition, other Fe2+ ions were released into the solution through the sulfide film and reacted with hydrogen sulfide, HS−, and S2−, so that corrosion products continuously coated the steel surface [55]. The sulfide film itself also reacted with hydrogen sulfide, HS−, and S2− at the FeS/solution interface; the thickness of the sulfide film therefore increases at this interface, leading to an increase in R_f [25]. At the same time, the film hinders corrosive species such as Cl− and H from reaching the substrate. The S2−, HS−, and Fe2+ accumulating in the pits reacted to precipitate ferrous sulfide on the pit walls, finally filling the pits [25,30]. Therefore, after 96 h of immersion, a dense corrosion product film had formed on the substrate surface, which explains the difference from the 0 h behavior, the increased R_ct, and the enhanced corrosion resistance.
Conclusions
In this study, X80 pipeline steel welded joints with different heat inputs were obtained by laser welding, and their microstructure, mechanical properties, and corrosion resistance (in saturated-H2S NACEA solution) were analyzed. The following conclusions were obtained.
1. With increasing heat input, the proportion of ferrite in each area of the welded joint gradually increased and the combined proportion of martensite and bainite decreased; the proportion of bainite in the WM and CGHAZ increased while that of martensite decreased, and the microstructure type of the FGHAZ changed very little.
2. With increasing heat input, the strength and elongation increased, and the hardness of the weld center decreased.
3. The electrochemical results show that the corrosion resistance of the welded joints increased with increasing heat input, and that the FGHAZ is the least easily corroded area of the welded joint, followed by the WM and CGHAZ. Functional models relating the heat input to the i_corr and R_ct values were established. In addition, the corrosion product film produced by long-term immersion of the welded joint in the saturated-H2S NACEA solution can hinder the development of corrosion and enhance the corrosion resistance to a certain extent.
2022-10-13T15:43:25.100Z
2022-09-30T00:00:00.000
{ "year": 2022, "sha1": "27ca5b64fbc9b80ad423c29027201f8c1fa79bab", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4701/12/10/1654/pdf?version=1666596158", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "99c90afd955cf3be95d54c7007b02dc9f3836db7", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
125769414
pes2o/s2orc
v3-fos-license
Electro-osmotic consolidation of soil with variable compressibility, hydraulic conductivity and electro-osmosis conductivity : In the present study, the non-linear variations of soil compressibility, hydraulic conductivity and electro-osmosis conductivity were analyzed through laboratory experiments and incorporated in a one-dimensional model. The analytical solutions for excess pore water pressure and degree of consolidation were derived, and numerical simulations were performed to verify their effectiveness. The results indicated that the non-linear variations of both the hydraulic and electro-osmosis conductivities have remarkable impacts on the excess pore water pressure and degree of consolidation, especially for soils with relatively high compressibility. A further comparison with previous analytical solutions indicated that more accurate predictions can be obtained with the proposed analytical solutions.
Introduction
Over the past few decades, there has been substantial development of infrastructure on soft foundations worldwide. Various treatment methods have been proposed for the improvement of these soft foundations, among which electro-osmotic consolidation has proven to be a promising method, especially for soils with low permeability [1][2][3][4][5][6][7][8][9][10]. Unlike traditional methods such as surcharge and vacuum preloading, which dewater the soil mass by applying a hydraulic gradient, electro-osmotic consolidation involves pairs of anodes and cathodes installed in the soil mass, through which an electrical field is applied and pore water is driven from the anode to the cathode under the electrical gradient. Similar to Darcy's law, the velocity of pore water flow v_e caused by the electrical gradient can be expressed as k_e·i_e, where i_e is the electrical gradient and k_e is the electro-osmosis conductivity, which describes the velocity of pore water under a unit electrical gradient. For different soils, the hydraulic conductivity k_h may range from about 1 × 10⁻⁸ cm/s in clay to about 1 × 10⁻⁴ cm/s in sand, while k_e is generally in the range of 1 × 10⁻⁵ to 1 × 10⁻⁴ cm²/(V·s). As a result, a small electrical gradient can balance the flow caused by a large hydraulic gradient in soft soil with low permeability, and electro-osmosis can be much more efficient than the traditional techniques for soft soil improvement [5,[11][12][13]].
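The efficiency argument above can be made concrete with a quick calculation: the hydraulic gradient that would produce the same flow velocity as a modest electrical gradient is k_e·i_e/k_h. The sketch below uses representative values from the ranges quoted above; the specific numbers are illustrative assumptions, not measured values.

```python
# Compare electro-osmotic and hydraulic flow velocities in a soft clay.
# Representative values from the ranges quoted above (illustrative only).
k_h = 1e-8    # hydraulic conductivity, cm/s (soft clay)
k_e = 5e-5    # electro-osmosis conductivity, cm^2/(V*s)
i_e = 0.5     # electrical gradient, V/cm

v_e = k_e * i_e          # electro-osmotic flow velocity, cm/s
i_h_equiv = v_e / k_h    # hydraulic gradient giving the same velocity

print(f"v_e = {v_e:.1e} cm/s")
print(f"equivalent hydraulic gradient = {i_h_equiv:.0f}")
# ~2500 here: a hydraulic gradient far beyond what preloading can apply,
# which is why electro-osmosis is attractive in low-permeability soils.
```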
Based on the assumption that the pore water flows resulting from the hydraulic gradient and the electrical gradient can be linearly superimposed, the governing equation for electro-osmotic consolidation was developed, and many analytical solutions were derived under different conditions to analyze the development of pore water pressure [14][15][16][17][18][19][20]. Esrig [14] developed a one-dimensional (1D) model for electro-osmotic consolidation and obtained analytical solutions for pore water pressure and degree of consolidation considering a permeable cathode and an impermeable anode. Wan and Mitchell [15] further coupled electro-osmotic consolidation with surcharge preloading in a 1D model. Shang [16] and Xu et al. [21] proposed 2D models in the vertical plane to account for the combined action of electro-osmosis with surcharge and vacuum preloading. Su and Wang [22] presented a 2D model in the horizontal plane and derived analytical solutions under different boundary conditions. Li et al. [17] analyzed the average pore water pressure in soils subjected to an axisymmetric electrical field. Wu and Hu [19] developed an axisymmetric model with coupled horizontal and vertical seepage and derived the analytical solution without the equal strain hypothesis. These mathematical analyses have generated significant knowledge pertaining to electro-osmotic consolidation and provided useful formulas for engineering design. However, the electrical and mechanical properties of the soil are assumed constant in the derivation of these analytical solutions. In fact, the flow of pore water from anode to cathode during electro-osmosis decreases the water content and void ratio of the treated soil and leads to non-linear variations in soil properties such as compressibility, hydraulic conductivity and electro-osmosis conductivity [1,4,5,7,13,[23][24][25][26][27][28]]. Such variations inevitably affect the development of pore water pressure during electro-osmotic consolidation, and the predictions from the existing analytical solutions with constant soil properties would be inaccurate.
Although ignored in the analytical solutions for electro-osmotic consolidation, the non-linear variations of soil compressibility and hydraulic conductivity have already been investigated in many consolidation theories [29][30][31][32][33][34][35][36][37][38]. Davis and Raymond [29] derived the analytical solution for pore water pressure under the assumptions of non-linear compressibility and a constant coefficient of consolidation. Poskitt [31] further coupled the relationships between void ratio (e) and effective stress (σ′), and between e and hydraulic conductivity (k_h), into a vertical consolidation model. Lekha et al. [33] presented closed-form analytical solutions for the pore water pressure and degree of consolidation for the particular cases of e-log(σ′) and e-log(k_h) responses. In these studies, the e-log(σ′) and e-log(k_h) relationships were developed and incorporated to account for the non-linear variations of soil compressibility and permeability.
Compared to the traditional consolidation problem, electro-osmotic consolidation involves not only the non-linear variations of soil compressibility and permeability but also the change in electro-osmosis conductivity. In this study, a series of experiments were performed to investigate the variations of hydraulic and electro-osmosis conductivities during electro-osmotic consolidation. Afterwards, the relationships between the hydraulic conductivity, electro-osmosis conductivity and void ratio were developed based on the experimental results and incorporated, together with the conventional e-log(σ′) response, into a 1D model for electro-osmotic consolidation. The analytical solutions for excess pore water pressure and degree of consolidation were derived and compared with those from Wan and Mitchell (1976) to investigate the effects of the non-linear variations of soil properties.
Experimental study
A kaolinite from Jiangsu Province, China was used to conduct the permeability and electro-osmosis tests. The basic properties and chemical composition of the kaolinite are listed in Table 1. The as-received kaolinite was first oven dried, then mixed with water at a water content of 10% and compacted into the test devices in five layers according to the pre-determined void ratios in the range of 1.373-0.919, and finally saturated under a vacuum.
The hydraulic conductivities of the kaolinite samples were measured with the falling head permeability test, and the electro-osmosis conductivities were measured using a self-designed apparatus 90 mm in diameter and 400 mm in height, as shown in Fig. 1. The anode platen was placed at the bottom of the kaolinite sample and the porous cathode platen on the top, allowing drainage of pore water into a graduated cylinder. In order to eliminate the effect of the hydraulic gradient, the bottom of the kaolinite sample was connected to a water reservoir with a water level as high as the top surface of the kaolinite sample. After the saturation of the kaolinite sample and the connection to the water reservoir, the whole system was allowed to stand for at least 2 days to reach equilibrium. Afterwards, an electrical field was applied and the electro-osmosis conductivity was calculated with the following equation:

k_e = qL/(A·t·V0)    (1)

where q is the volume of water discharged due to electro-osmosis; L and A are the length and cross-sectional area of the soil sample; t is the time period; and V0 is the applied voltage.
Nomenclature (symbols used in the derivation below): a, a0 - coefficient of compressibility and its initial value; b - calculating factor related to C_c, M and N; A - sectional area of the soil sample; C_c - compression index; C_v0 - initial coefficient of consolidation; C0 - calculating factor for the ultimate excess pore water pressure; e, e0 - void ratio and initial void ratio; G_n - calculating factors; H - height of the analytical model; I, J - calculating factors; k_e, k_e0 - electro-osmosis conductivity and its initial value; k_h, k_h0 - hydraulic conductivity and its initial value; L - length of the soil sample; M, N - factors describing the change in hydraulic and electro-osmosis conductivities resulting from the change in void ratio; p0 - surcharge preloading; q - volume of the discharged water due to electro-osmosis; Q - defined variable related to the excess pore water pressure.
Fig. 2 shows the hydraulic and electro-osmosis conductivities of the kaolinite. With the decrease in void ratio, both k_h and k_e decrease, and their relationships with void ratio can be expressed by Eqs. (2) and (3). Previous studies have investigated the variation of void ratio during the consolidation process [29][30][31],[33], and the results indicated that the relationship between void ratio and effective stress can be written as

e = e0 − C_c·log(σ′/σ′0)    (4)

in which e0 is the initial void ratio, σ′0 is the initial effective stress, and C_c is the compression index. Similar to Eq. (4), the general forms of the e-log(k_h) and e-log(k_e) responses can be obtained according to Eqs. (2) and (3):

e = e0 + M·log(k_h/k_h0)    (5)
e = e0 + N·log(k_e/k_e0)    (6)

in which k_h0 and k_e0 are the initial hydraulic and electro-osmosis conductivities corresponding to e0, and M and N are factors that reflect the changes in hydraulic and electro-osmosis conductivities resulting from the change in void ratio. Eqs. (4)-(6) together describe the non-linear variations of soil compressibility, hydraulic conductivity and electro-osmosis conductivity during the consolidation process.
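Eq. (1) turns a measured discharge volume into an electro-osmosis conductivity. A minimal sketch of that conversion for the apparatus described above is given below; the discharge volume, test duration, and applied voltage are hypothetical inputs, not measured values from the paper.

```python
import math

# Electro-osmosis conductivity from Eq. (1): k_e = q*L / (A*t*V0).
# Apparatus geometry from the text; q, t and V0 are hypothetical inputs.
L = 40.0                         # sample length, cm (400 mm)
A = math.pi * (9.0 / 2) ** 2     # cross-sectional area, cm^2 (90 mm diameter)
q = 50.0                         # discharged water volume, cm^3 (assumed)
t = 24 * 3600.0                  # test duration, s (assumed 24 h)
V0 = 40.0                        # applied voltage, V (assumed)

k_e = q * L / (A * t * V0)       # cm^2/(V*s)
print(f"k_e = {k_e:.2e} cm^2/(V*s)")  # ~9e-6, within the usual clay range
```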
Theoretical analysis
Similar to previous studies, a schematic diagram of the 1D model for electro-osmotic consolidation is developed as shown in Fig. 3 (diagram of the one-dimensional analytical model for electro-osmotic consolidation), with the anode on the bottom and the cathode on the top [1,[12][13],18]. The bottom boundary is impermeable and the top boundary is permeable, and a surcharge preloading p0 is applied on the top boundary of the model. The following assumptions are made to develop the analytical model for electro-osmotic consolidation.
(1) The soil is homogeneous and fully saturated, and the pore water and soil grains are incompressible.
(2) Both the drainage of pore water and the compression of the soil layer occur in the vertical direction.
(3) The velocity of pore water flow due to electro-osmosis is directly proportional to the electrical gradient and can be linearly superimposed with that due to the hydraulic gradient.
(4) The relationships between e and σ′, k_h and k_e in Eqs. (4)-(6) hold.
(5) The loading is instantaneously applied, and the small strain hypothesis is adopted.
(6) The pore water flow caused by thermal and chemical concentration gradients is neglected.
The combined pore water flow during electro-osmotic consolidation can be described as [7,14,15,19]

v_z = −(k_h/γ_w)·(∂u/∂z) − k_e·(∂V/∂z)    (7)

where v_z is the pore water flow velocity in the vertical direction, γ_w is the unit weight of water, u is the excess pore water pressure, and V is the voltage. Combining Eq. (7) with the conservation of pore water in a saturated soil-water system (Eq. (8)), and making use of the substitutions W = u/p0 and Z = z/H (H is the height of the model), Eq. (9) can be obtained, where a denotes the coefficient of compressibility. Eq. (9) is further simplified to Eq. (10), where a0 is the initial coefficient of compressibility, T_v is the time factor equal to C_v0·t/H², and C_v0 is the initial coefficient of consolidation. According to Lekha et al. [33], the non-linear variations of soil compressibility, hydraulic conductivity and electro-osmosis conductivity can be rewritten as Eqs. (11)-(13), where b is defined as (p0/σ′0)/(1 + p0/σ′0). Substituting Eqs. (11)-(13) into Eq. (10) gives Eq. (14); defining a new parameter Q as in Eq. (15), Eq. (14) is simplified to Eq. (16).
According to the description of the model (Fig. 3), the boundary and initial conditions of the problem are given in Eq. (17). Eq. (16) is non-linear in Q and therefore does not have a general solution with the boundary and initial conditions in Eq. (17). In order to solve Q from Eq. (16), the coefficient terms on the right-hand side of Eq. (16) are replaced by their weighted average value. Specifically, when t = 0, Q = (1 + p0/σ′0)^(Cc/M − Cc/N − 1) − C0·Z; when t tends to infinity, the pore water flow caused by electro-osmosis from the anode to the cathode is exactly balanced by that caused by the hydraulic gradient from the cathode to the anode, and Eqs. (18) and (19) follow. Considering the top boundary condition, Q equals 1 when t tends to infinity. A weighted average value for (Q + C0·Z) is then assumed as in Eq. (20), where ε is the weighting factor. Substituting Eq. (20) into Eq. (16) gives Eq. (21). Combining Eq. (21) with Eq. (17), Q and u can be solved as Eqs. (22) and (23) (details are given in Appendix A), and the normalized solution takes the form

W = Σ_n G_n·e^(−k_n·T_v)·e^(−hZ/2)·sin(n_n·Z/2) + 1

where n_n is a root of tan(n/2) = n/h, k_n = (n_n² + h²)/4, and G_n is given by Eq. (24). The degree of consolidation U can then be calculated from Eq. (25). The integral in Eq. (25) depends strongly on the value of 1/(1 + C_c/N − C_c/M) and cannot be expressed in elementary functions; therefore, the Newton-Cotes formula is used to estimate its value.
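The roots n_n of the transcendental equation tan(n/2) = n/h have to be found numerically before the series solution and the Newton-Cotes estimate of U can be evaluated. The sketch below finds the first few roots with a bracketing solver and applies composite Simpson's rule (one of the closed Newton-Cotes formulas) to a sample profile; the value of h and the profile W(Z) are illustrative assumptions, and the coefficients G_n of Eq. (24) are omitted here.

```python
import numpy as np
from scipy.optimize import brentq

h = 2.0  # illustrative value of the parameter h in tan(n/2) = n/h

def f(x):
    # Root function for the characteristic equation tan(x/2) = x/h.
    return np.tan(x / 2) - x / h

# tan(x/2) has singularities at x = (2k+1)*pi, and a root of f lies in
# each interval ((2k-1)*pi, (2k+1)*pi) for k >= 1.
roots, eps = [], 1e-6
for k in range(1, 6):
    a, b = (2 * k - 1) * np.pi + eps, (2 * k + 1) * np.pi - eps
    roots.append(brentq(f, a, b))

k_n = [(r ** 2 + h ** 2) / 4 for r in roots]  # decay exponents of the series
for r, kn in zip(roots, k_n):
    print(f"n_n = {r:8.4f}, k_n = {kn:8.4f}")

# Composite Simpson's rule (a closed Newton-Cotes formula) for the depth
# integral needed in the degree of consolidation; W(Z) here is a placeholder.
Z = np.linspace(0.0, 1.0, 101)          # odd number of points for Simpson
W = 1 - np.exp(-3 * Z)                  # illustrative pore pressure profile
dz = Z[1] - Z[0]
simpson = dz / 3 * (W[0] + W[-1] + 4 * W[1:-1:2].sum() + 2 * W[2:-2:2].sum())
print(f"integral of W over Z ~ {simpson:.4f}")
```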
Discussion
Because of the approximation made for the coefficient terms in Eq. (16), the accuracy of the analytical solution is examined below against numerical simulations; the basic parameters adopted are listed in Table 2.
Effect of the weighted factor
In order to solve u from Eq. (16), a weighted average value for (Q + C0·Z) is introduced in Eq. (20). In fact, the term Q + C0·Z can be expressed as in Eq. (26). As analyzed above, when t = 0, u = p0 and Q + C0·Z = (1 + p0/σ′0)^(Cc/M − Cc/N − 1); when t approaches infinity, Q + C0·Z = 1 + C0·Z. Note that the value of the term (Q + C0·Z) is uniform along the vertical direction at the beginning and increases linearly from 1 at Z = 0 to 1 + C0 at Z = 1 at the end of electro-osmosis. The average value of this term is (1 + p0/σ′0)^(Cc/M − Cc/N − 1) at the beginning and 1 + 0.5C0 at the end of electro-osmosis, respectively, and (Q + C0·Z)_av in Eq. (20) is actually a weighted average value along the time scale. The weighting factor ε was assumed to be 0.5 in the consolidation theory of Lekha et al. [33]; however, most previous studies of electro-osmosis indicated that the variation of pore water pressure is highly non-linear, so a weighting factor of 0.5 may not be appropriate for electro-osmotic consolidation in this study. In order to study the change in (Q + C0·Z) during electro-osmotic consolidation and evaluate the value of ε, numerical simulations were performed considering the non-linear variations of soil properties (Eqs. (4)-(6)), and a sensitivity analysis was conducted to study the effect of different factors (initial hydraulic and electro-osmosis conductivities, voltage, surcharge preloading, initial effective stress, compression index, M and N) by changing one parameter at a time and keeping the others as in Table 2.
Table 2. Basic parameters for electro-osmotic consolidation in the analytical model: unit weight of water γ_w = 10 kN/m³; initial hydraulic conductivity k_h0 = 2 × 10⁻⁸ m/s; initial electro-osmosis conductivity k_e0 = 2 × 10⁻⁹ m²/(V·s); surcharge preloading p0 = 50 kPa; initial effective stress σ′0 = 10 kPa; initial void ratio e0 = 2.0; initial coefficient of compressibility a0 = 8.7 MPa⁻¹.
Fig. 4 shows the development of (Q + C0·Z) for different initial hydraulic conductivities. Since the value of (Q + C0·Z) increases rapidly at the beginning and gradually becomes stable, the value of ε should be smaller than 0.5 according to the area-equivalent principle. Taking the time for U = 99.0% as the end of electro-osmotic consolidation, the weighting factor ε can be estimated as 0.24, 0.29, 0.34, and 0.43 for k_h0 = 2 × 10⁻⁶, 2 × 10⁻⁷, 2 × 10⁻⁸, and 2 × 10⁻⁹ m/s, respectively. Fig. 5 further displays the estimated weighting factor for different soil parameters. The value of ε increases with the increase in k_e0 and the decrease in k_h0, and the impacts of the other factors are smaller than those of k_h0 and k_e0. According to Fig. 5, the value of ε is 0.33 when the soil parameters listed in Table 2 are adopted; for a soil with different initial hydraulic and electro-osmosis conductivities, the value of ε can be adjusted according to Fig. 5. It is worth noting that for most natural clays, k_e0 is generally in the range of 1 × 10⁻⁸ to 1 × 10⁻⁹ m²/(V·s), so the effect of k_e0 is small and the value of ε is mainly dominated by k_h0. For example, for a clay with a relatively large k_h0 (1 × 10⁻⁷ m/s), the value of ε is assessed to be about 0.30, while for a clay with a relatively small k_h0 (1 × 10⁻⁹ m/s), the value of ε is about 0.45. Therefore, for a natural clay subjected to electro-osmosis, the value of ε generally lies in this range, and ε = 0.33 gives the best agreement with the numerical results for the examined conditions. The value ε = 0.33 is therefore used in the following analysis.
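The area-equivalent estimate of ε used above can be written down directly: choose ε so that the two-point weighted average ε·f(0) + (1 − ε)·f(∞) matches the time average of (Q + C0·Z) over the consolidation period. The sketch below applies this to a synthetic exponential trajectory standing in for the numerical (Q + C0·Z) histories of Fig. 4; all numbers are illustrative assumptions.

```python
import numpy as np

# Synthetic stand-in for a numerical (Q + C0*Z)_av history: a rapid rise
# from f0 toward f_inf (the real histories come from FEM runs, Fig. 4).
f0, f_inf = 0.55, 1.0        # initial and ultimate values (illustrative)
T = 1.0                      # duration up to U = 99% (illustrative)
t = np.linspace(0.0, T, 1000)

for tau in (0.05, 0.15, 0.40):           # faster tau ~ larger k_h0
    f = f_inf + (f0 - f_inf) * np.exp(-t / tau)
    f_mean = np.trapz(f, t) / T          # time average over the period
    # Area equivalence: eps*f0 + (1 - eps)*f_inf = f_mean
    eps = (f_mean - f_inf) / (f0 - f_inf)
    print(f"tau = {tau:4.2f}  ->  eps = {eps:.2f}")
# Quickly stabilizing histories give small eps, slowly stabilizing ones
# give eps closer to 0.5, mirroring the trend with k_h0 reported above.
```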
Excess pore water pressure
According to Eqs. (5) and (6), the higher the values of M and N, the less sensitive k_h and k_e are to the change in void ratio; when M and N approach infinity, k_h and k_e are effectively constant during the consolidation process. Therefore, in order to examine the effect of the non-linear variation of k_h, the value of N is set to 100 while M varies, and M is set to 100 to investigate the effect of the non-linear variation of k_e.
Fig. 7 shows the comparison of the analytical solutions and the numerical results at different depths, with C_c = 0.2, N = 100 and M varying from 0.5 to 8.0. With the increase in M, the excess pore water pressure from Wan and Mitchell [15] remains constant, while the results from the analytical solution in this study and the numerical simulation decrease. For all the M values analyzed here, the excess pore water pressure calculated from the analytical solution in this study agrees well with that from the numerical simulation, while the result from Wan and Mitchell [15] is smaller, especially at the bottom of the model. According to previous theories of electro-osmotic consolidation, the effects of k_h and k_e on the development of excess pore water pressure are opposite: a larger k_e results in a larger negative excess pore water pressure (at constant k_h), while a larger k_h induces a smaller one (at constant k_e). Because k_h gradually decreases during electro-osmotic consolidation, the calculated value of the ultimate excess pore water pressure in this study is larger than that from Wan and Mitchell [15]. With the increase in M, the effect of the non-linear variation of the hydraulic conductivity decreases, and the difference between the excess pore water pressure calculated in this study and that from Wan and Mitchell [15] decreases.
Fig. 8 shows the effect of N. As N varies, the excess pore water pressure from Wan and Mitchell [15] remains constant, while the results from the presented analytical solution and the numerical simulation increase with the increase in N. For all the N values analyzed here, the results from the presented analytical solution agree well with those from the numerical simulation. Since a larger k_e leads to a larger excess pore water pressure, the value of the ultimate pore water pressure in this study is smaller than that from Wan and Mitchell [15]. With the increase in N, the effect of the non-linear variation of the electro-osmosis conductivity decreases and the calculated excess pore water pressure increases.
The change in the coefficient of consolidation C_v during electro-osmotic consolidation depends on the values of C_c and M. Lekha et al. [33] indicated that if C_c = M, C_v remains constant; if C_c > M, C_v decreases with the decrease in void ratio; and if C_c < M, C_v increases. For the cases examined here, C_c < M, and therefore C_v increases during the consolidation process. As a result, the development of the excess pore water pressure calculated from the present analytical solution is faster than that from Wan and Mitchell [15], as shown in Figs. 7 and 8.
Fig. 9 displays the effect of C_c with M = 2.0 and N = 8.0. Since the change in C_c leads to changes in C_v and T_v, t is used as the horizontal ordinate instead of T_v. With the increase in C_c, the soil compressibility increases and C_v decreases; therefore, the development of the excess pore water pressure becomes slower, both for the analytical solution in this study and for that from Wan and Mitchell [15].
When the soil properties are assumed constant, the value of the ultimate negative excess pore water pressure is independent of C_c, whereas when the nonlinear variations of the soil properties are considered, the ultimate negative excess pore water pressure increases with increasing C_c. With a higher C_c, the soil is more compressible and the changes in k_h and k_e during electro-osmotic consolidation are larger, which means that the effect of the nonlinear variations of the soil properties is more significant. According to Figs. 7 and 8, the nonlinear variation of k_h results in a larger ultimate excess pore water pressure, while the nonlinear variation of k_e leads to a smaller one. For the given values of k_h, k_e, M and N, the effect of the nonlinear variation of k_h is more remarkable than that of k_e. As a result, a larger ultimate excess pore water pressure is obtained with a higher C_c.

Degree of consolidation

The comparison of the degree of consolidation is displayed in Figs. 10-12, in which the effects of M, N and C_c are analyzed, respectively. Similar to the excess pore water pressure, with changes in M and N the degree of consolidation obtained from Wan and Mitchell [15] remains constant, while the results from the present analytical solution and the numerical simulation increase with increasing M and decrease with increasing N. As mentioned before, the decrease in k_h during the consolidation process is smaller for a larger M, so the ultimate excess pore water pressure is smaller and the coefficient of consolidation is larger; as a result, the degree of consolidation is larger for a larger M. Opposite to k_h, the ultimate excess pore water pressure is positively related to k_e and is therefore larger for a larger N, hence the degree of consolidation decreases with increasing N. According to the above analysis, the nonlinear variation of k_h influences not only the excess pore water pressure but also the coefficient of consolidation, while the nonlinear variation of k_e affects only the excess pore water pressure. Consequently, the impact of the nonlinear variation of k_h is larger than that of k_e, both for the excess pore water pressure and for the degree of consolidation, as shown in Figs. 7, 8, 10 and 11. The increase in C_v during the consolidation process is neglected in the theory of Wan and Mitchell [15], since the soil compressibility is assumed constant there; therefore the degree of consolidation calculated from Wan and Mitchell [15] is smaller than that from the present analytical solution and the numerical simulation (Figs. 10 and 11). The impact of C_c is further shown in Fig. 12. Compared to the results of Wan and Mitchell [15], the degree of consolidation obtained from the present analytical solution is larger, since C_v increases during the consolidation process when C_c < M. With increasing C_c, C_v decreases and therefore the degree of consolidation decreases. Figs. 7-9 illustrate that the distribution of the excess pore water pressure during electro-osmotic consolidation calculated from the present analytical solution agrees well with that from the numerical simulation, and Figs. 10-12 further indicate that the degree of consolidation can also be well predicted with the present analytical solution.
In the above analysis of the excess pore water pressure and the degree of consolidation, the values of M and N were varied from 0.5 to 100 in order to analyze the impact of the nonlinear variations of k_h and k_e, respectively. In fact, according to previous studies, k_e is generally less sensitive to changes in the void ratio than k_h [5,7,39,40]; the above-mentioned case of M < N is therefore the more common one in practice. Owing to the introduction of the weighted average value in Eq. (21), there is a slight deviation between the analytical solution and the numerical simulation, as illustrated by Fig. 6. However, it is worth noting that the ultimate excess pore water pressure reached after T_v = 1 is almost the same in the analytical solution and the numerical simulation, regardless of the value of e. Figs. 7-9 further demonstrate that although the excess pore water pressure (absolute value) obtained from the proposed analytical solution differs slightly from the numerical simulation, the ultimate excess pore water pressures calculated by the two methods agree quite well with each other. It appears that the ultimate excess pore water pressure calculated from Eq. (23) with t approaching infinity is not affected by the introduction of the weighted average value for (Q + C_0·Z). In fact, the solution for the ultimate excess pore water pressure can be derived directly from the equilibrium of the hydraulic flow and the electro-osmotic flow at the end of the consolidation process. At equilibrium, the pore water flow in the soil mass is zero, which yields Eq. (27), in which u_ult denotes the excess pore water pressure at the end of electro-osmotic consolidation. Making the substitutions W_ult = u_ult/p_0 and Z = z/H, and substituting Eqs. (12) and (13) into Eq. (28), Eq. (29) is obtained. Defining Q_ult appropriately (Eq. (30)), Eq. (29) can be rewritten as Eq. (31). Integrating Eq. (31) in the Z direction and applying the boundary condition that both the excess pore water pressure and the voltage are zero at the cathode, the solution for u_ult is obtained (Eq. (32)). This derivation indicates that the ultimate excess pore water pressure can be solved from Eq. (27) without any simplification or assumption. In fact, the solution for u_ult in Eq. (32) can also be obtained from the solution for u (Eq. (23)) by letting t approach infinity. According to Eq. (32), the ultimate excess pore water pressure (absolute value) decreases with increasing M and increases with increasing N (Figs. 7 and 8). The nonlinear variation of k_h leads to a larger u_ult, while the nonlinear variation of k_e results in a smaller one. When both M and N approach infinity, or when M = N, the effect of the nonlinear variations of k_h and k_e is eliminated or balanced, and the analytical solution for the ultimate excess pore water pressure reduces to the equation proposed by Esrig [14] and Wan and Mitchell [15].
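In the constant-property limit this classical result is well known; in the present notation, Esrig's expression for the ultimate excess pore water pressure is plausibly

```latex
u_{\mathrm{ult}}(z) = -\frac{k_{e0}}{k_{h0}}\,\gamma_w\,V(z),
```

so that the negative pore pressure is proportional to the local voltage V(z), vanishes at the cathode in accordance with the boundary condition stated above, and reaches its largest magnitude at the anode.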
Summary and conclusions

In this study, the nonlinear variations of soil compressibility, hydraulic conductivity and electro-osmosis conductivity during the consolidation process are analyzed through laboratory tests and incorporated into a 1D model for electro-osmotic consolidation. The analytical solutions for the excess pore water pressure and the degree of consolidation are derived and compared with the results of numerical simulations for verification. Both the analytical solutions and the numerical results are further compared with the previous solutions of Wan and Mitchell [15] to analyze the effect of the nonlinear variations of the soil properties. With decreasing void ratio, both the hydraulic conductivity and the electro-osmosis conductivity decrease, and linear relationships between their logarithms and the void ratio are found from the experimental results. Two empirical formulas are developed to account for the nonlinear variations of the hydraulic and electro-osmosis conductivities. The nonlinear variations of the soil properties show a remarkable impact on the development of the excess pore water pressure and the degree of consolidation during electro-osmotic consolidation. Specifically, the nonlinear variation of the hydraulic conductivity results in a larger excess pore water pressure, while the nonlinear variation of the electro-osmosis conductivity leads to a smaller one. The more sensitive the hydraulic and electro-osmosis conductivities are to changes in the void ratio, the more significant their impact. With increasing initial soil compressibility, the development of the excess pore water pressure becomes slower, and the impact of the nonlinear variation of the hydraulic and electro-osmosis conductivities becomes more remarkable, since the changes in the void ratio are larger for higher compressibility. The coefficient of consolidation is related to the soil compressibility and the hydraulic conductivity. When the decrease in soil compressibility outweighs the decrease in hydraulic conductivity, the coefficient of consolidation increases during the consolidation process, and the degree of consolidation calculated from the present analytical solution is therefore larger than that from previous solutions with constant soil properties. The present analytical results agree well with the numerical results, both for the excess pore water pressure and for the degree of consolidation. Compared to the previous analytical solutions, the newly proposed solutions give more accurate predictions and can be used to analyze the consolidation behavior of soil treated by electro-osmotic consolidation.
2019-04-22T13:04:32.844Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "43b7ee3bfd90f93de087deb656dce0eaa6361235", "oa_license": "CCBYNCSA", "oa_url": "http://dspace.imech.ac.cn/bitstream/311007/60905/1/JouArt-2017-103.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "a3847c1f420a18415c8a7cb8d7301507a4f2379e", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Materials Science" ] }
118479669
pes2o/s2orc
v3-fos-license
Role of nonlinear anisotropic damping in the magnetization dynamics of topological solitons

The consequences of nonlinear anisotropic damping, driven by the presence of Rashba spin-orbit coupling in thin ferromagnetic metals, are examined for the dynamics of topological magnetic solitons such as domain walls, vortices, and skyrmions. The damping is found to affect Bloch and Néel walls differently in the steady-state regime below Walker breakdown and leads to a monotonic increase in the wall velocity above this transition for large values of the Rashba coefficient. For vortices and skyrmions, a generalization of the damping tensor within the Thiele formalism is presented. It is found that chiral components of the damping affect vortex- and hedgehog-like skyrmions in different ways, but the dominant effect is an overall increase in the viscous-like damping.

I. INTRODUCTION

Dissipation in magnetization dynamics is a long-standing problem in magnetism [1-3]. For strong ferromagnets such as cobalt, iron, nickel, and their alloys, a widely used theoretical approach to describe damping involves a local viscous form due to Gilbert for the Landau-Lifshitz equation of motion (Eq. (1)), in which the damping appears as the second term on the right-hand side, proportional to the damping constant α_0. This equation describes the damped magnetization precession about a local effective field H_eff = −(1/µ_0 M_s) δU/δm, given by a variational derivative of the magnetic energy U with respect to the magnetization field described by the unit vector m, with γ_0 = µ_0 γ the gyromagnetic constant and M_s the saturation magnetization. Despite the multitude of physical processes that underlie dissipation in such materials, such as the scattering of magnons with electrons, phonons, and other magnons, the form in Eq. (1) has proven remarkably useful for describing a wide range of dynamical phenomena, from ferromagnetic resonance to domain wall motion. One feature of the dissipative dynamics described by Eq. (1) is that it is local, i.e., the damping torque depends only on the local magnetization and its time dependence. With the advent of magnetic heterostructures, however, this restriction of locality has been shown to be inadequate for systems such as metallic multilayers in which nonlocal processes can be important [4]. A striking example involves spin pumping, which describes how spin angular momentum can be dissipated in adjacent magnetic or normal-metal layers through the absorption of spin currents generated by a precessing magnetization [5,6]. Early experimental observations of this phenomenon involved iron films sandwiched by silver layers [7] and permalloy films in close proximity to strong spin-orbit normal metals such as palladium and platinum [8,9], where the ferromagnetic resonance linewidths were shown to depend strongly on the composition and thickness of the adjacent layers. Such observations also spurred studies of ferromagnetic multilayers separated by normal-metal spacers, in which spin-pumping effects can lead to a dynamic coupling between the magnetization in different layers [10,11]. In the context of damping, such dynamic coupling was shown to give rise to a configuration-dependent damping in spin-valve structures [12,13].
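Eq. (1) referred to above is the standard Landau-Lifshitz-Gilbert form; written out explicitly, and matching the description of the damping as the second term on the right-hand side, it reads

```latex
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma_0\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}}
  + \alpha_0\,\mathbf{m}\times\frac{\partial \mathbf{m}}{\partial t}.
```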
A generalization of the spin-pumping picture in the context of dissipation was given by Zhang and Zhang, who proposed that spin currents generated within the ferromagnetic material itself can lead to an additional contribution to the damping, provided that large magnetization gradients are present [14]. This theory is based on an sd model in which the local moments (3d) are exchange coupled to the delocalized conduction electrons (4s), which are treated as a free electron gas. The spin current "pumped" at one point in the material by the precessing local moments is dissipated at another if it encounters strong spatial variations in the magnetization, such as domain walls or vortices; this mechanism can be thought of as the reciprocal process of current-induced spin torques in magnetic textures [15-18]. For this reason, the mechanism is referred to as "feedback" damping, since the pumped spin currents feed back into the magnetization dynamics in the form of a dissipative torque. This additional contribution is predicted to be both nonlinear and nonlocal, and can have profound consequences for the dynamics of topological solitons such as domain walls and vortices as a result of the spatial gradients involved. Indeed, recent experiments on vortex wall motion in permalloy stripes indicate that such nonlinear contributions can be significant and of the same order of magnitude as the usual Gilbert damping characterized by α_0 in Eq. (1) [19]. An extension of this feedback damping idea was proposed recently by Kim and coworkers, who considered spin pumping involving a conduction electron system with Rashba spin-orbit coupling (RSOC) [20]. Building upon the Zhang-Zhang formalism, it was shown that the feedback damping can be expressed as a generalization of the Landau-Lifshitz equation [14,20], in which the 3×3 matrix D_LL represents the generalized damping tensor given in Eq. (3) [20]. Here, α_0 is the usual Gilbert damping constant, η = gµ_B G_0/(4e² M_s) is a constant related to the conductivity G_0 of the spin bands [14], F_ki = (∂m/∂x_k)_i are the components of the spatial magnetization gradient, α̃_R = 2α_R m_e/ℏ² is the scaled Rashba coefficient, ε_ijk is the Levi-Civita symbol, and the indices (i, j, k) run over the Cartesian components (x, y, z). In addition to the nonlinearity present in the Zhang-Zhang picture, the inclusion of the α̃_R term results in an anisotropic contribution related to the underlying symmetry of the Rashba interaction. Numerical estimates based on realistic parameters suggest that the Rashba contribution can be much larger than the nonlinear contribution η alone [20], which may have wide implications for soliton dynamics in ultrathin ferromagnetic films with perpendicular magnetic anisotropy, such as Pt/Co material systems, in which large spin-orbit effects are known to be present. In this article, we explore theoretically the consequences of the nonlinear anisotropic damping given in Eq. (3) on the dynamics of topological magnetic solitons, namely domain walls, vortices, and skyrmions, in which spatial gradients can involve 180° rotation of the magnetization vector over length scales of 10 nm. In particular, we examine the role of chirality in the Rashba-induced contributions to the damping, which are found to affect chiral solitons in different ways. This article is organized as follows.
In Section II, we discuss the effects of nonlinear anisotropic damping on the dynamics of Bloch and Néel domain walls, where the latter are stabilized by the Dzyaloshinskii-Moriya interaction. In Section III, we examine the consequences of this damping for vortices and skyrmions, and we derive a generalization of the damping dyadic appearing in the Thiele equation of motion. Finally, we present some discussion and concluding remarks in Section IV.

II. BLOCH AND NÉEL DOMAIN WALLS

The focus of this section is domain walls in ultrathin films with perpendicular magnetic anisotropy. Consider a 180° domain wall representing a boundary separating two oppositely magnetized domains along the x axis, with z being the uniaxial anisotropy axis perpendicular to the film plane. We assume that the magnetization remains uniform along the y axis. The unit magnetization vector m(x, t) can be parametrized in spherical coordinates (θ, φ), where the spherical angles for the domain wall profile are given by Eq. (4), with X_0(t) denoting the position of the domain wall, Δ = √(A/K_0) the wall-width parameter that depends on the exchange constant A and the effective uniaxial anisotropy K_0, and the azimuthal angle φ_0(t) a dynamical variable that is spatially uniform. In this coordinate system, a static Bloch wall is given by φ_0 = ±π/2, while a static Néel wall is given by φ_0 = 0, π. A positive sign in the argument of the exponential function for θ in Eq. (4) describes an up-to-down domain wall profile going along the +x direction, while a negative sign represents a down-to-up wall. To determine the role of the nonlinear anisotropic damping term in Eq. (3) on the wall dynamics, it is convenient to compute the dissipation function W(Ẋ_0, φ̇_0) for the wall variables, where the notation Ẋ_0 ≡ ∂_t X_0, etc., denotes a time derivative. The dissipation function per unit surface area is given by Eq. (5), where m_i = m_i(x − X_0(t), φ_0(t)) and the Einstein summation convention is assumed. By using the domain wall ansatz (4), the integral in Eq. (5) can be evaluated exactly to give W = W_0 + W_NL, where W_0 represents the usual (linear) Gilbert damping and W_NL is the additional contribution from the nonlinear anisotropic damping, with α_1 ≡ η/Δ², α_2 ≡ ηα̃_R/Δ, and α_3 ≡ ηα̃_R² being dimensionless nonlinear damping constants. In contrast to the linear case, the nonlinear anisotropic dissipation function exhibits a configuration-dependent dissipation rate, in which the prefactors of the Ẋ_0² and φ̇_0² terms depend explicitly on φ_0(t). In addition to the nonlinearity, a chiral damping term proportional to α_2 appears as a result of the Rashba interaction and is linear in the Rashba coefficient. The sign of this term depends on the sign chosen for the polar angle θ in the wall profile (4). To appreciate the chiral nature of this term, we consider small fluctuations about the static configuration by writing φ_0(t) = φ_0 + δφ(t), where δφ(t) ≪ π is a small angle. This approximation is useful for the steady-state regime below Walker breakdown. For up-to-down Bloch walls (φ_0 = ±π/2), the nonlinear part of the dissipation function to first order in δφ(t) is given by Eq. (8). The quantity C_i = ±1 appearing there is a component of the chirality vector [21], which characterizes the handedness of the domain wall. For a right-handed Bloch wall, φ_0 = −π/2 and the only nonvanishing component is C_x = 1, while for a left-handed wall (φ_0 = π/2) the corresponding value is C_x = −1.
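The wall ansatz of Eq. (4) used throughout this section is, in all likelihood, the standard Walker profile; consistent with the description above (a positive exponent giving an up-to-down wall), it can be written as

```latex
\theta(x,t) = 2\arctan\!\left[\exp\!\left(\pm\,\frac{x - X_0(t)}{\Delta}\right)\right],
\qquad \phi(x,t) = \phi_0(t).
```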
Thus, the term proportional to α_2 depends explicitly on the wall chirality. Similarly for up-to-down Néel walls, the same linearization about the static wall profile leads to an analogous expression, with C_y = 1 for a right-handed Néel wall (φ_0 = 0) and C_y = −1 for its left-handed counterpart (φ_0 = π). Since the fluctuation δφ(t) is taken to be small, the chiral damping term is more pronounced for Néel walls in the steady-state velocity regime, since it does not depend on the fluctuation amplitude δφ(t) as it does for Bloch walls. To better appreciate the magnitude of the chirality-dependent damping term, it is instructive to estimate numerically the relative magnitudes of the nonlinear damping constants α_1, α_2, α_3. Following Ref. 20, we assume η = 0.2 nm² and α_R = 10⁻¹⁰ eV m. If we suppose Δ = 10 nm, which is consistent with anisotropy values measured in ultrathin films with perpendicular anisotropy [22], the damping constants evaluate to α_1 = 0.002, α_2 = 0.052, and α_3 = 1.37. Since α_0 varies between 0.01-0.02 [23] and 0.1-0.3 [24] depending on the material system, the chiral term α_2 is comparable to the Gilbert damping in magnitude, but remains almost an order of magnitude smaller than the nonlinear component α_3, which provides the dominant contribution to the overall damping. The full equations of motion for the domain wall dynamics can be obtained using a Lagrangian formalism that accounts for the dissipation given by W [25,26]. For the sake of simplicity, we focus on wall motion driven by magnetic fields alone, where a spatially uniform magnetic field H_z is applied along the +z direction. In addition, we include the Dzyaloshinskii-Moriya interaction appropriate for the geometry considered [27,28] when treating the dynamics of Néel walls. From the Euler-Lagrange equations with the Rayleigh dissipation function (with an analogous expression for φ_0), the equations of motion for the wall coordinates are obtained, where D_ex is the Dzyaloshinskii-Moriya constant [28] and K_⊥ represents a hard-axis anisotropy that results from volume dipolar charges. The Dzyaloshinskii-Moriya interaction (DMI) is expected in ultrathin films in contact with a strong spin-orbit-coupling material [29,30] and is required to stabilize a Néel wall profile [31,32]. Furthermore, the DMI itself can appear as a consequence of the Rashba interaction, so its inclusion here is consistent with the nonlinear anisotropic damping terms used [20,33]. Results from numerical integration of these equations of motion for Bloch and Néel walls are presented in Fig. 1. We used parameters consistent with ultrathin films with perpendicular anisotropy, namely α_0 = 0.1, M_s = 1 MA/m, Δ = 10 nm, and K_⊥ = µ_0 N_x M_s²/2 with the demagnetization factor N_x = 0.0224 [28]. In order to consider the dynamics of a Néel wall profile, which is not favored by the volume dipolar interaction represented by K_⊥, we assumed D_ex = 1 mJ/m². As in the numerical estimates above, we assumed η = 0.2 nm² but considered several different values of the Rashba coefficient α_R. The steady-state domain wall velocity ⟨Ẋ_0⟩ was computed as a function of the perpendicular applied magnetic field H_z. In the precessional regime above Walker breakdown, in which φ_0(t) becomes a periodic function of time, ⟨Ẋ_0⟩ is computed by averaging the wall displacement over a few hundred periods of precession.
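The damping-constant estimates quoted above are easy to reproduce; the short check below recomputes α_1, α_2 and α_3 from η, α_R and Δ, using the scaled coefficient α̃_R = 2α_R m_e/ℏ² defined in the introduction.

```python
import scipy.constants as sc

eta = 0.2e-18           # feedback constant, m^2 (0.2 nm^2)
alpha_R = 1e-10 * sc.e  # Rashba coefficient, J*m (10^-10 eV m)
Delta = 10e-9           # wall width parameter, m

# Scaled Rashba coefficient (units of 1/m)
alpha_R_scaled = 2 * alpha_R * sc.m_e / sc.hbar**2

alpha_1 = eta / Delta**2                 # -> 0.002
alpha_2 = eta * alpha_R_scaled / Delta   # -> ~0.052
alpha_3 = eta * alpha_R_scaled**2        # -> ~1.37
print(alpha_1, alpha_2, alpha_3)
```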
For the Bloch wall (Fig. 1(a)), the Walker field is observed to increase with the Rashba coefficient, which is expected from the overall increase in the damping experienced by the domain wall. However, two features differ qualitatively from the behavior with linear damping. First, the Walker velocity does not appear to be attained for finite α_R: the peak velocity at the Walker transition remains below the value reached for α_R = 0. Second, the field dependence of the wall velocity below Walker breakdown is nonlinear and exhibits a slight convex curvature, which becomes more pronounced as α_R increases. It is interesting to note that the nonlinear damping terms affect the Dzyaloshinskii (Néel) wall motion differently. In contrast to the Bloch wall, the Walker velocity is reached at breakdown for the different values of α_R, as indicated by the arrows marking the Walker transition in Fig. 1(b). In addition, the field dependence of the velocity exhibits a concave curvature below breakdown. This behavior is consistent with experimental reports of field-driven domain wall motion in the Pt/Co (0.6 nm)/Al_2O_3 system [34], a material system in which recent nanoscale magnetometry experiments have confirmed the presence of Néel-like domain wall profiles [35]. The differences between the two wall profiles originate from the DMI rather than from the chiral damping term proportional to α_2. This was verified by setting α_2 to zero for the case with DMI, which did not modify the overall behavior of the field dependence of the velocity. In the one-dimensional approximation for the wall dynamics, the DMI enters the equations of motion like an effective magnetic field along the x axis, which stabilizes the wall structure by minimizing deviations of the wall angle φ_0(t).

III. VORTICES AND SKYRMIONS

The focus of this section is the dissipative dynamics of two-dimensional topological solitons such as vortices and skyrmions. The equilibrium magnetization profile of these micromagnetic objects is described by a nonlinear differential equation similar to the sine-Gordon equation, in which the dispersive exchange interaction is compensated by dipolar interactions for vortices [36,37] and by an additional uniaxial anisotropy for skyrmions [38]. The topology of vortices and skyrmions can be characterized by the skyrmion winding number Q = (1/4π) ∫ m · (∂_x m × ∂_y m) dx dy. While the skyrmion numbers of vortices (Q = ±1/2) and skyrmions (Q = ±1) differ, their dynamics are qualitatively similar and can be described within the same formalism. For this reason, vortices and skyrmions are treated on an equal footing in what follows, and distinctions between the two are drawn only for the numerical values of the damping parameters. A key approximation used to describe vortex or skyrmion dynamics is the rigid-core assumption, in which the core represents the region over which the spatial gradients of the magnetization are largely (or entirely) localized. Within this approximation, the dynamics is given entirely by the position of the core in the film plane, X_0(t) = (X_0(t), Y_0(t)), which allows the unit magnetization vector to be parametrized as in Eq. (15), where q is a topological charge and p is a winding number. An illustration of the magnetization field given by the azimuthal angle φ(x, y) is presented in Fig. 2; q = 1 corresponds to a vortex or skyrmion, while q = −1 represents an antivortex or antiskyrmion.
The dynamics of a vortex or skyrmion in the rigid-core approximation is given by the Thiele equation (Eq. (16)), where G is the gyrovector and U(X_0) is the effective potential obtained from the magnetic Hamiltonian by integrating out the spatial dependence of the magnetization. The damping dyadic in the Thiele equation, D_T, can be obtained from the dissipation function in the rigid-core approximation, W(Ẋ_0), which is defined in the same way as in Eq. (5) but with the ansatz given in Eq. (15). For this system, it is more convenient to evaluate the dyadic by performing the integration over all space after taking derivatives with respect to the core velocity; in other words, the dyadic can be obtained within the Lagrangian formulation from the velocity derivatives of W. By using polar coordinates for the spatial variables, (x, y) = (r cos ϕ, r sin ϕ), assuming translational invariance in the film plane, and integrating over ϕ, the damping dyadic is found, where I is the 2×2 identity matrix and the dimensionless damping constants are defined as α_1 ≡ η/r_c², α_2 ≡ ηα̃_R/r_c, and α_3 ≡ ηα̃_R², by analogy with the domain wall case, with the core radius r_c playing the role of the characteristic length scale. The coefficients D_i depend on the core profile and are given by integrals over it; the expression for D_0 is a known result, whereas D_1, D_2 and D_3 are new terms that arise from the nonlinear anisotropic damping due to RSOC. The coefficients a_11 and a_22 are configuration-dependent and represent the chiral component of the Rashba-induced damping. For vortex-type spin textures (p = 1, 3 and q = 1), a_11 = a_22 = 0, which indicates that the α_2 term plays no role for such configurations. This is consistent with the result for Bloch domain walls discussed previously, since the vortex-type texture (Fig. 2(b)), and particularly the vortex-type skyrmion (Fig. 2(c)), can be thought of as a spin structure generated by a 2π revolution of a Bloch wall about an axis perpendicular to the film plane. The rigid-core approximation implies that fluctuations about the ground state are neglected, which is akin to setting δφ(t) = 0 in Eq. (8); as such, no contribution from α_2 is expected for vortex-type textures. On the other hand, a finite contribution appears for hedgehog-type vortices and skyrmions (q = 1), where a_11 = a_22 = 1 for p = 0 and a_11 = a_22 = −1 for p = 2. This can be understood with the same argument by noting that hedgehog-type textures can be generated by revolving Néel-type domain walls. A summary of these coefficients is given in Table I. For antivortices (q = −1), the coefficients a_ii are found to be nonzero for all winding numbers considered. We can understand this qualitatively by examining how the magnetization varies across the core along two orthogonal directions. For example, for p = 0, the variations along the x and y axes across the core are akin to two Néel-type walls of different chiralities, which results in nonvanishing contributions to a_11 and a_22 but with opposite signs. The signs of these coefficients depend on how these axes are oriented in the film plane, as witnessed by the different winding numbers p in Fig. 2. Such damping dynamics is therefore strongly anisotropic, which may have interesting consequences for the rotational motion of vortex-antivortex dipoles, for example, where the antivortex configuration oscillates between the different p values in time [39].
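The Thiele equation of Eq. (16) has, up to sign conventions tied to the core polarity, the standard form sketched below; the gyrovector magnitude written here (with d the film thickness) is an assumption consistent with the values G_0 = 2π and 4π quoted later for vortices (Q = ±1/2) and skyrmions (Q = ±1):

```latex
\mathbf{G}\times\dot{\mathbf{X}}_0 - \bar{\bar{D}}_T\,\dot{\mathbf{X}}_0 - \frac{\partial U}{\partial \mathbf{X}_0} = \mathbf{0},
\qquad
\mathbf{G} = G_0\,\frac{\mu_0 M_s d}{\gamma_0}\,\hat{\mathbf{z}},
\quad G_0 = 4\pi Q .
```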
For vortex structures, we can provide numerical estimates of the different damping contributions α_i D_i by using the Usov ansatz for the vortex core magnetization (Eq. (24)). Let L represent the lateral system size. The coefficients D_i are then found to be D_0 = π[2 + ln(L/r_c)], D_1 = D_2 = 14π/3, and D_3 = π[4/3 + ln(L/r_c)]. We note that for D_0 and D_3, the system size L and the core radius r_c appear as cutoffs for the divergent 1/r term in the integral. Assuming parameters α_0 = 0.1, η = 0.05 nm², and α_R = 0.1 eV nm, along with the typical scales r_c = 10 nm and L = 1 µm, the damping terms evaluate numerically to α_0 D_0 ≈ 2.1, α_1 D_1 ≈ 0.0073, α_2 D_2 ≈ 0.19, and α_3 D_3 ≈ 6.4. As for the domain walls, the Rashba term α_3 D_3 is the dominant contribution and is of the same order of magnitude as the linear damping term, while the chiral term α_2 D_2 is an order of magnitude smaller and the nonlinear term α_1 D_1 is negligible in comparison. For skyrmion configurations, a similar ansatz can be used for the core magnetization (Eq. (25)). We note that this differs from the "linear" profiles discussed elsewhere [38], but the numerical differences are small. The advantage of the ansatz in Eq. (25) is that the integrals for the D_i have simple analytical expressions. Because the spatial variations of the magnetization in a skyrmion are localized to the core, in contrast to the circulating in-plane moments of vortices, which extend across the entire system, the damping constants D_i have no explicit dependence on the system size. Using Eq. (25), we find D_0 = D_3 = 16π/3, D_1 = 496π/15, and D_2 = 52π/5. With the same values of α_0, η, and α_R as for the vortices in the preceding paragraph, we find α_0 D_0 ≈ 1.7, α_1 D_1 ≈ 0.052, α_2 D_2 ≈ 0.43, and α_3 D_3 ≈ 3.3.

IV. DISCUSSION AND CONCLUDING REMARKS

A clear consequence of the nonlinear anisotropic damping introduced in Eq. (3) is that it provides a mechanism by which the overall damping constant, as extracted from domain wall experiments, for example, can differ from the value obtained using linear-response methods such as ferromagnetic resonance [19]. However, the Rashba term can also affect the ferromagnetic resonance linewidth in a nontrivial way. To see this, we evaluate the dissipation function associated with a spin wave propagating in the plane of a perpendicularly magnetized system, with amplitude c(t) and wave vector k_∥. The spin wave can be expressed as m = (c(t) cos(k_∥ · r_∥), c(t) sin(k_∥ · r_∥), 1), which results in a dissipation function per unit volume in which the term proportional to the chiral part ηα̃_R spatially averages to zero. The Rashba contribution α_3 ≡ ηα̃_R² leads to an overall increase in the damping of linear excitations and plays the same role as the usual Gilbert term α_0 in this approximation, which allows the two terms to be combined into an effective FMR damping constant, α_FMR ≈ α_0 + α_3. On the other hand, the nonlinear feedback term proportional to η is only important for large spin-wave amplitudes and depends quadratically on the wave vector. This is consistent with recent experiments on permalloy films (in the absence of RSOC) in which the linear Gilbert damping was recovered in ferromagnetic resonance, while nonlinear contributions were only seen for domain wall motion [19].
This result also suggests that the large damping constant in ultrathin Pt/Co/Al_2O_3 films determined by similar time-resolved magneto-optical microscopy experiments, where α_FMR = 0.1-0.3 is found [24], may partly be due to the RSOC mechanism described here (although dissipation resulting from spin pumping into the platinum underlayer is also likely to be important [40]). Incidentally, the nonlinear term η|c(t)|² may provide a physical basis for the phenomenological nonlinear damping model proposed in the context of spin-torque nano-oscillators [41]. For vortices and skyrmions, the increase in the overall damping due to the Rashba term α_3 can have important consequences for their dynamics. The gyrotropic response to any force, as described by the Thiele equation (Eq. (16)), depends on the overall strength of the damping term. This response can be characterized by a deflection angle, θ_H, that describes the degree to which the resulting displacement is noncollinear with the applied force, in analogy with a Hall effect. Neglecting the chiral term α_2 D_2, the deflection (Hall) angle deduced from Eq. (16) takes the form tan θ_H = G_0/(Σ_i α_i D_i), where G_0 = 2π for vortices and G_0 = 4π for skyrmions. Consider the skyrmion profile and the magnetic parameters discussed in Section III. With only the linear Gilbert damping term (α_0 D_0), the Hall angle is found to be θ_H = 82.3°, which highlights the largely gyrotropic nature of the dynamics. If the full nonlinear damping is taken into account (Eq. (27)), we find θ_H = 68.3°, which represents a significant reduction of the Hall effect and a more Newtonian response to the applied force. Aside from a quantitative increase in the overall damping, the presence of the nonlinear terms can therefore also affect the dynamics qualitatively. Such considerations may be important for interpreting current-driven skyrmion dynamics in racetrack geometries, where the interplay between edge repulsion and spin torques is crucial in determining skyrmion trajectories [42,43]. Finally, we conclude by commenting on the relevance of the chirality-dependent component of the damping, α_2. It has been shown theoretically that the Rashba spin-orbit coupling leading to Eq. (3) also gives rise to an effective chiral interaction of the Dzyaloshinskii-Moriya form [33]. This interaction is equivalent to the interface-driven form considered earlier, which favors monochiral Néel wall structures in ultrathin films with perpendicular magnetic anisotropy. Within this picture, a sufficiently strong Rashba interaction should favor only domain wall or skyrmion spin textures of one given chirality, as determined by the induced Dzyaloshinskii-Moriya interaction. So while some non-negligible differences in the chiral damping between vortices and skyrmions of different chiralities were found, probing the dynamics of solitons with distinct chiralities may be very difficult to achieve experimentally in the material systems of interest.
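As a closing numerical check, both Hall angles quoted above follow directly from the skyrmion damping terms of Section III, assuming the arctangent form of Eq. (27) given above.

```python
import math

G0 = 4 * math.pi                       # gyrovector magnitude for a skyrmion (|Q| = 1)
a0D0, a1D1, a3D3 = 1.7, 0.052, 3.3     # damping terms from Section III (alpha_2*D_2 neglected)

theta_linear = math.degrees(math.atan(G0 / a0D0))                # ~82.3 deg (Gilbert term only)
theta_full = math.degrees(math.atan(G0 / (a0D0 + a1D1 + a3D3)))  # ~68 deg (full nonlinear damping)
print(theta_linear, theta_full)
```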
2015-06-04T12:41:03.000Z
2015-02-16T00:00:00.000
{ "year": 2015, "sha1": "f42b3bccec20757bf7bf28d509c2b8914ff8f2ae", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1502.04695", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f42b3bccec20757bf7bf28d509c2b8914ff8f2ae", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
252361893
pes2o/s2orc
v3-fos-license
Multi-Objective Optimization of Sustainable Steel AISI 1045 Turning Energy Parameters Under MQL Condition

Sustainable production requires reducing production waste and energy consumption and making machining processes more efficient. Machining must therefore adopt advanced techniques for cooling and lubricating the cutting zone. One such advanced technique, minimum quantity lubrication (MQL), can be considered a step towards sustainable machining. It is nevertheless important to analyze cutting processes with regard to energy consumption indicators, especially when machining materials that have a wide range of applications, such as AISI 1045. In this study, the influence of the process parameters on the turning energy performance during turning of this steel under MQL lubrication conditions was investigated. A full experimental plan was used, ANOVA was applied for effect analysis, and response surface methodology (RSM) was used for modelling. Multi-objective optimization of the process parameters, based on minimizing the energy indicators, was then performed. The procedure identified a cutting speed of 210 m/min, a depth of cut of 1.5 mm, and a feed rate of 0.224 mm/rev as the optimal parameters. These parameters and MQL conditions can be used to obtain minimum energy indicators in AISI 1045 turning, especially in large-scale production.

INTRODUCTION

Traditionally, cutting cooling and lubrication fluids (CFL) are applied to increase the efficiency and performance of machining processes. On the other hand, they cause environmental problems due to their chemical content. In manufacturing, the costs related to the use of these fluids amount to about 7-17% of the total machining costs [1]. In order to eliminate the negative effects of CFL, the machining industries are continually seeking new and improved cooling and lubricating techniques that take environmental and financial issues into account. In past years, a tremendous effort has been made to minimize or even completely avoid the use of cutting fluids. Minimum quantity lubrication (MQL) has been proposed as a good compromise between completely dry and fully wet machining. MQL is one of the most suitable replacements for current flooding techniques, reducing the amount of lubricant while offering better economic, environmental and process performance [2]. The use of MQL can reduce the wastage of cutting fluid several-fold compared with flood cooling [3]. Its application in the turning process has been investigated by many researchers. In general, machining using MQL offers several advantages over dry and wet machining, such as improved tool life [4,5], lower cutting temperature [6], improved dimensional accuracy [7], reduced cutting forces [8,9], improved surface quality [10], and better material machinability [11]. In addition, MQL improves the surface topography [12]. Different studies have focused on employing different techniques to optimize MQL-assisted turning. Anamalai et al. [13] performed multi-objective optimization during turning of SUS 304 stainless steel with a bio-inspired nanofluid-based MQL; minimum cutting temperature and maximum heat transfer were selected as the optimization criteria. Revuru et al. [14] optimized the turning of a titanium alloy under dry and MQL conditions using Taguchi analysis, based on the effects of the machining parameters on cutting force, tool wear and surface quality. Suneesh and Sivapragash [15] optimized surface quality, cutting force, specific power consumption and cutting temperature in turning of a magnesium-alumina composite under dry and MQL cutting conditions.
Mia et al. [16] studied the surface quality in MQL-assisted turning of high-hardness steel and developed predictive and optimization models. Tamang et al. [17] investigated the effect of cutting parameters on tool wear, surface quality and cutting power under dry and MQL machining conditions. Sarıkaya and Güllü [18] analyzed the effect of machining parameters such as cutting speed, feed rate and depth of cut on surface roughness when turning AISI 1050 steel under dry, wet, and MQL conditions; the optimal levels of the process parameters were determined using the S/N ratio and desirability function analysis. Sohrabpoor et al. [19] applied grey relational analysis to optimize the machining conditions in MQL-assisted turning with respect to multiple performance characteristics. Thakur et al. [20] utilized the Taguchi method to determine the best combination of process parameters in MQL-assisted high-speed turning of the superalloy Inconel 718. A comparison of different modeling methods was presented by Bustillo et al. [21], who conducted experiments under dry, MQL, and flooding conditions and concluded that machine-learning techniques can be used for cutting-process modeling. Pimenov et al. [22] studied the optimum conditions in milling of AISI 1045, based on minimization of energy consumption and tool wear and maximization of productivity, employing grey relational analysis. MQL with nanoparticles was analyzed with regard to turning sustainability by Abbas et al. [23], who concluded that nanoparticles are beneficial. The influence of nanoparticles in MQL on surface roughness at the highest cutting speed was analyzed by Şafak and Kaçal [24]. In [25], Abbas et al. used an artificial neural network with the Edgeworth-Pareto method to obtain optimal parameters in face milling. Kuntoğlu et al. [26] used an improved nature-inspired method, H-ABC, and compared it with standard optimization methods; in that study, productivity and cutting forces were the basis for obtaining the optimal parameters. From these previous studies, it was found that MQL plays a significant role in the move towards sustainable machining. In the present paper, higher values of cutting speed and feed rate were used in order to increase productivity in turning of steel AISI 1045 under MQL conditions. The main aim is to investigate the energy indicators for these higher values and to find sustainable conditions. The influence of the process parameters, as the most easily managed parameters, on energy performance characteristics such as machining force, cutting power and cutting pressure was studied. Afterwards, simultaneous optimization of the process parameters based on minimizing the energy performance was performed.

EXPERIMENTAL SETUP

In this research, the process parameters cutting speed (v), depth of cut (a) and feed rate (f) were considered as controlling factors and were varied over three levels each. A full experimental plan, an L27 orthogonal array with three columns for the controlled factors and twenty-seven rows for their combinations, was used in the present analysis. This full plan was chosen in order to obtain more precise models of the cutting-process energy indicators; more precise models provide a better basis for optimization and for integration into control systems. Table 1 shows the cutting parameters and their levels for the experiments. The levels of the cutting parameters were chosen according to the cutting tool and machine tool specifications. The turning experiments were performed under MQL conditions on a universal Boehringer lathe with 8 kW spindle power.
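Such a 3³ full-factorial plan is straightforward to generate programmatically. In the sketch below the level values are placeholders (the actual levels of Table 1 are not listed in the text, so these numbers are assumptions for illustration only).

```python
from itertools import product

# Hypothetical factor levels; the real levels come from Table 1.
v_levels = [110, 160, 210]     # cutting speed, m/min (assumed)
a_levels = [0.5, 1.0, 1.5]     # depth of cut, mm (assumed)
f_levels = [0.1, 0.16, 0.224]  # feed rate, mm/rev (assumed)

# Full 3^3 factorial plan: 27 runs, one per combination (the L27 layout).
plan = list(product(v_levels, a_levels, f_levels))
for run, (v, a, f) in enumerate(plan, start=1):
    print(f"run {run:2d}: v = {v} m/min, a = {a} mm, f = {f} mm/rev")
```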
During the turning trials, the MQL flow rate and the air pressure of the MQL system were 30 ml/h and 0.3 MPa, respectively. The mixture of pressurized air and cutting fluid was supplied to the cutting zone through a nozzle located 30 mm from the tool tip, at an angle of 90° to the cutting edge and 30° from the clearance face; in this way, the needed lubrication is provided. The chosen coated carbide insert was SNMG 120408, a square-shaped insert with a 0.8 mm nose radius and a simple chip breaker. The rake angle of the tool was γ = 10° and the clearance angle α = 10°, as defined by the tool holder. The tool holder is codified as PSDN R 2525 M12, giving an entering angle of κ = 45°. The experimental setup on the machine tool, together with the work material, cutting tool, MQL system and dynamometer, is displayed in Fig. 1. The workpiece material was cold-drawn carbon steel AISI 1045, whose chemical composition is given in Table 2. The workpiece material had a tensile strength of 820 N/mm² and a converted hardness of 42 HRC. The workpiece was cylindrical, with a diameter of 220 mm and an overhang length of 350 mm; it was fixed in standard lathe jaws and supported by a lathe center at the other end. The three mutually perpendicular components of the cutting force, defined as the main cutting force (Fc), feed force (Ff) and passive force (Fp), were recorded using a three-component Kistler 9259A dynamometer. The measurement chain also involved a charge amplifier (Kistler 5001), a spectrum analyser (HP3567A) and a personal computer for data acquisition and analysis. The standard piezoelectric dynamometer was rigidly mounted on the lathe using a custom-designed adapter, with the measured force components coinciding with the lathe and workpiece axes: the main cutting force was measured in the direction of the cutting speed vector, the feed force in the direction of the feed velocity, and the passive force in the direction normal to the workpiece z-axis. Experimental runs were repeated two or three times, and the mean value was recorded. Every run lasted 30 seconds of machining time, which gave enough time for signal stabilisation. The machining force (FR), cutting power (Pc) and cutting pressure (Ks) are defined by Eqs. (1)-(3) [27]. Cutting power describes the energy converted per unit time in the chip separation process and is formulated as the product of the velocity of the main movement and the force in the direction of the main movement; it is analysed as a very important part of the machine tool's total energy consumption, which can be measured by special electrical devices. Cutting pressure is the mechanical pressure on the cutting tool edge area and can be related to the stress in the cutting tool material.
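The definitions of Eqs. (1)-(3) are the standard ones for these indicators; reconstructed in the units used throughout the paper (forces in N, v in m/min, a in mm, f in mm/rev), they plausibly read

```latex
F_R = \sqrt{F_c^2 + F_f^2 + F_p^2}\;[\mathrm{N}],\qquad
P_c = \frac{F_c\,v}{60\,000}\;[\mathrm{kW}],\qquad
K_s = \frac{F_c}{a\,f}\;[\mathrm{MPa}].
```

These forms are consistent with the confirmation run reported later: with a = 1.5 mm, f = 0.224 mm/rev and Ks = 2437.5 MPa, one recovers Fc ≈ 819 N and Pc = 819 × 210/60000 ≈ 2.87 kW, matching the quoted cutting power.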
RESULTS AND DISCUSSION

Table 3 shows the results of the cutting force component measurements, together with the calculated values of the machining force, cutting power and cutting pressure. Analysis of variance (ANOVA), as a common statistical method, was employed for the analysis of the experimental results. Models of the machining force, cutting power and cutting pressure were built using the least-squares method, and the influence of cutting speed (v), depth of cut (a) and feed rate (f) on the results was analysed. Tables 4-6 show the statistics for FR, Pc and Ks, respectively. ANOVA was carried out at a 5% significance level, i.e., with 95% confidence. Note that the tables include only those model coefficients whose effects on the results are statistically significant (P-value < 0.05). The effect of the depth of cut is the most significant factor associated with the machining force.

Effects of turning parameters on the response factors

Based on the experimental results, and after determining the significant terms, second-order (quadratic) response surface methodology (RSM) models were formulated. The mathematical models for the response variables machining force, cutting power and cutting pressure in terms of cutting speed, depth of cut and feed rate are given by Eqs. (4)-(6). In this part of the study, 3D plots of the model responses were generated from the developed RSM models (Eqs. (4)-(6)) in order to examine the effect of the turning parameters on each individual response. Fig. 2 illustrates the two-factor interaction effects of the cutting parameters on the machining force. As seen from Fig. 2(a), the machining force is low when the depth of cut is at its low level, for all values of cutting speed; furthermore, for any given depth of cut, the machining force does not vary considerably with cutting speed. The machining force is very sensitive to feed rate variations at all values of cutting speed. As observed from Fig. 2(b), the machining force is far more sensitive to feed rate variations than to cutting speed. Fig. 2(c) shows the effect of feed rate and depth of cut on the machining force; it is clearly evident from this figure that the machining force is minimal at low values of both feed rate and depth of cut, and that the machining force is more sensitive to depth-of-cut variations than to the feed rate. Consistent with this, the ANOVA statistics indicate that the effect of the depth of cut on the machining force has the highest statistical importance. Hence, by combining the aforementioned interaction effects, it is evident that low values of depth of cut and feed rate must be selected to minimize the machining force. These assertions are in accordance with the physics of the chip separation process. Increasing the depth of cut and the feed rate produces a larger chip cross-section, which leads to a higher load on the cutting tool, i.e., an increase in the cutting forces. The depth of cut has the greater influence because it is directly connected with the width of the separated chip and the width of the chip shear plane; its influence is linear, while the influence of the feed rate is almost linear. The decrease in the machining force caused by an increase in the cutting speed is a consequence of the absence of a built-up edge of workpiece material on the cutting tool wedge and of the faster flow of the separated chip over the rake surface of the tool; this decrease is more pronounced for larger feed rates. Fig. 3 shows the effects of the two-factor interactions on the cutting power. Fig. 3(a) exhibits the estimated response surface for the cutting power in relation to the depth of cut and cutting speed; the power is clearly highly sensitive to both depth-of-cut and cutting-speed variations. Further, the cutting power is also highly sensitive to cutting speed (Fig. 3(b)) and depth of cut (Fig. 3(c)) variations for a specified feed rate. The minimum cutting power is obtained at low values of depth of cut, cutting speed and feed rate; thus, the cutting power can be controlled by appropriately setting these cutting parameters.
Compared to the machining force, the cutting power behaves slightly differently with respect to the process parameters. It is clear from the definition of power that an increase in speed significantly increases the cutting power, owing to the multiplication of the machining force by the cutting speed. Increases in the depth of cut, and especially in the feed rate, have a somewhat smaller effect than the cutting speed, because their influence is contained in the force. The influences of the process parameters are almost linear. The response surface plots showing the interaction effects of the turning parameters on the cutting pressure are given in Fig. 4. As seen in Fig. 4(a), the cutting pressure can be reduced with a high depth of cut and cutting speed. It is evident from Fig. 4(b) that the cutting pressure is minimal at higher cutting speeds for a middle value of the feed rate. The effects of feed rate and depth of cut on the cutting pressure are shown in Fig. 4(c): the depth of cut strongly influences the cutting pressure, and the cutting pressure decreases as the depth of cut increases. The cutting pressure is directly related to the tool wear mechanism and tool life. The effects of the parameters on the pressure are nonlinear. As the cutting speed increases, the cutting force decreases, and so does the cutting pressure; the influence of the cutting speed on the cutting pressure mirrors its influence on the machining force, as is clear from the cutting pressure formulation. The influences of depth of cut and feed rate on the cutting pressure are nonlinear; this nonlinearity is a consequence of their combined influence on the force value together with their appearance in the denominator of the cutting pressure formulation. As the depth of cut and feed rate increase, the contact area between the workpiece material and the cutting tool wedge increases, leading to a lower pressure. In terms of values, the lowest pressures are not strictly tied to the minimum or maximum parameter values, which strengthens the need for optimization.

Turning process optimisation

In the optimisation of machining processes, a number of different optimisation methods can be used, as shown in the previously mentioned studies. In this research, the use of a genetic algorithm coupled with principal component analysis (PCA) for determining the optimal turning parameters is reported. This method was chosen because it discards irrelevant predictors and conducts the process with a smaller number of transformed predictors; in this way, PCA is used to reduce the dimensionality of the predictors and to ensure their independence. The optimal turning parameters can be derived according to the preference towards the three objectives, namely machining force, cutting power and cutting pressure. In order to establish the optimization problem, the three above-mentioned second-order regression equations (Eqs. (4)-(6)) are used to formulate the fitness function. In the present study, the target of the optimization process is to estimate the optimal levels of the turning parameters that lead to the minimum values of machining force, cutting power and cutting pressure. For the multi-objective optimization of the MQL turning process, denoting FRmin as the minimum value of FR, Pcmin as the minimum value of Pc and Ksmin as the minimum value of Ks, the objective function of Eq. (7) is created. These minimum values of the responses are obtained from the experimental results.
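Given the normalization just described, Eq. (7) is plausibly a weighted sum of the responses scaled by their experimental minima. The sketch below assumes that form; FR_model, Pc_model and Ks_model are hypothetical stand-ins for the fitted RSM polynomials of Eqs. (4)-(6), whose coefficients are not listed in the text.

```python
# Hypothetical stand-ins for the fitted RSM polynomials of Eqs. (4)-(6);
# the real coefficients are not listed in the text.
def FR_model(v, a, f): return 200 + 1.0 * v + 300 * a + 1500 * f   # N (placeholder)
def Pc_model(v, a, f): return 0.002 * v * (a + 4 * f)              # kW (placeholder)
def Ks_model(v, a, f): return 3000 - 2.0 * v - 200 * a - 800 * f   # MPa (placeholder)

FR_min, Pc_min, Ks_min = 981.0, 2.87, 2437.5  # best observed responses (illustrative)

def fitness(x, w=(1/3, 1/3, 1/3)):
    """Assumed form of the Eq. (7) objective: lower is better."""
    v, a, f = x
    w1, w2, w3 = w  # PCA-derived weights, w1 + w2 + w3 = 1
    return (w1 * FR_model(v, a, f) / FR_min
            + w2 * Pc_model(v, a, f) / Pc_min
            + w3 * Ks_model(v, a, f) / Ks_min)

print(fitness((210.0, 1.5, 0.224)))
```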
The values of the weights w1, w2 and w3 are assigned to FR, Pc and Ks, respectively, subject to w1 + w2 + w3 = 1. Here, the weighting values of the individual performance characteristics are determined using principal component analysis: the experimental results were used to evaluate the correlation coefficient matrix and to determine the corresponding eigenvalues, which are shown in Table 7. The minimization of the fitness function of Eq. (7) is subject to the bounds of the cutting parameters; the ranges of the experimental conditions in Table 1 were considered in this study. Once the optimization problem was formulated, it was solved using a genetic algorithm (GA). The parameters of the genetic algorithm were set as follows: the number of generations was 1880, the population size was 90, the mutation probability was 0.025 and the crossover probability was 0.8. The results show that the best combination of turning parameter values for simultaneously optimizing the performance characteristics of MQL-assisted turning with the proposed fitness function is 210 m/min, 1.5 mm, and 0.224 mm/rev for cutting speed, depth of cut and feed rate, respectively. To verify the optimum cutting conditions, a confirmation experiment at the optimum settings was performed, indicating an optimal machining force of 981 N, a cutting power of 2.87 kW and a cutting pressure of 2437.5 MPa.
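As a quick consistency check on the confirmation experiment, the reported cutting pressure and the optimal parameters imply a main cutting force of about 819 N, which in turn reproduces the reported cutting power when the standard definitions sketched after Eqs. (1)-(3) are assumed.

```python
v, a, f = 210.0, 1.5, 0.224  # optimal parameters: m/min, mm, mm/rev
Ks = 2437.5                  # confirmed cutting pressure, MPa

Fc = Ks * a * f              # implied main cutting force, N  -> ~819 N
Pc = Fc * v / 60000.0        # implied cutting power, kW      -> ~2.87 kW (as reported)
print(Fc, Pc)
```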
In industry, the presented explicit mathematical models and optimization procedures can be integrated into expert systems for sustainable process planning and serve as a basis for establishing smart machining.
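To make the optimisation procedure concrete, the sketch below implements a minimal real-coded genetic algorithm in Python with the GA settings reported above (population 90, 1880 generations, crossover probability 0.8, mutation probability 0.025), minimising a normalised weighted-sum fitness. The regression surrogates for FR, Pc and Ks are hypothetical placeholders (the coefficients of the paper's Eqs. (4)-(6) are not reproduced here), and the parameter bounds and normalisation minima are illustrative stand-ins for the values in Table 1; only the GA settings come from the study.

```python
import random

# Hypothetical stand-ins for the paper's regression models, Eqs. (4)-(6);
# the real coefficients come from the RSM fits and are not reproduced here.
def fr(v, ap, f): return 200.0 + 2000.0 * ap * f - 0.5 * v   # machining force, N
def pc(v, ap, f): return fr(v, ap, f) * v / 60000.0          # cutting power, kW (F*v)
def ks(v, ap, f): return fr(v, ap, f) / (ap * f)             # cutting pressure, N/mm^2

# Illustrative bounds standing in for Table 1:
# (cutting speed m/min, depth of cut mm, feed rate mm/rev)
BOUNDS = [(110.0, 210.0), (0.5, 2.0), (0.1, 0.3)]
W = (1 / 3, 1 / 3, 1 / 3)                    # weights; in the paper these come from PCA
FR_MIN, PC_MIN, KS_MIN = 150.0, 0.3, 900.0   # assumed normalisation minima

def fitness(x):
    v, ap, f = x
    return (W[0] * fr(v, ap, f) / FR_MIN + W[1] * pc(v, ap, f) / PC_MIN
            + W[2] * ks(v, ap, f) / KS_MIN)

def clip(val, lo, hi):
    return max(lo, min(hi, val))

def ga(pop_size=90, generations=1880, p_cross=0.8, p_mut=0.025):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            if random.random() < p_cross:            # arithmetic crossover
                t = random.random()
                child = [t * x + (1 - t) * y for x, y in zip(a, b)]
            else:
                child = a[:]
            for i, (lo, hi) in enumerate(BOUNDS):    # uniform mutation + bound clipping
                if random.random() < p_mut:
                    child[i] = random.uniform(lo, hi)
                child[i] = clip(child[i], lo, hi)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print("v=%.1f m/min, ap=%.2f mm, f=%.3f mm/rev, fitness=%.3f"
      % (best[0], best[1], best[2], fitness(best)))
```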
Prediction of the spatial variability of coal-bearing rocks at the Elginsky coal mine The article presents a brief analysis of the key methods used for spatial modelling of mining and geological indicators describing the composition, structure and state of rock deposits. The main limitations of the analysed methods when applied under real conditions are outlined. It is proposed to overcome these limitations using Markov nonlinear algorithms. By applying the principles of multi-dimensional Markov modelling to a geological object, interval types were determined for modelling the mining and geological parameters of the Elginsky coal mine. As an example, the article presents the results of predicting the ash content for the U5 section of the Elginsky coal mine on the basis of one of the cross-sections of the developed three-dimensional model. Introduction The methods of spatial modelling are widely used in mining and geological research for creating accurate models of the composition, structure and state of rock massifs (RMs). In practice, RMs are spatially represented using graphical images of sections along observation lines. Such sections are constructed using data derived from the plane of the observation lines, such as wells located along exploration lines, seismic profiles, electrical survey lines, etc. In essence, the as-constructed sections comprise 2-dimensional models. The widespread application of modern computing technologies in geological exploration practice has required the development of new methodological approaches to ensure the most efficient use of electronic facilities when modelling RMs. One of the challenges arising in predicting the spatial variability of parameters in coal deposits involves the creation of interpolated 3-D models that can adequately represent the distribution of the main properties of geological objects. This issue is currently addressed using the following two approaches. In the first case, the volume of a geological body is divided into an array of planes, for each of which the observed values are re-calculated into the nodes of a regular grid [1]. The set of planes corresponds to the 3-dimensional distribution of a particular parameter. The other approach involves determining the coefficients of polynomial regression equations in Cartesian coordinates. A significant drawback of both of these approaches is the assumption that the properties of the array under study vary continuously (i.e., that the function is differentiable at all points of the volume). However, it is common knowledge that geological bodies are inherently divided into layers, both stratigraphic (layers of various lithotypes) and tectonic (disturbances). Therefore, the continuity of variations in properties can only be considered a special case. When solving geological problems, one should assume the fundamental nonlinearity of geological properties. A popular mathematical approach that can be used in such cases is Markov nonlinear algorithms [5][6][7][8][9][10]. In this work, we develop an algorithm for Markov N-dimensional modelling on the basis of an approach that constructs a probabilistic image of geological bodies on an N-dimensional grid, rather than assigning values of geological and geophysical indicators to coordinate points of space [5][6][7].
The creation of a probabilistic image is carried out on a training set, in which the space coordinates of points are considered as those of the phase space: a sequence of numerical values presented in the form of a Markov sequence with a strictly defined number and order of steps. For the unambiguous localization of points in the phase space, all the variants of transitions (permutations) of the coordinate axes X, Y, Z and T (transposed vector) are used. The number of such combinations is determined by a formula in which N is the number of variables in the vector and n is the number of initial variables included in the vector. Thus, under 2, 4 and 5 initial variables, the vector will consist of 3, 12 and 20 variables, respectively. Notably, such an increase in the number of vector variables does not affect the size of the matrices of transition probabilities, since the number of states that determines the size of the matrices does not increase. The use of sequences of this type increases the recognition efficiency for nominal variables rather significantly. A successive accumulation of coordinate vectors (sequences of values) produces matrices of accumulated frequencies describing a specific value or a range of values (class) of the geological parameter under modelling. The further transformation of the obtained frequency matrices into transition probability matrices creates a model representing the distribution of the analysed parameter in the coordinates of the N-dimensional space. An essential aspect in the formalization (abstraction) of parameters and variables, for the purpose of using their mathematical properties and relations, is the introduction of a variable (parameter) representing the image of an individual property. The observation channel, through which the ai property is represented by the Vi variable, is implemented by a function that is homomorphic with respect to the assumed properties of the Ai and Vi sets. For some parameters and variables, observation channels comprise explicitly specified ai functions; for others, the function should be specified by the researcher. In other cases, a set of properties can be divided into groups corresponding to subsets bounded by the class boundaries obtained when the parameter scale is divided evenly. Other methods for creating the image of a property are also possible. The approach involving division into classes limits their number by the magnitude of the observation error and by the volume and density of the distribution across the scale of variable states. The next step is to assign a code value to the variable under study in order to indicate both its state and its label. This is achieved by assigning an integer value in the interval (1 ... t) to this variable. The next variable is assigned a value in the interval (t+1, ..., n), etc. Thus, the label is an interval, and the state of the variable is a number within this interval. In comparison with the distinct channels used for observing variables and parameters, the model in this paper uses uncertain (fuzzy) channels, for which an appropriate representing function is introduced. In this case, distinct channels are considered a special case of fuzzy channels. The fundamental difference between distinct and fuzzy observation channels is the following: in the former case, the transition probability is related to the modal value of the class; in the latter, to the boundaries of the class.
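A minimal Python sketch of the accumulation step described above: integer-coded observations along a traverse are treated as a Markov sequence, pairwise transitions are accumulated into a frequency matrix, and each row is normalised into transition probabilities. The coding scheme (classes labelled 1..t for one variable, the next interval for the next variable) follows the text; the function name, sample codes and class count are illustrative, not taken from the paper.

```python
from collections import defaultdict

def transition_matrix(sequence, states):
    """Accumulate pairwise transition frequencies of an integer-coded
    Markov sequence and normalise each row into probabilities."""
    freq = {s: defaultdict(int) for s in states}
    for a, b in zip(sequence, sequence[1:]):
        freq[a][b] += 1
    prob = {}
    for s in states:
        total = sum(freq[s].values())
        prob[s] = {t: (freq[s][t] / total if total else 0.0) for t in states}
    return prob

# Illustrative coding: three lithotype classes labelled 1..3 (a second
# variable would occupy the next interval, e.g. 4..6, as described above).
litho_codes = [1, 1, 2, 3, 2, 2, 1, 3, 3, 2, 1, 2]
P = transition_matrix(litho_codes, states=[1, 2, 3])
for s, row in P.items():
    print(s, {t: round(p, 2) for t, p in row.items()})
```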
The variables and parameters transformed in the described way create the mathematical image of a system, the further study of which is performed using Markov chains of integer values for the variables, with branching values for fuzzy channels. Markov recognition-classification. There is an alphabet A = {ai} and its division into classes B = {bj = {aij}}. It is known that, in the chain C = (c1, c2, ...) of symbols from A, the alternation of classes obeys a 1st-order Markov process with the alternation matrix M = {Pkl}. The task is to classify the elements of C such that the entropy of the chain over M is maximized. The Markov linkage algorithm itself is a kind of dynamic programming algorithm that maximizes the objective function on the set of chains {L = (l1, l2, ..., ln)}, where li is a cluster defined on the set of Markov chains. It is proposed to use three types of intervals for the parameter under modelling:
- an interval represented by one lithotype;
- an interval of rocks with similar physical characteristics;
- a section interval typified on the basis of a set of Markov characteristics (lithological or geophysical).
The recognition is carried out by points in the phase (Cartesian) space. Point coordinates are generated according to the section planes selected by the researcher. The boundary points of the coordinates and the sampling step are set. As a result, a probabilistic estimate for the distribution of the geological and geophysical characteristics of the model on the plane of an RM section, or a sequence of sections in the selected time interval, is generated. The forecast is visualized on a display in two-dimensional coordinates. The described algorithm was tested using prototypes of Markov multifactor forecasting programs on distinct channels when assessing the quality indicators of coals [11][12][13][14][15][16][17]. It should be noted that the proposed technique is characterized by the following important feature. Each layer identified in the interval of a coal seam was considered as an independent geological body. Similarly, the underlying and overlying layers, including coal ones, were considered as independent geological bodies genetically related to the layer under study. Such an assumption is necessary, since thick coal seams of a complex structure exhibit a significant variability in petrophysical characteristics as a result of changing formation conditions. The criterion for dividing the coal seam into layers is the change in ash content, which manifests itself on geophysical diagrams, as well as the presence of interlayers with a thickness above 0.10 m.
Thus, several layers can be distinguished in a geological body, with their number varying depending on the RM area. In this regard, it became necessary to construct 3-dimensional models representing the variability of quality indicators, reflecting both lateral and vertical components. Due to the high information content of such models, their construction requires significant time and effort. A 3-dimensional model was constructed for a section of the U5 seam located in the south-eastern part of the Elginsky coal deposit. The U5 seam, lying at absolute elevations from 40 to 400 m, is characterized by a thickness of 7 to 14 m and an ash content of coal packs from 8 to 38%. For each sub-section of this seam crossed by a well, both within and beyond the boundaries of the selected area, a per-pack forecast of ash content was given in accordance with the method of batch Markovian modelling [5][6][7]. In addition to genetic factors, the model considered the influence of the hypergenesis zone and permafrost. For reservoir intersections, graphs of changes in the ash content along the section were constructed. Figure 1 shows an example of a section of the U5 coal-bearing layer at the Elginsky coal mine, demonstrating changes in the ash content along one of the sections (U44-U44). The changes in the ash content used in the construction were determined in separate layer intersections by laboratory methods. Using a simulation program, a 3-dimensional model of changes in the ash content was calculated. Setting Y = 44, the values of X (across the value range of 90-120) and Z (across the value range of 0-14) were sequentially generated. For convenience, the obtained data are presented as numerical values of ash content. In this figure, coal is depicted in black; coal rocks, in dark grey; other rocks, in light grey. Thus, the fundamental possibility of three-dimensional modelling of RMs using the mathematical apparatus of nonlinear Markov statistics has been demonstrated.
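As a complement, the sketch below makes the classification objective quoted earlier ("the entropy of the chain over M") concrete under one common reading: the entropy rate of a first-order Markov chain with transition matrix M, weighted by its stationary distribution. Whether the original algorithm uses exactly this quantity is an assumption on my part; the matrix values are illustrative.

```python
import numpy as np

def entropy_rate(M):
    """Entropy rate H = -sum_k pi_k sum_l P_kl log2 P_kl of a first-order
    Markov chain, with pi the stationary distribution of transition matrix M."""
    M = np.asarray(M, dtype=float)
    # Stationary distribution: left eigenvector of M for eigenvalue 1.
    vals, vecs = np.linalg.eig(M.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    logs = np.zeros_like(M)
    np.log2(M, out=logs, where=M > 0)        # leave log of zero entries at 0
    return float(-(pi[:, None] * M * logs).sum())

# Illustrative class-alternation matrix for three interval classes.
M = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.4, 0.5]]
print(round(entropy_rate(M), 3))             # entropy in bits per step
```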
Elevated tolerance of both short-term and continuous drought stress during reproductive stages by exogenous application of hydrogen peroxide on soybean The global production of soybean, among other drought-susceptible crops, is reportedly affected by drought periods, putting more pressure on food production worldwide. Drought alters plants' morphology, physiology and biochemistry. As a response to drought, reactive oxygen species (ROS) concentrations are elevated, causing cellular damage. However, lower concentrations of ROS were reported to have an alleviating role through up-regulating various defensive mechanisms on different levels in drought-stressed plants. This experiment was set up in a controlled environment to monitor the effects of exogenous sprays of different (0, 1, 5 and 10 mM) concentrations of H2O2 on two soybean genotypes, i.e., Speeda (drought-tolerant) and Coraline (drought-susceptible), under severe drought stress conditions (induced by polyethylene glycol) during the flowering stage. Furthermore, each treatment was further divided into two groups: the first group was kept under drought, whereas drought was terminated in the second group at the end of the flowering stage and the plants were allowed to recover. After 3 days of application, drought stress significantly decreased chlorophyll-a and chlorophyll-b, total carotenoids, stomatal conductance, both the optimal and actual photochemical efficiency of PSII (Fv/Fm and ΔF/Fm', respectively), relative water content, specific leaf area, shoot length and dry weight, and pod number and fresh weight, but significantly increased the leaf concentrations of both proline and total soluble sugars and the root length, volume and dry weight of both genotypes. The foliar application of 1 mM and 5 mM H2O2 on Speeda and Coraline, respectively, enhanced most of the decreased traits measurably, whereas the 10 mM concentration did not. The group of treatments where drought was maintained after flowering failed to produce pods, regardless of H2O2 application and concentration, and gradually deteriorated and died 16 and 19 days after drought application in Coraline and Speeda, respectively. Overall, Speeda showed better performance under drought conditions. Low concentrations of foliar H2O2 could help the tested soybean genotypes better overcome the influence of severe drought, even during sensitive stages such as flowering. Furthermore, our findings suggest that chlorophyll fluorescence and the cellular content of proline and soluble sugars in the leaves can provide clear information on the influence of both drought imposition and H2O2 application on soybean plants. osmolyte production (e.g., soluble sugars, proline, etc.), which maintains the cellular capacity for water retention through the osmolytes' anti-dehydration characteristics 10. Proline can also protect the enzymatic system during drought, given its protective ability for several enzymes, in addition to its role in redox regulation 11. It was previously reported that proline accumulation under drought stress conditions in soybean was associated with better seed yield 12. However, this defensive system can vary widely among plant species and might differ depending on the species' developmental stage.
In order for these changes to happen, chemical signals are initiated in the root system, including elevated abscisic acid levels, leading to higher levels of reactive oxygen species (ROS) production. If drought continues, excessive ROS production can lead to oxidative stress, which can cause massive damage at the cellular level 13. It was reported earlier that ROS accumulation in the leaves can harm the photosynthetic pigments, leading to rapid leaf senescence 14,15. Low concentrations of ROS, however, were reported to potentially regulate gene expression and the stress-responsive pathways, facilitate certain molecular and physiological alterations, cause a moderate accumulation of ROS which up-regulates the antioxidant system and, hence, partially alleviate the negative influence of several abiotic stresses, including drought 16-20. It was previously reported that the application of methyl viologen 21, melatonin 22, acetic acid 23,24, abscisic acid 25, salicylic acid 26,27 and hydrogen peroxide 28 positively helped in alleviating stress. Hydrogen peroxide is one of the most stable molecules among the ROS naturally found in plant tissues, with several vital co-tasks on the cellular level, including stomatal opening and cell growth and development 29-32. The positive effects of exogenous H2O2 sprays at different concentrations on several plant species are well documented; however, most of these studies focused on the seedlings of these species (e.g., 1.5 mM H2O2 on cucumber seedlings 33, 0.5 mM H2O2 on tomato plants 34, 1 mM H2O2 on soybean 35, 10 mM H2O2 on maize seedlings 36) or on pre-treatment with H2O2 (e.g., 37 on cucumber) rather than on later developmental stages. Moreover, whether an exogenous H2O2 spray can help plants recovering after drought has finished and/or plants suffering from continuous drought is not well documented. It would therefore be of vital importance to address these questions properly. Soybean is reported to be a drought-susceptible crop 38. Its susceptibility differs widely among its varieties and, more importantly, depends on the developmental stage at which drought is imposed 39. For example, it has been reported that drought during the flowering stages 40 and during the following stages 35 massively reduces soybean yield by affecting both pod setting and seed filling 41. Moreover, whether drought continues throughout several stages or occurs only during a certain developmental stage is another important issue to consider, and research is lacking on this particular issue. That said, it would be of considerable importance to understand the response of different soybean genotypes to either continuous or temporal drought at the more sensitive reproductive stages. We hypothesized that low concentrations of H2O2 would have a positive influence on the morpho-physiology and biochemistry of soybean plants that suffer from drought stress during the sensitive flowering stage. We also hypothesized that the response of these drought-stressed plants to H2O2 would differ in case the drought was temporal as compared to continuous drought. This experiment aimed at evaluating the response of two soybean genotypes to short-term and continuous drought stress, in addition to evaluating the effects of exogenous application of H2O2 on the morpho-physiology and biochemistry of the drought-stressed soybeans.
Materials and methods This experiment was conducted using a hydroponic system in the controlled-climate chamber of the Department of Applied Plant Biology, University of Debrecen, in 2022 to investigate the effects of exogenous application of different concentrations of hydrogen peroxide on soybean morpho-physiology and biochemistry under severe drought stress during the flowering (R1 and R2) stages 42. In addition, this experiment aimed at monitoring the recovery path of soybean plants post-drought relative to continuous drought during the reproductive stages (R1 onwards), and whether exogenously applied H2O2 might have a protective role. During the whole experimental period, the day/night temperature was kept at 26/19 °C with 65% relative humidity and a light intensity of 300 µmol m^-2 s^-1 during the light period. In a large field experiment, a total of 25 soybean genotypes were subjected to drought stress during the 2017, 2018 and 2019 cropping years 43. Based on their performance, two genotypes, Coraline (drought-susceptible) and Speeda (drought-tolerant), were chosen for this study. Severe drought stress was applied using polyethylene glycol (PEG 6000) (VWR International bvba, Geldenaaksebaan, Leuven, Belgium) at a concentration of 10% (w/v) (equivalent to an osmotic potential of -0.19 MPa 44), dissolved properly and completely in the nutrient solution of each pot (except the control treatment). PEG is a substance widely used in aqueous media to conduct experiments on the effects of drought stress on plants. It binds water molecules but cannot enter the cells due to its high molecular weight. Drought stress was applied starting from the R1 stage, and then either lifted at the end of the R2 stage or kept in place afterwards. Three H2O2 concentrations (1, 5 and 10 mM) were exogenously sprayed every other day throughout the flowering stages, and a control treatment was alternatively sprayed with distilled water (DW). At the end of the flowering stage, the pots of each genotype were further divided into two groups: the first group was allowed to recover from drought stress by terminating PEG application, whereas the second group was kept under continuous drought stress conditions. Thus, there were 9 treatments for each genotype: 4 treatments sprayed with 0, 1, 5 or 10 mM H2O2 (D, D1, D5 and D10, respectively) under drought stress imposed between the R1 and R2 stages; 4 treatments sprayed with 0, 1, 5 or 10 mM H2O2 (CD, CD1, CD5 and CD10, respectively) under continuous drought stress imposed from the R1 stage onwards; and a control treatment, in which the plants were kept under optimum conditions and sprayed with DW whenever the other treatments were sprayed with any concentration of H2O2.
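A minimal Python sketch of the factorial layout just described: it enumerates the 2 genotypes x 9 treatments = 18 treatment combinations, replicates them over 3 blocks and randomises the pot order within each block, reproducing the 54-pot randomized complete block design reported below. The genotype and treatment labels come from the text; the seeded shuffle is illustrative.

```python
import random

genotypes = ["Coraline", "Speeda"]
treatments = ["Control", "D", "D1", "D5", "D10", "CD", "CD1", "CD5", "CD10"]

random.seed(42)  # illustrative seed for a reproducible layout
layout = []
for block in range(1, 4):                    # 3 replications (blocks)
    combos = [(g, t) for g in genotypes for t in treatments]
    random.shuffle(combos)                   # randomise within each block
    layout += [(block, g, t) for g, t in combos]

print(len(layout))                           # 54 pots in total
for block, g, t in layout[:5]:
    print(f"block {block}: {g} / {t}")
```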
To monitor the response of the different treatments, sampling for the different traits was carried out on 3 different occasions: 3 days after drought stress application (3 days after the beginning of the R1 stage, equivalent to 51 and 59 days after sowing (DAS) in Coraline and Speeda, respectively), 3 days after terminating the drought stress as mentioned earlier (3 days after the end of the R2 stage and the beginning of the R3 stage, equivalent to 64 and 73 DAS in Coraline and Speeda, respectively) and at the end of the experiment (at the end of the R4 stage, equivalent to 89 and 101 DAS in Coraline and Speeda, respectively). At each sampling occasion, the second-most developed leaf was selected. The experiment was set up in a randomized complete block design with 3 replications, so the final pot number was 54 (2 genotypes * 9 treatments * 3 replications). Seeds of both genotypes were surface sterilized using 6% (v/v) H2O2 for 20 min, rinsed extensively with deionized water and germinated geotropically between moistened filter papers at 22 °C. After germination, 10 homogeneous seedlings with good vigor were transferred into 3-L pots, and the number of seedlings was later reduced to 7 homogeneous seedlings per pot. Each pot received 300 ml of a dicot nutrient solution containing the following: 0.7 mM K2SO4, 2.0 mM Ca(NO3)2, 0.1 mM KH2PO4, 0.5 mM MgSO4, 0.5 μM MnSO4, 0.1 mM KCl, 10 μM H3BO3, 0.2 μM CuSO4 and 0.5 μM ZnSO4. Iron was supplied in the form of 10^-4 M Fe-EDTA 45, in addition to the corresponding PEG solution. The nutrient solution was renewed every 3 days. Stomatal conductance (gs) was measured with an AP4 porometer (Delta-T Devices, UK). Chlorophyll fluorescence was measured on dark-adapted leaves (20 min of dark adaptation) by attaching light-exclusion clips to the central region of each leaf. Chlorophyll fluorescence parameters were measured with a portable chlorophyll fluorometer, PAM-2100 (WALZ, Germany), as described by 46. The pigment content of the extract was measured with UV-VIS spectrophotometry (Metertech SP-830 PLUS, Taiwan) at three wavelengths (480, 647 and 664 nm), and chlorophyll-a and chlorophyll-b, in addition to total carotenoid contents, were determined according to 47. The specific leaf area (SLA) was measured as described by 48. Root and shoot dry weights were determined after freeze-drying the samples (Christ Gefriertrocknungsanlagen Freeze Dryer, Type 101,041, Germany). Root and shoot lengths were measured using a standard ruler. Root volume was measured by placing the root in a suitable graded tube containing a known volume of DW and then calculating the increase in the overall volume. The flower number was counted for each plant in each pot at the R2 stage. Pod number and weight were determined by harvesting the pods of 3 plants from each pot. Proline content was determined as described by 49. Total soluble sugar content was determined as described by 50. GenStat 20th edition (VSN International Ltd, UK) software was used to conduct the analysis of variance, followed by Duncan's multiple range test 51 to identify the statistically different treatments. All values are the means of 3 replicates (indicated by columns within each figure) ± standard errors (indicated by vertical whiskers on each respective column). Plant material The collection of plant material complies with relevant institutional, national and international guidelines and legislation.
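The authors used GenStat's analysis of variance followed by Duncan's multiple range test; as an open alternative, the hedged Python sketch below runs a one-way ANOVA with scipy.stats.f_oneway and, since Duncan's test is not available in SciPy, substitutes Tukey's HSD (scipy.stats.tukey_hsd, SciPy >= 1.11) for the post-hoc grouping. The replicate values are invented placeholders, not data from the study.

```python
from scipy import stats

# Invented placeholder replicates (n = 3) for three treatments of one genotype.
control = [62.1, 60.8, 63.0]
d       = [48.3, 46.9, 49.5]   # drought, no spray
d1      = [55.7, 54.2, 56.8]   # drought + 1 mM H2O2

# One-way ANOVA across the treatment groups.
f_stat, p_value = stats.f_oneway(control, d, d1)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparison; Tukey's HSD stands in for Duncan's test here.
if p_value < 0.05:
    res = stats.tukey_hsd(control, d, d1)
    print(res)
```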
Results The group of treatments where drought stress was continuously imposed starting from the R1 stage (i.e., CD, CD1, CD5 and CD10) gradually deteriorated until complete death 16 and 19 days after drought stress application (i.e., 5 and 7 days after the beginning of the R3 stage) in Coraline and Speeda, respectively. Root dry weight The root dry weight of both genotypes was higher in the treatments that were subjected to drought stress and received no foliar spray as compared to the control treatments. The difference was more obvious and distinct in Coraline, where the root dry weight was even significantly higher (by 64.5%, 47.1% and 41.3% after 3 days of drought stress application, 3 days after the beginning of the R3 stage and at the end of the R4 stage, respectively) than in the control treatment (Fig. 1). The application of the H2O2 foliar spray decreased the root dry weight of both genotypes as compared to the non-sprayed counterparts. However, the 10 mM concentration increased the root dry weight of Coraline as compared to both the 1 mM and the 5 mM concentrations, but decreased it in Speeda. At the podding stages, Coraline plants that were sprayed with 10 mM H2O2 had significantly higher root dry weight as compared to the 1 mM or 5 mM H2O2 concentrations; however, the non-sprayed plants still had significantly higher root dry weight. In Speeda, the 1 mM concentration resulted in significantly higher root dry weight as compared to the other concentrations. Similar results were recorded at the end of the podding stage of both genotypes (Fig. 1). Root length After 3 days of drought stress application, the root length of both genotypes was significantly higher in all drought-stressed treatments than in the control treatment. In Coraline, the treatment that was sprayed with 10 mM H2O2 had significantly shorter roots (by 3.5%) compared to the treatment that was not sprayed. On the other hand, Speeda plants that received the 1 mM H2O2 foliar spray had significantly longer roots (by 3.6%) compared to the non-sprayed counterpart. Within the group of treatments that was allowed to recover from drought, Coraline plants that were sprayed with either 1 mM or 5 mM H2O2 and Speeda plants that were sprayed with 1 mM H2O2 had significantly greater root lengths compared to their counterpart treatments that did not receive the foliar spray. Within the group of treatments that was continuously under drought stress conditions, the root length of Coraline plants that were sprayed with either 1 mM or 5 mM H2O2 was significantly higher than in the non-sprayed counterpart, whereas the foliar spray did not enhance this trait in Speeda plants at this point. At the end of the podding stage, the root length of all the treatments that received any concentration of the H2O2 foliar spray was higher than in the control treatments of both genotypes (Fig. 2). After 3 days of drought stress application, the root length was not measurably different between the studied genotypes, regardless of H2O2 treatment and concentration. Later, however, Coraline had longer roots in most treatments, especially within the group of treatments that were allowed to recover from drought. This last observation was more obvious at the end of the podding stage, where the root length of all treatments of Coraline was significantly higher than in the counterpart treatments of Speeda (Fig. 2).
Root volume A significant increase in the root volume of all sprayed plants of both genotypes was recorded 3 days after drought stress application as compared to the control counterparts. In Coraline, H2O2 foliar spray application at any concentration significantly reduced the root volume as compared to the non-sprayed treatment (by 15.5%, 14.2% and 16.4% in the treatments that received 1, 5 and 10 mM, respectively), whereas the 1 mM H2O2 foliar spray significantly increased the root volume of Speeda plants. In Coraline, the root volume was significantly higher in the recovering plants that were sprayed with the 5 mM H2O2 foliar spray compared to the other concentrations, whereas the foliar spray at all concentrations resulted in a significant root volume decrease in the treatments that were continuously subjected to drought. In Speeda, the root volume of the treatment sprayed with 1 mM H2O2 was significantly higher than in the other sprayed treatments of both groups (recovered, unrecovered). A similar conclusion was obtained for the recovered groups of both genotypes at the end of the podding stage (Fig. 3). Shoot dry weight Drought stress significantly decreased the shoot dry weight of both genotypes 3 days after application, and the foliar spray of H2O2, regardless of concentration, did not enhance this trait. After removing the drought, the shoot dry weight of the plants that were sprayed with any concentration of the H2O2 foliar spray was significantly higher compared to the plants that were allowed to recover without receiving any concentration of the foliar spray. Under continuous drought stress conditions, the shoot dry weight of both genotypes was significantly lower than that of the recovering plants. However, the foliar spray of H2O2 at any concentration significantly enhanced the shoot dry weight in Coraline (by 13.6%, 22.4% and 17.7% in the CD1, CD5 and CD10 treatments, respectively, as compared to the CD treatment), but not in Speeda. By the end of the podding stage, the effect of the foliar spray on the shoot dry weight was more measurable in Coraline than in Speeda. After drought application, Coraline plants had higher shoot dry weight; however, Speeda plants showed measurably higher values in most treatments during the following stages (Fig. 4). Shoot length The shoot length of both genotypes significantly decreased as a result of drought stress application. However, the H2O2 foliar spray application, at any concentration in Coraline and at 1 mM in Speeda, could significantly enhance this trait. When the plants of both genotypes were relieved from drought and allowed to recover, the groups of treatments that received any concentration of the H2O2 foliar spray showed significantly higher shoot length compared to the non-sprayed counterparts. Under continuous drought conditions, only the concentrations of 5 mM and 1 mM H2O2 foliar spray resulted in better shoot lengths in Coraline and Speeda, respectively. At the end of the podding stage, all Coraline plants that received the H2O2 foliar spray showed significantly better shoot length (better even than the control), whereas that positive effect was noticed only in the Speeda treatment that received the 1 mM H2O2 foliar spray, where the shoot length was 9.1% higher than that of the non-sprayed treatment (D) and 4.8% higher than that of the control treatment (Fig. 5).
Regardless of drought and H2O2 application, the shoot length of Speeda was significantly higher than that of Coraline throughout the whole experimental period. Specific leaf area Drought stress application resulted in significant reductions in the specific leaf area (SLA) of both genotypes as compared to the control counterparts. However, the application of the H2O2 foliar spray significantly increased the SLA, regardless of its concentration. Compared to the control plants, the application of 5 mM and 10 mM H2O2 foliar spray on Coraline resulted in significantly higher SLA (by 26.7% and 12.3%, respectively) after 3 days of drought application, whereas only the 1 mM H2O2 concentration could enhance this trait in Speeda. Under continuous drought stress conditions, the application of the H2O2 foliar spray could significantly increase the SLA in both genotypes as compared to the non-sprayed counterparts. Recovering Coraline plants sprayed with either 5 mM or 10 mM H2O2 could maintain higher SLA values as compared to the control plants, whereas Speeda plants could not. Similar findings were observed at the end of the podding stage (Fig. 6). At all sampling dates, the SLA of Speeda was higher in the control and in the drought-stressed treatments that did not receive any foliar spray; however, Coraline plants that were sprayed with either 5 mM or 10 mM H2O2 had higher SLA. Optimal photochemical efficiency of PSII Drought stress caused a significant reduction in this trait after 3 days of application in both genotypes (by 26% and 21.1% in Coraline and Speeda, respectively). However, applying the foliar spray at all concentrations on Coraline and at 1 mM and 5 mM on Speeda led to a significant enhancement in Fv/Fm. Compared to the treatments that were subjected to continuous drought, the group of treatments where the drought was relieved was able to maintain higher Fv/Fm values, with and without the application of the H2O2 foliar spray and regardless of its concentration. Fv/Fm was significantly higher (by 8.8%) when the 5 mM H2O2 foliar spray was applied to the recovering Coraline plants after 3 days of recovery compared to the non-sprayed recovering plants, whereas the foliar spray brought no significant enhancement in the recovering Speeda plants. Similar results were recorded at the end of the podding stage, where the Fv/Fm values were not affected by the foliar spray in Speeda plants but were significantly better when 1 mM or 5 mM H2O2 was applied on Coraline plants (Fig. 7). After drought application, Speeda plants had significantly higher Fv/Fm values in all treatments, except for the treatments where 10 mM H2O2 was applied (where the Fv/Fm values of Speeda were still higher, yet not significantly). However, there were no measurable differences between the two genotypes after the recovery process, i.e., Coraline plants were able to retain comparable Fv/Fm values when the plants had the chance to recover, with and without foliar spray application (Fig. 7).
Actual photochemical efficiency of PSII (Yield) The application of drought stress significantly reduced the actual photochemical efficiency of PSII of both genotypes 3 days after its application, regardless of H2O2 application and concentration. However, the exogenous application of H2O2 at any concentration significantly enhanced this trait in both genotypes as compared to the non-sprayed, drought-stressed treatment. After terminating the drought, the treatments that were sprayed with any concentration of H2O2 were still significantly higher in terms of yield as compared to the non-sprayed treatment. The group of treatments that was kept under drought stress conditions had significantly lower values of this trait. Within this group, the plants that received either 1 or 5, but not 10 mM H2O2, had significantly higher values than the treatment that did not receive any foliar H2O2. At the end of the podding stage, the actual photochemical efficiency of PSII of the treatments that received any concentration of H2O2 was still significantly higher than that of the treatment that did not receive the H2O2 spray (Fig. 8). Chlorophyll-a After 3 days of drought stress application, the chlorophyll-a content significantly decreased in both genotypes, regardless of H2O2 application and concentration. In the drought-susceptible genotype Coraline, the treatments that were allowed to recover from drought stress had significantly higher chla content compared to the treatments that suffered from continuous drought stress. Among these treatments, the foliar application of H2O2, regardless of its concentration, led to significantly higher chla compared to the treatment where the plants were allowed to recover without the H2O2 foliar spray. The foliar spray, however, did not have measurable effects on the chla content of the treatments that suffered from continuous drought. The foliar spray had no significant effect on the chla content in the drought-tolerant genotype Speeda; however, the group of treatments that was allowed to recover from drought stress had significantly higher chla content compared to the group that suffered from continuous drought. At the end of the podding stage, the control treatment had significantly higher chla content than the drought-stressed treatments (by 54% and 36% in Coraline and Speeda, respectively), and the foliar spray had no measurable effect (Fig. 9). Under drought stress conditions, the chla content in Speeda was significantly higher than in Coraline at all 3 sampling dates, regardless of H2O2 application and concentration. Chlorophyll-b The application of drought stress resulted in a significant reduction in the chlb content in both genotypes. The foliar H2O2 spray could not alleviate that effect. However, the recovered plants of both genotypes had significantly higher chlb content as compared to the plants where drought was continuous. Moreover, the application of the H2O2 spray on the recovering plants of both genotypes increased the chlb content; that increase was significant when 1 mM or 5 mM of the H2O2 foliar spray was applied. The foliar spray, on the other hand, did not result in significant enhancements in this trait under continuous drought stress conditions. At the end of the podding stage, the chlb content was significantly higher in the Coraline (by 14.7%) and Speeda plants (by 7.9%) that were sprayed with 5 mM and 1 mM of H2O2, respectively, compared to the non-sprayed counterparts (Fig. 10).
Speeda plants had significantly higher chlb content under drought stress conditions at all sampling dates. Total carotenoids A significant decrease in the total carotenoid (chlxc) content was recorded in both genotypes (by 28% and 35.5% in Coraline and Speeda, respectively) as a consequence of drought stress application. Except for the application of 10 mM H2O2 on Coraline plants, the chlxc content was significantly increased by the H2O2 foliar spray in both genotypes. The recovered plants of both genotypes had significantly higher chlxc content compared to the continuously drought-stressed counterparts. In Coraline, the 1 mM H2O2 foliar spray on the recovering plants significantly increased chlxc compared to the non-sprayed recovering plants, whereas the chlxc of the recovering Speeda plants was significantly increased by the application of any concentration of H2O2. On the other hand, the foliar spray had no measurable effects on the chlxc of the plants of either genotype that were not allowed to recover. Interestingly, the chlxc at the end of the podding stage was significantly higher in the Speeda plants that were sprayed with 1 mM H2O2 as compared to the recovering plants that were not sprayed, whereas this trait did not show measurable differences in Coraline in the same period (Fig. 11). Regardless of the drought application and the H2O2 application and concentration, the chlxc content was significantly higher in Speeda than in Coraline plants at all sampling dates. Stomatal conductance The stomatal conductance significantly decreased in both genotypes when subjected to drought stress. The foliar application of H2O2 at any concentration could significantly elevate the stomatal conductance of both Coraline and Speeda plants. The stomatal conductance of both genotypes dramatically degraded when the drought stress was maintained; however, the foliar spray with H2O2 at all concentrations could significantly increase the stomatal conductance of Coraline (by an average of 22%), but not that of Speeda plants. On the other hand, the recovered plants of both genotypes had significantly higher stomatal conductance when the foliar spray was applied at any concentration as compared to the recovering plants that were not sprayed. Interestingly, the Coraline and Speeda plants that were sprayed with 5 mM and 1 mM H2O2, respectively, had significantly higher stomatal conductance values (by 5.3% and 3.7%) compared to the control counterparts that were kept under optimum conditions throughout the whole experimental period (Fig. 12). Speeda plants maintained higher stomatal conductance throughout the experimental period as compared to Coraline plants.
Relative water content The relative water content of all drought-stressed treatments of both genotypes significantly decreased as compared to the control treatment, regardless of H2O2 application and concentration. On the other hand, the exogenous application of either 1 or 5 mM H2O2 on Coraline, and of 1 mM H2O2 on Speeda, significantly enhanced the RWC. After terminating the drought, a very similar result was obtained, where these concentrations helped in elevating the RWC of both genotypes to nearly the same values as those of the control plants. On the other hand, all the treatments where the plants of both genotypes were kept under drought stress conditions had significantly lower RWC, yet all treatments that received any concentration of H2O2 (except for 10 mM on Speeda) were able to keep significantly better RWC as compared to the non-sprayed counterpart. At the end of the podding stage, the RWC of the (D1) and (D5) treatments in Coraline, and of the (D1) treatment in Speeda, was significantly higher than that of the (D) treatment (Table 1). At all sampling dates, the RWC of Speeda was significantly higher than that of Coraline in all drought treatments except for the D5 treatment. Flower number At the end of the flowering stage, the treatments of both genotypes that were subjected to drought stress without any foliar spray produced a significantly lower number of flowers (by 19.7% and 26.9% in Coraline and Speeda, respectively) as compared to the control counterparts. The foliar spray enhanced this trait in Coraline, where both the 1 mM and 5 mM H2O2 concentrations resulted in significantly higher flower numbers as compared to the non-sprayed treatments. However, the foliar spray did not enhance this trait in Speeda; it even decreased the flower number when 5 mM or 10 mM H2O2 was applied (Table 2). Speeda had a significantly higher flower number than Coraline in all treatments except for the treatments that were sprayed with 5 mM H2O2, where the flower numbers were very similar. Pod number The number of pods of both genotypes decreased due to drought stress application; the reduction was more measurable and significant in Speeda (37%). The pod number significantly increased in Coraline when the 1 mM or 5 mM H2O2 foliar spray was applied, and the same result was obtained when the 1 mM H2O2 foliar spray was applied on Speeda (Table 3). The number of pods was higher in Speeda in all treatments as compared to Coraline. Pod fresh weight A significant decrease in the pod fresh weight under drought stress conditions was recorded in both genotypes. However, the application of the 5 mM and 1 mM H2O2 foliar spray on Coraline and Speeda, respectively, led to a significant increase (by 18.1% and 14.1%, respectively) in the pod fresh weight (Table 4). The pod fresh weight of Speeda was significantly higher than that of Coraline, regardless of drought and H2O2 foliar spray application.
Proline content The leaf proline content of both genotypes significantly increased under drought stress conditions. Furthermore, the foliar application of H2O2 at any concentration significantly increased the leaf proline content as compared to the treatment where the drought-stressed plants were sprayed with DW. When the drought was terminated, the proline content measurably decreased in both genotypes, yet it was still higher than in the control treatment. Under continuous drought conditions, proline continued to accumulate, and its levels were significantly higher than those of the drought-relieved counterparts. The foliar H2O2 spray had no measurable effect at this point in either of the two genotypes. The leaf proline content was still higher in the drought-relieved treatments at the end of the podding stage as compared to the control counterparts (Table 5). The leaf proline content was always higher in Speeda than in Coraline, and the differences were significant in all drought-stressed treatments of both groups. Total soluble sugars Drought stress significantly increased the total soluble sugars in the leaves of both genotypes. Compared to the drought-stressed treatment, the application of the H2O2 foliar spray at any concentration in Coraline, and at 1 or 5 mM in Speeda, significantly induced the accumulation of soluble sugars. When the drought was eliminated after the flowering stage, the total soluble sugars in Coraline noticeably decreased and reached levels that were not significantly different from those of the control plants, whereas these levels were still significantly higher in Speeda. Furthermore, the sprayed plants of both genotypes had very similar levels of soluble sugars as compared to the non-sprayed counterparts. On the other hand, the group of treatments under continuous drought accumulated significant levels of soluble sugars as compared to the drought-relieved group of both genotypes. In Coraline, the soluble sugar contents of the treatments sprayed with any concentration of H2O2 were not significantly different from the treatment that was not sprayed, whereas they were in Speeda. At the end of the podding stage, the content of the total soluble sugars was still significantly higher in the treatments that had suffered from drought, regardless of H2O2 application and concentration, as compared to the control treatments that were sprayed with DW (Table 6). Although Speeda plants had higher contents of total soluble sugars, the differences were more pronounced after drought stress application, whereas they were much smaller in the group of treatments that was relieved from drought and at the end of the podding stage. Discussion Osmotic stress can limit energy transport from photosystem II to photosystem I and, in parallel, produce spongy, thin tissues in the leaves, leading to elevated chlorophyll-a fluorescence and, consequently, reduced photosynthetic activity 52. ROS accumulation negatively affects the sensitive chlorophyll molecules 53. In our experiment, both chlorophyll-a and chlorophyll-b of both soybean genotypes significantly decreased under PEG-induced drought stress conditions, indicating damaged photosynthesis machinery 54. A similar conclusion was reported
by 55, who also reported that total carotenoids significantly decreased under drought stress conditions, which was the case in our experiment as well. The exogenous application of either 1 or 5 mM H2O2 could measurably enhance the chlorophyll-a content in the drought-tolerant genotype Speeda, but not in the drought-susceptible genotype Coraline. However, no influence on the chlb content was detected in either genotype. On the other hand, the total carotenoid content in both genotypes was significantly enhanced by the H2O2 application at any concentration (except for 10 mM on Coraline). Low concentrations of H2O2 can induce certain enzymes and/or proteins related to the photosynthesis process 56. The H2O2 foliar spray can protect the chloroplast under drought stress conditions, resulting in enhanced chlorophyll content 37,57. Similar conclusions were also reported on soybean in the case of exogenous melatonin 58 and ethanol 59. Ethanol application can elevate the synthesis and/or reduce the degradation of the photosynthetic pigments 55. The drought-stressed plants of both soybean genotypes in our experiment had significantly higher proline and soluble sugar concentrations 3 days after drought stress application. Proline is an important amino acid that is engaged in many processes on the cellular level 60. Under drought stress conditions, the concentrations of proline and soluble sugars, among other osmolytes, increase without disturbing the usual biochemical activities in the cells 61. Thus, these osmolytes play a defensive role against drought by decreasing the permeability of the cellular membranes, leading to a stabilized water balance 62-68. In their experiment, 58 reported that drought-stressed soybean seedlings had 30, 125 and 334% higher proline concentrations after 5, 10 and 15 days of drought stress application. Similar findings were reported elsewhere 70,71 and on other species like hot pepper 72, barley 73, cotton 74 and rice 75. According to 76, there is another important role of the elevated soluble sugar levels under drought stress conditions; that is, sustaining adequate metabolic C/N ratios. The concentrations of both proline and soluble sugars were measurably higher in Speeda than in Coraline at the 3 sampling dates. It was previously reported that the level of proline accumulation is genotype-dependent and varies among the different stages of soybean development at which the drought stress takes place 12,77, which is also confirmed by our findings, as the concentrations of both proline and soluble sugars differed significantly between the group of treatments of both genotypes that was relieved from drought stress after the flowering stage and the other group, where the drought was continuously kept in place. On the other hand, the exogenous application of H2O2 noticeably increased both proline and soluble sugar concentrations in the drought-stressed plants of both genotypes 3 days after drought stress application. It was reported by 36
that the H2O2 foliar spray resulted in elevated proline and soluble sugar concentrations in drought-stressed maize plants, leading to enhanced drought tolerance. A similar conclusion was also reported when other osmo-regulators, such as ethanol 55, were exogenously applied on soybean plants. Under unfavorable water availability conditions, sustaining the water status within plants is vital to overcome these conditions, and the leaf relative water content is considered one of the most indicative traits for plant drought tolerance 78. In our experiment, a significant reduction in RWC under drought stress conditions was recorded in both soybean genotypes. It is well documented that drought stress results in reduced stomatal conductance through an increased stomatal closure ratio 79 in order to maintain the water content of the drought-stressed plants. In their experiment, 80 concluded that the RWC of both experimented soybean genotypes decreased under drought stress conditions, and 55,81 reported that gs significantly decreased in drought-stressed soybean plants, which was also supported by our results. However, a significant enhancement in gs was recorded when any concentration of H2O2 was exogenously applied on both soybean genotypes. Simultaneously, the RWC was significantly better 3 days after drought stress imposition when 1 or 5 mM H2O2 was applied on Coraline plants and 1 mM H2O2 was applied on Speeda plants.
Table 6. Total soluble sugars (mg g^-1) in the leaves of two soybean genotypes (Coraline and Speeda) at 3 different sampling dates as affected by hydrogen peroxide foliar spray application under drought stress conditions (D: drought from R1 till R2 stage; D1, D5 and D10: the same drought plus 1, 5 or 10 mM hydrogen peroxide, respectively; CD: continuous drought starting from R1 stage; CD1, CD5 and CD10: continuous drought plus 1, 5 or 10 mM hydrogen peroxide, respectively). All values are the means of 3 replicates. In each genotype, different letters indicate significant differences at the .05 level as indicated by Duncan's multiple range test. NA: not applicable.
Figure 1. Root dry weight (g) of two soybean genotypes (Coraline and Speeda) at 3 different sampling dates (A: 3 days after drought stress application at R1 stage; B: 3 days after R3 stage started; C: at the end of R4 stage) as affected by hydrogen peroxide foliar spray application under drought stress conditions (treatment codes as in Table 6). All values are the means of 3 replicates (columns) ± standard errors (vertical whiskers). In each genotype, different letters indicate significant differences at the .05 level as indicated by Duncan's multiple range test.
Figures 2-12 share the layout of Figure 1: sampling dates A-C, treatment codes as in Table 6, means of 3 replicates (columns) ± standard errors (vertical whiskers), and, within each genotype, different letters indicating significant differences at the .05 level (Duncan's multiple range test).
Figure 2. Root length of the two soybean genotypes (Coraline and Speeda).
Figure 3. Root volume of the two soybean genotypes.
Figure 4. Shoot dry weight (g) of the two soybean genotypes.
Figure 5. Shoot length of the two soybean genotypes.
Figure 6. Specific leaf area of the two soybean genotypes.
Figure 7. Optimal photochemical efficiency of PSII (Fv/Fm) of the two soybean genotypes.
Figure 8. Actual photochemical efficiency of PSII of the two soybean genotypes.
Figure 9. Chlorophyll-a content of the two soybean genotypes.
Figure 10. Chlorophyll-b content of the two soybean genotypes.
Figure 11. Total carotenoid content (µg g^-1) of the two soybean genotypes.
Figure 12. Stomatal conductance (mmol m^-2 s^-1) of the two soybean genotypes.
Table 1. Relative water content (%) of two soybean genotypes (Coraline and Speeda) at 3 different sampling dates as affected by hydrogen peroxide foliar spray application under drought stress conditions (treatment codes as in Table 6).
Table 2. Flower number of two soybean genotypes (Coraline and Speeda) at the full bloom (R2) stage as affected by hydrogen peroxide foliar spray application under drought stress conditions (treatment codes as in Table 6). All values are the means of 3 replicates. In each genotype, different letters indicate significant differences at the .05 level as indicated by Duncan's multiple range test.
Table 3. Pod number (plant^-1) of two soybean genotypes (Coraline and Speeda) at the end of R4 stage as affected by hydrogen peroxide foliar spray application under drought stress conditions (treatments D, D1, D5 and D10 as in Table 6). All values are the means of 3 replicates. In each genotype, different letters indicate significant differences at the .05 level as indicated by Duncan's multiple range test.
Table 4. Pod fresh weight (g plant^-1) of two soybean genotypes (Coraline and Speeda) at the end of R4 stage as affected by hydrogen peroxide foliar spray application under drought stress conditions (treatments D, D1, D5 and D10 as in Table 6). All values are the means of 3 replicates. In each genotype, different letters indicate significant differences at the .05 level as indicated by Duncan's multiple range test.
Table 5. Leaf proline content (µg g^-1) of two soybean genotypes (Coraline and Speeda) at 3 different sampling dates as affected by hydrogen peroxide foliar spray application under drought stress conditions (treatment codes as in Table 6).
C. elegans PEZO-1 is a mechanosensitive ion channel involved in food sensation

Millet et al. show that the C. elegans orthologue of the PIEZO family, PEZO-1, is a mechanosensitive ion channel involved in food sensation and in regulating pharyngeal function in the nematode.

Mammalian PIEZO channels have been associated with several hereditary pathophysiologies (Alper, 2017). Piezo1 gain-of-function (GOF) mutations display slow channel inactivation, leading to an increase in cation permeability and subsequent red blood cell dehydration (Albuisson et al., 2013; Bae et al., 2013; Ma et al., 2018; Zarychanski et al., 2012). For instance, the human Piezo1 hereditary mutation R2456H, located in the pore domain, decreases inactivation, while substitution by Lys slows inactivation even further (Bae et al., 2013). Piezo1 global knockouts (KOs) are embryonically lethal in mice (Li et al., 2014; Ranade et al., 2014), and cell-specific KOs result in animals with severe defects (Ma et al., 2018; Wu et al., 2017a). Intriguingly, both Piezo2 KO and GOF mutations are associated with joint contractures, skeletal abnormalities, and alterations in muscle tone (Chesler et al., 2016; Coste et al., 2013; Yamaguchi et al., 2019). GOF and loss-of-function (LOF) mutations are therefore useful genetic tools for determining the contribution of PIEZO channels to mechanosensation in diverse physiological processes and in various animals.

The Caenorhabditis elegans genome encodes an orthologue of the PIEZO channel family, namely pezo-1 (wormbase.org release WS280). Recently, Bai et al. (2020) showed that pezo-1 is expressed in several tissues, including the pharynx. The worm's pharynx is a pumping organ that rhythmically couples muscle contraction and relaxation in a swallowing motion to pass food down to the animal's intestine (Keane and Avery, 2003). This swallowing motion stems from a constant low-frequency pumping, maintained by pharyngeal muscles, and bursts of high-frequency pumping driven by a dedicated pharyngeal nervous system (Avery and Horvitz, 1989; Lee et al., 2017; Raizen et al., 1995; Trojanowski et al., 2016). In mammals, the swallowing reflex is initiated when pressure receptors in the pharynx walls are stimulated by food or liquids, but the identity of the receptors that directly evoke this mechanical response remains to be identified (Tsujimura et al., 2019). Interestingly, the Drosophila melanogaster PIEZO orthologue is a mechanosensitive ion channel (Kim et al., 2012) required for feeding and is also important for avoiding food overconsumption (Min et al., 2021; Wang et al., 2020). To date, whether pezo-1 encodes a mechanosensitive ion channel or regulates worm pharyngeal activity has yet to be determined.

Here, we found strong and diverse expression of the pezo-1 gene in pharyngeal tissues by imaging a pezo-1::GFP transgenic reporter strain. By leveraging genetic dissection, electrophysiological measurements, and behavior analyses, we also established that PEZO-1 is required for proper low-frequency electrical activity and pumping behavior. Analyses of pezo-1 KO and GOF mutants demonstrated that decreasing or increasing PEZO-1 function up-regulates pharyngeal parameters. Likewise, mutants display distinct pharyngeal activities triggered by the neurotransmitter serotonin or by buffers of various osmolarities. Using elongated bacteria as a food source, we demonstrated that pezo-1 KO decreases pharyngeal pumping frequency, whereas a GOF mutant features increased frequency.
Finally, electrophysiological recordings of pezo-1-expressing cells from C. elegans embryo cultures and the Spodoptera frugiperda (Sf9) cell line demonstrate that pezo-1 encodes a mechanosensitive ion channel. Altogether, our results reveal that PEZO-1 is a mechanosensitive ion channel involved in a novel biological function, regulating pharyngeal pumping and food sensation.

RNA isolation and RT-PCR
Cultured worms were washed with M9 buffer (86 mM NaCl, 42 mM Na2HPO4, 22 mM KH2PO4, and 1 mM MgSO4) and collected into 15-ml Falcon tubes. Trizol was added to pelleted worms, and RNA was isolated using the freeze-cracking method as previously described (Van Gilst et al., 2005). Isolated total RNA was purified using the RNAeasy kit (Qiagen). The SuperScript III One-Step RT-PCR system with Platinum Taq DNA Polymerase was used for RT-PCR, following the manufacturer's protocol (Invitrogen). Primers were designed based on the pezo-1 sequences comprising the deletion regions of the knu508 allele of the COP1553 mutant: F3, 5′-GCAACGTCACCAAGAAGAGCAG-3′, and R2, 5′-GCATTCAATAGTCTCGTTGCTG-3′.

Imaging
Worms were individually selected and dropped into 15 µl of M9 buffer and then paralyzed on a glass slide with 2% agarose pads containing 150 mM 2,3-butanedione monoxime. Bright-field and fluorescence imaging were performed on a Zeiss 710 confocal microscope using a 20× or 40× objective. Images were processed using Fiji ImageJ (Schindelin et al., 2012) to enhance contrast and convert to an appropriate format.

Worm synchronization
For all pharyngeal pumping assays, worms were synchronized by placing young adults onto fresh nematode growth media (NGM) plates seeded with OP50 (an Escherichia coli strain) and leaving them to lay eggs for 2 h at 20°C. Adults were removed, and the plates were incubated at 20°C for 3 d.

Pharyngeal pumping
Serotonin profile. A serotonin aliquot (InVivo Biosystems) was diluted in M9 buffer before experiments and discarded after 3 h. Forty-two synchronized worms were picked and transferred to 200 µl of M9 buffer supplemented with 2, 5, 10, or 20 mM serotonin and incubated at 20°C for 30 min before loading onto a microfluidic chip (SC40, The ScreenChip System; InVivo Biosystems).

Control E. coli assay. OP50 was grown in liquid Luria-Bertani (LB) medium under sterile conditions at 37°C and diluted to OD 1.0. Bacterial cultures were stored at 4°C for up to 1 wk.

Spaghetti-like E. coli assay. The day before the experiment, OP50 colonies were picked from a fresh LB plate and incubated in 2 ml of LB overnight. The following day, 0.5 ml of this culture was used to inoculate 1.5 ml of LB medium and incubated until growth was exponential, as verified by optical density (OD 0.5). Cephalexin (Alfa Aesar) was added to a final concentration of 60 µg/ml, and the culture was incubated for 2 h. Spaghetti-like OP50 were verified under a microscope and washed three times with 2 ml of M9 buffer, followed by centrifugation at 400 g to gently pellet the elongated bacteria.

Pharyngeal recordings and analyses
Worms were loaded one by one into the microfluidic chip recording channel and left to adjust for 1 min before recording. All recordings were 2 min long. Records were analyzed using NemAnalysis software (InVivo Biosystems) with the brute force algorithm turned off. Parameters were adjusted for each record to include the maximum number of clearly identifiable pharyngeal pumps.
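The pump-identification step can be illustrated with a simple threshold detector. The sketch below is a minimal Python illustration, not NemAnalysis's actual algorithm; the sampling rate, threshold rule, refractory period, and synthetic trace are all assumptions for demonstration.

```python
# Minimal sketch: detecting pharyngeal pumps in an EPG voltage trace.
# Threshold and refractory period are illustrative assumptions.
import numpy as np

def detect_pumps(trace, fs=1000.0, threshold=None, refractory_s=0.05):
    """Return pump onset times (s) from an EPG trace sampled at fs Hz.

    A pump is counted at each upward crossing of `threshold` (default:
    4 SD above the mean); crossings closer than `refractory_s` to the
    previous pump are ignored.
    """
    trace = np.asarray(trace, dtype=float)
    if threshold is None:
        threshold = trace.mean() + 4.0 * trace.std()
    above = trace > threshold
    onsets = np.flatnonzero(~above[:-1] & above[1:]) + 1  # upward crossings
    times, last = [], -np.inf
    for i in onsets:
        t = i / fs
        if t - last >= refractory_s:   # enforce a refractory period
            times.append(t)
            last = t
    return np.array(times)

# Example on synthetic data: 2 min of noise with pumps at ~2 Hz.
rng = np.random.default_rng(0)
fs, dur = 1000.0, 120.0
trace = rng.normal(0, 0.05, int(fs * dur))
for t in np.arange(0.5, dur, 0.5):                 # one pump every 0.5 s
    trace[int(t * fs):int(t * fs) + 20] += 1.0     # crude E-spike
pumps = detect_pumps(trace, fs)
print(f"{len(pumps)} pumps, mean frequency {len(pumps) / dur:.2f} Hz")
```

From the detected onset times, frequency, pump duration, and interpump interval follow directly (e.g., np.diff(pumps) gives the intervals).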
Results were exported from the software in sheet form, and parameters were plotted and statistically analyzed using Matlab R2019a (MathWorks). The Matlab analysis code is available at https://github.com/JonathanMillet/Pharyngeal_Pumping_Analysis_Script_JM2021.git.

Development assay
Young adults were allowed to lay eggs on NGM plates seeded with control or spaghetti-like bacteria for 2 h. Spaghetti-like bacteria were cultured as described above. Animals (10-20 worms) were removed from plates after 2 h, and the number of eggs laid was counted. After 3 d of incubation, animals that reached adulthood were counted in each trial, and results were compared across four trials.

Food ingestion assay
A drop of fresh culture containing control or spaghetti-like bacteria with 2 µM DiI dye (CAS 41085-99-8; Sigma-Aldrich) was placed on an NGM agar plate. Young adults were fed bacteria with DiI for 30 min. Next, worms were transferred onto OP50-seeded NGM without dye for 5 min (Vidal-Gadea et al., 2012). Finally, animals were placed on a thin-layered 2,3-butanedione monoxime-agarose plate for imaging under a Nikon SMZ18 stereomicroscope. Food occupation in the digestive tract was detected by fluorescence.

Primary culture of C. elegans embryo cells
C. elegans embryonic cells were generated as previously described (Strange et al., 2007). Worms were grown on 10-cm enriched peptone plates with NA22 E. coli. NA22 bacteria grow in very thick layers that provide an abundant food source for large quantities of worms. Synchronized gravid hermaphrodites were bleached to release eggs and washed with sterile egg buffer (118 mM NaCl, 48 mM KCl, 2 mM CaCl2, 2 mM MgCl2, and 25 mM HEPES, pH 7.3, 340 mOsm, adjusted with sucrose). The isolated eggs were separated from debris by centrifugation in a 30% sucrose solution. Chitinase (1 U/ml; Sigma-Aldrich) digestion was performed to remove eggshells. The embryo cells were dissociated by pipetting and filtered through a sterile 5-µm Durapore filter (Millipore). The cells were plated on glass coverslips coated with a peanut lectin solution (0.5 mg/ml; Sigma-Aldrich) and cultured in L15 medium (Gibco) supplemented with 50 U/ml penicillin, 50 µg/ml streptomycin, and 10% FBS (Invitrogen) for 72-96 h.

Expression of pezo-1 in Sf9 insect cells
To express pezo-1 in Sf9 cells, we produced recombinant baculoviruses according to the manufacturer's instructions (Bac-to-Bac expression system; Invitrogen). To generate this baculovirus, we used a pFastBac construct (Epoch Life Science) containing an 8× histidine-maltose binding protein tag and a synthesized pezo-1 isoform G nucleotide sequence (one of the longest isoforms according to RNA sequencing; wormbase.org release WS280). For expression of pezo-1 R2373K, the construct contained an 8× histidine-maltose binding protein tag and a synthesized pezo-1 isoform G with the R2373K point mutation. We infected Sf9 cells with either pezo-1 baculovirus for 48 h. Infected cells were plated on glass coverslips coated with a peanut lectin solution (1 mg/ml; Sigma-Aldrich) for patch-clamp experiments.

Electrophysiology and mechanical stimulation
Primary cultured embryo cells labeled with Ppezo-1::GFP from strains VVR3, VVR69, or VVR70 were recorded in the cell-attached configuration of the patch-clamp technique. Control and infected Sf9 insect cells were recorded in the whole-cell or inside-out patch-clamp configurations.
For on-cell recordings, the bath solution contained 140 mM KCl, 6 mM NaCl, 2 mM CaCl2, 1 mM MgCl2, 10 mM glucose, and 10 mM HEPES, pH 7.4; 340 mOsm, adjusted with sucrose. The pipette solution contained 140 mM NaCl, 6 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 10 mM glucose, and 10 mM HEPES, pH 7.3; 330 mOsm, adjusted with sucrose. Cells were mechanically stimulated with negative pressure applied through the patch pipette using a High-Speed Pressure Clamp (ALA Scientific) controlled with a MultiClamp 700B amplifier through Clampex (Molecular Devices). Cell-attached patches were probed using a square-pulse protocol consisting of −10-mmHg incremental pressure steps, each lasting 1 s, in 10-s intervals. Cells with giga-seals that did not withstand at least six consecutive steps of mechanical stimulation were excluded from analyses. Isteady was defined as the maximal current in the steady state. Deactivation was compared by determining the percentage of Isteady remaining 100 ms after removal of the mechanical stimulus.

For whole-cell recordings, the bath solution contained 140 mM NaCl, 6 mM KCl, 2 mM CaCl2, 1 mM MgCl2, 10 mM glucose, and 10 mM HEPES, pH 7.4. The pipette solution contained 140 mM CsCl, 5 mM EGTA, 1 mM CaCl2, 1 mM MgCl2, and 10 mM HEPES, pH 7.2. For indentation assays, Sf9 cells were mechanically stimulated with a heat-polished blunt glass pipette (3-4 µm) driven by a piezo servo controller (E625; Physik Instrumente). The blunt pipette was mounted on a micromanipulator at an ∼45° angle and positioned 3-4 µm above the cells without indenting them. Displacement measurements were obtained with a square-pulse protocol consisting of 1-µm incremental indentation steps, each lasting 200 ms, with a 2-ms ramp, in 10-s intervals. Recordings with leak currents >200 pA or access resistance >10 MΩ, as well as cells with giga-seals that did not withstand at least five consecutive steps of mechanical stimulation, were excluded from analyses.

For inside-out recordings, symmetrical conditions were established with a solution containing 140 mM CsCl, 5 mM EGTA, 1 mM CaCl2, 1 mM MgCl2, and 10 mM HEPES, pH 7.2, and mechanical stimulation was performed identically to on-cell recordings. Pipettes were made from borosilicate glass (Sutter Instruments) and were fire-polished before use until a resistance between 3 and 4 MΩ was reached. Currents were recorded at a constant voltage (−60 mV, unless otherwise noted), sampled at 20 kHz, and low-pass filtered at 2 kHz using a MultiClamp 700B amplifier and Clampex (Molecular Devices). Leak currents before mechanical stimulation were subtracted offline from the current traces. Data and fits were plotted using OriginPro (OriginLab). Sigmoidal fits were done with the Boltzmann equation,

y = A2 + (A1 − A2) / (1 + e^((X − Xo)/dX)), (Eq. 1)

where A2 = final value, A1 = initial value, Xo = center, and dX = time constant. The time constant of inactivation τ was obtained by fitting a single exponential function,

y = Σi Ai e^(−x/τi) + Ci, (Eq. 2)

between the peak value of the current and the end of the stimulus, where A = amplitude, τ = time constant, and C = the constant y-offset for each component i.

Data and statistical analyses
Data and statistical analyses were performed using DataGraph 4.6.1, Matlab R2019a (MathWorks), and GraphPad Instat 3 software. Statistical methods and sample numbers are detailed in the corresponding figure legends. No technical replicates were included in the analyses.
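As an illustration of Eqs. 1 and 2, the following minimal Python sketch fits both functions with scipy.optimize.curve_fit. The authors fitted their data in OriginPro; the pressure steps, noise level, and parameter values below are synthetic placeholders, not the paper's data.

```python
# Minimal sketch of the Boltzmann (Eq. 1) and exponential (Eq. 2) fits.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a1, a2, x0, dx):
    # Eq. 1: sigmoidal pressure-response; x0 is the half-activation
    # pressure (P1/2) and dx sets the steepness.
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

def exp_decay(t, a, tau, c):
    # Eq. 2 with a single component: exponential decay plus y-offset.
    return a * np.exp(-t / tau) + c

# Synthetic normalized currents at -10 mmHg pressure increments.
pressure = np.arange(-100, 0, 10.0)                       # mmHg
i_norm = boltzmann(pressure, 1.0, 0.0, -59.0, 8.0)
i_norm += np.random.default_rng(1).normal(0, 0.02, i_norm.size)
popt, _ = curve_fit(boltzmann, pressure, i_norm, p0=(1.0, 0.0, -50.0, 10.0))
print(f"fitted P1/2 = {popt[2]:.1f} mmHg")

# Inactivation: fit from the current peak to the end of the stimulus.
t = np.linspace(0, 1.0, 200)                              # s
current = exp_decay(t, -80.0, 0.15, -20.0)                # pA, synthetic
(pa, ptau, pc), _ = curve_fit(exp_decay, t, current, p0=(-50.0, 0.1, 0.0))
print(f"tau = {ptau * 1000:.0f} ms")
```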
Online supplemental material
Fig. S1 shows pezo-1::GFP expression at different developmental stages and in various tissues in C. elegans. Fig. S2 shows the molecular details of the pezo-1 KO strain (COP1553; pezo-1(knu508) IV). Fig. S3 shows the pharyngeal pumping frequencies of WT, pezo-1 KO, and R2373K worms challenged with 2 mM serotonin or fed with control E. coli. Fig. S4 shows the differences between control and spaghetti-like E. coli and their effect on the physiological traits (feeding, development, and pharyngeal pumping frequency) of WT and pezo-1 mutants. Fig. S5 shows the GFP intensity across the worm strains used to generate embryo cultures for patch-clamp experiments. Fig. S6 shows single-channel trace recordings and current-voltage relationships of WT cells expressing pezo-1::GFP in the cell-attached configuration. Fig. S7 shows representative single-channel trace recordings of pressure-evoked currents from pezo-1::GFP cells expressing pezo-1 WT and R2373K. Fig. S8 shows the current densities evoked by displacement of naive and pezo-1-infected Sf9 cells. Fig. S9 shows representative single-channel trace recordings of pressure-evoked currents from Sf9 cells expressing pezo-1 WT and R2373K.

pezo-1 is expressed in a wide variety of cells in the worm pharynx
To determine the expression of pezo-1 in C. elegans, we used a fluorescent translational reporter made by the TransgeneOme Project (Hasse et al., 2016). This fosmid construct contains pezo-1 native cis-regulatory elements, including introns, up to exon 17, and 3′ untranslated region (UTR) sequences linked in-frame to GFP (Fig. 1 A). The position of GFP with respect to the remainder of the gene creates an unnatural truncated version of the PEZO-1 protein. Hence, it likely expresses a nonfunctional protein that excludes 16 exons, which contain most of the pezo-1 sequence (including the pore domain). GFP signals are present in multiple cells at all developmental stages (Fig. S1, A and B). Furthermore, expression does not appear to be mosaic, as similar expression patterns were observed in at least three independent transgenic lines. We imaged pezo-1::GFP worms at various focal planes to identify the different cells expressing GFP based on their morphological features (i.e., cell-body position, neurite extension and position along the body, and branching; Fig. 1, B-G). The strongest GFP signals that we identified came from the pharyngeal gland cells (Fig. 1 B, bright and fluorescence fields). These cells are composed of five cell bodies (two ventral g1s, one dorsal g1, and two ventral g2s) located inside the pharynx terminal bulb and three anterior cytoplasmic projections: two short, superposed projections ending in the metacorpus, and a long projection reaching the end of the pm3 muscle. These cells are proposed to be involved in digestion (Albertson and Thomson, 1976; Ohmachi et al., 1999), lubrication of the pharynx (Smit et al., 2008), generation and molting of the cuticle (Höflich et al., 2004; Singh and Sulston, 1978), and resistance to pathogenic bacteria (Höflich et al., 2004). Additionally, we visualized pezo-1::GFP in a series of cells surrounding the muscle of the corpus and the isthmus (Fig. 1 C; Albertson and Thomson, 1976). We also recognized the arcade cells as putative pezo-1-expressing cells, according to their morphology and location (Fig. 1, C and E). Arcade cells and the pharyngeal epithelium form the buccal cavity and connect the digestive tract to the outside (Altun and Hall, 2009). We also observed many other anterior cells labeled with GFP; however, we cannot currently confirm whether they represent neurons and/or amphids.
By crossing pezo-1::GFP with a tph-1::DsRed marker-carrying strain, we were able to identify pezo-1 expression in the pharyngeal neurosecretory, motor, and sensory (proprioceptive/mechanosensory) neurons (NSM L/R; Fig. 1 G). Importantly, these serotoninergic neurons have been proposed to sense food in the lumen of the pharynx through their proprioceptive-like endings and trigger feeding-related behaviors (i.e., increased pharyngeal pumping, decreased locomotion, and increased egg laying; Albertson and Thomson, 1976; Avery et al., 1993). In addition to the pharyngeal cells, we observed expression of pezo-1 in the ventral nerve cord neurons, according to their morphology and location (Fig. 1 D), and in striated muscles. Importantly, the expression pattern reported by our pezo-1 fosmid construct in NSM neurons matches what is reported in the C. elegans Neuronal Gene Expression Map & Network (CeNGEN; Taylor et al., 2021). The strong and varied pezo-1 expression in the pharynx, along with the function of the cells expressing it, led us to investigate the potential contribution of PEZO-1 to pharyngeal function.

Serotonin stimulation reveals varying pharyngeal pump parameters
To analyze the contribution of pezo-1 to pharyngeal pumping in C. elegans, we used the ScreenChip system (InVivo Biosystems), which can record electropharyngeograms (EPGs; Fig. 2 A; Raizen and Avery, 1994) from single, live worms loaded into a microfluidic chip. Fig. 2, A and B, summarizes the pharynx anatomy, the electrical properties measured during an EPG, and the neurons involved in pharyngeal function. For instance, the excitation event (E spike) precedes the pharyngeal contraction and is modulated by the MC pacemaker neuron (Fig. 2 B, top), whereas the repolarization event (R spike) leads to pharyngeal relaxation and correlates with the activity of the inhibitory M3 motor neurons (Fig. 2 B, middle). After three to four pumps, there is relaxation of the terminal bulb (isthmus peristalsis), which is modulated by the M4 motor neuron (Fig. 2 B, bottom; Avery and Horvitz, 1989). The main EPG events are regulated by the pharyngeal proprioceptive neuron, NSM. Importantly, the proprioceptive NSM neurons and the I3 interneuron express pezo-1 according to our data (Fig. 1 G) and CeNGEN (Taylor et al., 2021), respectively. Analysis of the EPG records allows for the determination of various pharyngeal pumping parameters, including frequency, duration, and the time interval that separates two pumping events (hereafter referred to as the interpump interval). We used serotonin to increase pharyngeal activity, since in the absence of food or serotonin, pumping events are infrequent. Serotonin mimics food stimulation by activating the MC L/R and M3 L/R neurons (Niacaris and Avery, 2003). First, we established a serotonin dose-response profile of the WT (N2) strain's pharyngeal pumping parameters (Fig. 2, C-G). Serotonin increases pharyngeal pumping frequency in a dose-dependent manner, with concentrations >5 mM increasing the likelihood of reaching 5 Hz (Fig. 2 C). We averaged the EPG recordings at each serotonin concentration and found a clear difference in pump duration between 0 and 5 mM. Concentrations ≥5 mM evoked similar pump durations (∼100 ms; Fig. 2 D). Interestingly, analysis of the pump duration distribution profile under serotonin stimulation revealed that pharyngeal pump duration fits into two categories: fast (∼80 ms) and slow (100-120 ms; Fig. 2 E, gray rectangles).
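One way to quantify such a bimodal profile is to bin pump durations at a fixed boundary, as in the minimal Python sketch below; the durations and the 90-ms boundary are illustrative assumptions, not the recorded data.

```python
# Minimal sketch: splitting pump durations into the fast (~80 ms) and
# slow (100-120 ms) categories. All numbers here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic pump durations (ms): a mixture of fast and slow pumps.
durations = np.concatenate([rng.normal(80, 5, 120), rng.normal(110, 8, 180)])

boundary = 90.0                       # ms, assumed split between categories
fast = durations[durations < boundary]
slow = durations[durations >= boundary]
print(f"fast: {fast.size} pumps ({100 * fast.size / durations.size:.0f}%), "
      f"mean {fast.mean():.0f} ms")
print(f"slow: {slow.size} pumps ({100 * slow.size / durations.size:.0f}%), "
      f"mean {slow.mean():.0f} ms")
```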
We observed that the fast and slow categories displayed an inverse relationship with respect to serotonin concentration (Fig. 2 E, arrows). Unlike pump duration, we observed only a single category for interpump interval, ∼95-120 ms, for serotonin concentrations of 5-20 mM (Fig. 2, F and G). Interestingly, we did not observe interpump intervals faster than 90 ms, regardless of the serotonin concentration. The interpump interval results support the idea that there is a minimum refractory period between two pumps. This set of analyses allowed us to establish a suitable model for evaluating the role of pezo-1 function in vivo.

Figure 1. pezo-1 is strongly expressed in the C. elegans pharynx. (A) pezo-1 gene diagram according to wormbase.org release WS280, made with Exon-Intron Graphic Maker (wormweb.org): magenta rectangles and white triangles denote the 5′ and 3′ UTRs, respectively; black rectangles denote exons; black lines denote introns; the green rectangle denotes the GFP sequence inserted after exon 17. (B-F) Bright-field and fluorescence micrographs of the anterior end of young adult pezo-1::GFP hermaphrodites highlighting GFP expression in gland cells, the pm3 muscle, arcade cells, ventral nerve cord neurons, the posterior arcade syncytium, and the pharyngeal sieve, identified by their morphology and location (scale bars, 20-100 µm). (G) Colocalization between tph-1::DsRed2 and the pezo-1::GFP reporter in the NSM neuron (scale bar, 50 µm). Micrographs are representative of ≥20 independent preparations.

PEZO-1 modulates pump duration and interpump interval
To determine whether pezo-1 has a functional role in pharyngeal pumping, we engineered LOF and GOF mutants. A putative LOF mutant was obtained by deleting 6,616 bp from the pezo-1 locus (hereafter referred to as pezo-1 KO; Fig. 3 A, top; and Fig. S2, A-C). This CRISPR KO deleted 638 amino acid residues from PEZO-1 that, according to the cryo-EM structure of the mouse PIEZO1 orthologue (Ge et al., 2015; Guo and MacKinnon, 2017; Saotome et al., 2018; Zhao et al., 2018), eliminates 12 transmembrane segments, 7 extra- and intracellular loops, and the beam helix that runs parallel to the plasma membrane (Fig. S2 D). Previous work demonstrated that the R2456H substitution (located at the pore helix) in the human Piezo1 orthologue increases cation permeability (GOF) and causes hemolytic anemia (Albuisson et al., 2013; Bae et al., 2013; Zarychanski et al., 2012). Moreover, a conservative substitution of Lys for Arg at position 2456 in the human PIEZO1 channel exhibits pronounced slowed inactivation when compared with the WT or R2456H channels (Bae et al., 2013). Hence, we engineered a putative GOF mutant strain, obtained by substituting the conserved Arg 2373 with Lys (hereafter referred to as pezo-1 R2373K or GOF; Fig. 3 A, bottom).
Parenthetically, the R2373K numbering position is based on isoform G, one of the longest isoforms according to RNA sequencing (wormbase.org release WS280). We also included two mutants known to alter pharyngeal function, eat-4(ad572) and avr-15(ad1051), in our analysis. EAT-4 is a glutamate-sodium symporter involved in postsynaptic glutamate reuptake. eat-4(ad572) affects the neurotransmission efficiency of all glutamatergic pharyngeal neurons (Lee et al., 1999). AVR-15 is a glutamate-gated chloride channel expressed in the pm4 and pm5 pharyngeal muscles (both synapsed by M3 L/R) and is involved in relaxation of the pharynx. Its mutant allele ad1051 lengthens pump duration by delaying relaxation of the pharynx, in a similar fashion to laser ablation of M3 L/R neurons (Dent et al., 1997). With these strains, we sought to determine if altering PEZO-1 function would affect the worm's pharyngeal phenotype. At a 2-mM concentration of exogenous serotonin (to elicit pharyngeal activity), both pezo-1 KO and R2373K mutants displayed higher pumping frequencies than WT, albeit not statistically significantly (WT, 1.92 ± 0.11; KO, 2.21 ± 0.09; and GOF, 2.29 ± 0.1 Hz; mean ± SEM; Fig. 3 B), similar to avr-15(ad1051) (2.22 ± 0.09 Hz; mean ± SEM; Fig. 3 B). On the other hand, the eat-4(ad572) mutant displayed lower pumping frequency at this serotonin concentration. To further assess the altered pharyngeal function of pezo-1 mutants, we analyzed the pump duration distributions from the EPG records. The pezo-1 KO distribution is similar to that of the WT (Fig. 3 C, red versus black), whereas the R2373K mutant profile is reminiscent of avr-15(ad1051), as both mutant strains displayed significantly narrower distributions around 100-ms pump events (Fig. 3, C and D, blue and green versus black) when compared with the WT (significance was determined by a Z test). Moreover, the R2373K mutant lacked fast pump events, 50-80 ms (Fig. 3 C), similar to the WT features observed at high serotonin concentrations (≥5 mM; Fig. 2 E) and to the eat-4(ad572) and avr-15(ad1051) mutants at a 2-mM serotonin concentration (Fig. 3 D). Analysis of the distribution of interpump intervals revealed that the pezo-1 KO and R2373K mutants, although different, both spend less time resting between pumps (95-120 ms) than the WT (≈140 ms; Fig. 3 E). This enhancement in function resembles the WT activity measured at 5-20-mM serotonin concentrations (Fig. 2, F and G) and could account for the slight increase in frequency shown in Fig. 3 B. The close resemblance between the pharyngeal pumping parameters of PEZO-1 GOF and the avr-15(ad1051) mutant suggests a potential link between PEZO-1 and pharyngeal relaxation.

PEZO-1 determines pharyngeal pumping in response to osmolarity
Mechanical stimuli come in many forms, including stretching, bending, and osmotic forces (Cox et al., 2019). To further understand the functional role of pezo-1, we evaluated pharyngeal pumping parameters after challenging worm strains with varying osmolarities (in the absence of serotonin or food). The worm's pharynx draws in liquid and suspended bacteria from the environment and then expels the liquid but traps the bacteria (Avery and You, 2012). We adjusted a standard laboratory solution used for worm experiments (M9 buffer, 320 mOsm) to varying osmolarities (150, 260, and 320 mOsm).
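Before turning to the osmolarity results, note that the Z test used above for the pump-duration distributions can be illustrated as a two-proportion comparison of how many pumps fall in the ~100-ms bin. The sketch below is a generic two-proportion Z test on hypothetical counts; it is not the authors' analysis script.

```python
# Minimal sketch of a two-proportion Z test comparing the fraction of
# ~100-ms pump events between two strains. Counts are placeholders.
import math

def two_proportion_z(k1, n1, k2, n2):
    """Z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)                       # pooled estimate
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g., 480 of 900 WT pumps vs 650 of 880 mutant pumps fall in the
# 100-ms duration bin (hypothetical counts).
z = two_proportion_z(480, 900, 650, 880)
p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.2g}")
```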
Parenthetically, M9 buffer elicits acute withdrawal behavior in the absence of food, and another buffer (M13) with lower osmolarity (∼280 mOsm) is commonly used to study molecules that elicit acute avoidance behavior (Hart et al., 2006; Jang and Bargmann, 2013; Caires et al., 2017, 2021; Geron et al., 2018). Because we measured pharynx function in the absence of food for this set of experiments, we refer to the M9 buffer as a "high-osmolarity solution." Low-osmolarity solutions would be equivalent to swallowing liquid containing few solutes (150 mOsm), whereas high osmolarities would represent a "gulp" of liquid with a large amount of solutes (320 mOsm). Of note, higher osmolarities were associated with smaller mean pumping frequencies for WT worms (Fig. 4 A). Our results indicate that a larger number of solutes in solution corresponds to an increased retention time in the pharynx before moving to the intestine of WT worms. Notably, at 260 mOsm, pezo-1 mutants exhibited lower frequency than WT (albeit not statistically significantly for the KO), and at 320 mOsm, both pezo-1 KO and GOF mutants displayed significantly higher pumping frequency than WT worms (Fig. 4 A). In contrast, we did not measure significant differences between WT worms and the pezo-1 mutants at 150 mOsm. Akin to human Piezo2 KO or GOF mutations causing joint contractures (Chesler et al., 2016; Coste et al., 2013; McMillin et al., 2014), we demonstrated that lack of or enhanced PEZO-1 function modulated pharyngeal pumping frequencies similarly (at high osmolarities). Next, we further examined the EPG parameters at high osmolarity (320 mOsm). Analysis of the distribution of pump durations and the mean interpump intervals revealed that both pezo-1 mutants had more frequent fast pumps (80-120 ms; Fig. 4 B), and the KO spent less time resting between pumps, compared with the WT (Fig. 4 C). Interestingly, high osmolarity (320 mOsm) revealed resemblances between the PEZO-1 GOF and avr-15(ad1051) versus the PEZO-1 KO and eat-4(ad572) pharyngeal pumping parameters (frequency and duration; Fig. 4, D and E). Altogether, our results suggest that PEZO-1 is required for fine-tuning pharyngeal function in response to osmolarity changes.

PEZO-1 function is involved in food sensation
To determine the impact that PEZO-1 function has on food intake, we recorded pharyngeal pumping of WT and pezo-1 strains in response to varying food stimuli. It has been hypothesized that the food quality and feeding preferences displayed by worms are linked to bacterial size (Shtonda and Avery, 2006). To this end, we measured worm pharyngeal pumping while feeding them the conventional food used in the laboratory for maintenance (E. coli strain OP50). We found that feeding WT worms E. coli elicited lower pumping frequencies than 2 mM serotonin (Fig. S3; E. coli, 1.36 ± 0.09; and serotonin, 1.92 ± 0.11 Hz; mean ± SEM), whereas pezo-1 mutants displayed similar pumping frequencies with exogenous serotonin or E. coli (Fig. S3). Future experiments are needed to understand why exogenous serotonin stimulation and feeding result in varying pezo-1 influence on pharyngeal function. Additionally, we varied the dimensions of OP50 using cephalexin, an antibiotic that prevents the separation of budding bacteria and generates long spaghetti-like bacterial filaments, as observed under a microscope and elsewhere (Fig. S4 A; Hou et al., 2020; Martinac et al., 1987).
Specifically, cephalexin yields bacteria whose contour length is 5-10 times greater and whose stiffness (i.e., Young's modulus and the bacterial spring constant) is 1.5 times higher than untreated bacteria, as determined by fluorescence imaging and atomic force microscopy (Hou et al., 2020). Hence, feeding worms these two physically different types of food could help to elucidate the physiological role of PEZO-1 in detecting food with varying mechanical properties (i.e., small and soft versus large and rigid food). A similar method was previously described using the antibiotic aztreonam and was shown to affect pharyngeal pumping (Ben Arous et al., 2009; Gruninger et al., 2008). WT and pezo-1 mutants were able to ingest spaghetti-like bacteria and reached adulthood in 3 d, similar to worms fed control bacteria (Fig. S4, B and C). Notably, feeding worms control or spaghetti-like bacteria revealed distinctive pharyngeal traits between the pezo-1 mutants and the WT worms (Fig. S4 D). When fed control E. coli, both pezo-1 mutants (KO and GOF) had higher mean frequencies, shorter mean pump durations, narrower pump duration distributions, and faster mean interpump intervals than the WT worms (Fig. 5, A-E). Conversely, feeding worms spaghetti-like E. coli elicited opposing effects on the pharyngeal pumping parameters of the pezo-1 mutants. For instance, feeding with spaghetti-like E. coli decreased the pezo-1 KO mean frequency, while the mean pump duration and distribution remained similar to WT worms (Fig. 5, A-C). Furthermore, this modified diet significantly increased the mean interpump interval of the KO in comparison to the GOF mutant (Fig. 5 D). Unlike the KO and WT strains, the R2373K pezo-1 mutant displayed high-frequency, shorter pumps (mean and distributions; Fig. 5, A-C) and reduced mean interpump interval durations (mean and distributions; Fig. 5, D and E). Altogether, our results indicate that PEZO-1 regulates the pharynx response to the physical parameters of food, such as the length and stiffness of ingested bacteria.

pezo-1 encodes a mechanosensitive ion channel
The PEZO-1 protein sequence shares 60-70% similarity with mammalian PIEZO channel orthologues. However, whether PEZO-1 responds to mechanical stimuli had not yet been established. To address this major question, we cultured C. elegans cells from three different strains endogenously expressing pezo-1 WT, KO, or the R2373K GOF mutation in the background of the VVR3 strain that expresses a nonfunctional pezo-1::GFP reporter (Fig. 1 A). pezo-1 WT, KO, and GOF strains expressed similar levels of GFP (Fig. S5). Embryonic pezo-1::GFP cells were patch-clamped in the cell-attached configuration, with application of constant negative pressure (−70 mmHg) at different voltages (Fig. 6, A-C). The normalized steady-state current (I/Imax)-versus-voltage relationship is characterized by a reversal potential of +9.06 mV (Fig. 6 D), indicating that PEZO-1 mediates a slightly cation-selective conductance, like the mouse and Drosophila orthologues (Coste, 2012). Importantly, most of the pezo-1 WT cells displayed multiple-channel opening events upon mechanical stimulation. Nevertheless, from the few traces that carry pressure-dependent single-channel opening events, we were able to determine that PEZO-1 displays an outward slope conductance of 46.4 pS and an inward slope conductance of 34.8 pS (Fig. 6, E and F; and Fig. S6). These current magnitudes are similar to the conductance of human PIEZO1 (37.1 pS) reported by others (Gottlieb et al., 2012).
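Slope conductances of this kind come from linear fits of single-channel current-voltage points on either side of the reversal potential. The minimal Python sketch below uses synthetic i-V values chosen to be consistent with the reported numbers; it is an illustration, not the recorded data.

```python
# Minimal sketch: estimating single-channel slope conductance from i-V
# pairs. Synthetic points assume ~34.8 pS inward and ~46.4 pS outward
# slopes with a ~+9.06 mV reversal potential.
import numpy as np

v = np.array([-100, -80, -60, -40, 40, 60, 80, 100], dtype=float)   # mV
i = np.where(v < 0, 0.0348 * (v - 9.06), 0.0464 * (v - 9.06))       # pA

def slope_conductance(v_mv, i_pa):
    # Linear fit i = g*v + b; the slope g is in pA/mV = nS,
    # so multiply by 1000 to express it in pS.
    g, _ = np.polyfit(v_mv, i_pa, 1)
    return g * 1000.0

outward = slope_conductance(v[v > 0], i[v > 0])
inward = slope_conductance(v[v < 0], i[v < 0])
print(f"outward {outward:.1f} pS, inward {inward:.1f} pS")
```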
Mechanical stimulation of embryonic pezo-1::GFP cells expressing WT and GOF PEZO-1, but not the KO, yielded several channel openings upon increasing negative pressure (Fig. 7, A and C). As would be expected with increased activity, it was more difficult to identify single-channel opening events in the traces coming from the GOF, as compared with the WT (Fig. 7 B and Fig. S7). Cells expressing WT PEZO-1 displayed mechanodependent currents with a pressure for half-maximal activation (P1/2) corresponding to −59.1 ± 4.3 mmHg (mean ± SEM; Fig. 7, A and D). Alternatively, PEZO-1 R2373K displayed mechanodependent currents with a significantly lower P1/2 than the WT channel (−39.2 ± 2.2 mmHg; Fig. 7, A and D), indicating that the GOF mutant requires less mechanical stimulation to open. Since we could not reach a saturating stimulus, these P1/2 values might be inaccurate; hence, future experiments are needed to unequivocally determine the differences in sensitivity between the WT and GOF. Notably, the R2373K mutation introduced a latency for activation that was not detected in the WT (Fig. 7 A, blue traces; and Fig. 7 E). The decrease in mechanical threshold, along with the slowed activation, was previously reported for the equivalent human PIEZO1 R2456K mutation in mammalian cell lines (Albuisson et al., 2013; Bae et al., 2013; Romero et al., 2019; Zarychanski et al., 2012). Future experiments are needed to understand the origin of these differences in activation. Unlike pezo-1 WT, ∼50% of the mechanocurrents elicited from the pezo-1 R2373K-expressing cells remained active after removal of the mechanical stimulus (Fig. 7 A, blue traces; Fig. 7 F; and Fig. S7). This slow deactivation is also reminiscent of the human PIEZO1 R2456K GOF phenotype previously characterized by Bae et al. (2013). Overall, our results suggest that PEZO-1 is a mechanosensitive ion channel and that a conserved mutation in the pore domain elicits activation and deactivation changes similar to those of its human counterpart. One caveat of our PEZO-1 electrophysiological characterization in C. elegans cultured cells is that we cannot identify (at this point) which type of cells we are measuring from. We are only confident that those cells express pezo-1, since they are labeled with the nonfunctional pezo-1::GFP reporter. These patch-clamp experiments were blind to genotype, yet we consistently found that pezo-1::GFP-labeled cells coming from KO worms had negligible or no currents under the same voltage and pressure regimes used for pezo-1 WT and GOF cell cultures (Fig. 7 C). Hence, to further validate that the pezo-1 gene encodes a mechanosensitive ion channel, we heterologously expressed one of the longest isoforms of pezo-1 (isoform G; wormbase.org release WS280) in Sf9 cells and measured its function in the whole-cell patch-clamp configuration while stimulating with a piezo-electrically driven glass probe (Fig. 8 A). Similar to mammalian PIEZO channels, PEZO-1 mediates indentation-activated currents (Fig. 8 B). Uninfected Sf9 cells do not display mechanosensitive channel currents (Fig. 8 B and Fig. S8). Importantly, PEZO-1 displayed the properties described for mammalian PIEZOs in other cell types (Coste et al., 2010; Wu et al., 2017b), including voltage-dependent inactivation (Fig. 8, C-F) and nonselective cation currents, as determined by the reversal potential (−1.15 mV; Fig. 8 G). Our results demonstrate that expressing pezo-1 in a naive system was sufficient to confer mechanosensitivity to Sf9 cells.
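The deactivation comparison in Fig. 7 F uses the Methods metric, the percentage of Isteady remaining 100 ms after stimulus removal, which can be computed directly from a trace. The Python sketch below applies it to synthetic WT-like and R2373K-like traces; the amplitudes and time constants are illustrative assumptions.

```python
# Minimal sketch: percentage of steady-state current remaining 100 ms
# after stimulus removal, on synthetic traces.
import numpy as np

def percent_remaining(t, current, t_stim_end, dt_after=0.100):
    """Percent of steady-state current left `dt_after` s past stimulus end.

    Steady-state is taken as the current just before the stimulus ends.
    """
    i_steady = current[np.searchsorted(t, t_stim_end) - 1]
    i_after = current[np.searchsorted(t, t_stim_end + dt_after)]
    return 100.0 * i_after / i_steady

# Synthetic traces: fast-deactivating WT vs slow-deactivating R2373K.
t = np.linspace(0, 1.5, 1500)                       # s; stimulus ends at 1.0 s
wt = np.where(t < 1.0, -100.0, -100.0 * np.exp(-(t - 1.0) / 0.01))
gof = np.where(t < 1.0, -100.0, -100.0 * np.exp(-(t - 1.0) / 0.30))
print(f"WT: {percent_remaining(t, wt, 1.0):.0f}% remaining")
print(f"R2373K: {percent_remaining(t, gof, 1.0):.0f}% remaining")
```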
Under negative pressure, WT and mutant channels displayed steady-state currents, but only the GOF mutant exhibited a pronounced latency for activation and slowed deactivation. With displacement stimulation, it is possible to determine that PEZO-1 GOF inactivates more slowly than the WT. Overall, our results indicate that PIEZO orthologues are functionally conserved.

Discussion
In 2010, Coste and collaborators reported that the C. elegans genome contained a single Piezo gene, pezo-1 (Coste et al., 2010). However, the functional role of pezo-1 remained elusive, even a decade after its discovery. Here, by combining fluorescent reporters, genome editing, EPG recordings, behavior, and patch-clamp measurements, we showed that PEZO-1 is a mechanosensitive channel with a novel functional role in the worm pharynx. We found that pezo-1 is highly expressed in gland cells and the proprioceptive-like NSM neuron, among many other tissues. In addition to its expression, several lines of evidence suggested that PEZO-1 modulates several discrete but reliable features of pharyngeal function. Lack or augmentation of PEZO-1 function decreased interpump intervals when worms were challenged with 2 mM serotonin, and it also increased pumping frequency in high-osmolarity conditions or when feeding with control bacteria. In the absence of functional PEZO-1, worms had reduced pharyngeal function (i.e., low frequency and long pump intervals) when fed spaghetti-like bacteria. Finally, we demonstrated that the pezo-1 gene (WT or GOF) encodes a mechanosensitive ion channel by measuring its native function in C. elegans cells and with heterologous expression in insect cells. Altogether, our results establish that PEZO-1 is important for pharyngeal function, regulation, and food sensation.

C. elegans feeding relies on the ability of the pharynx to contract and relax. The pharynx is a tube of electrically coupled muscle cells that pump continuously throughout the worm's life (Mango, 2007). Several ion channels have been identified as crucial for the pharyngeal muscle action potential, including acetylcholine receptors, T- and L-type Ca2+ channels, glycine receptors, and K+ channels (Avery and You, 2012). Although the pharyngeal muscle is capable of pumping (albeit at low frequencies) without nervous system input, higher pumping frequencies are controlled by pharyngeal motor neurons, namely MC L/R and M3 L/R (Avery and You, 2012). Nevertheless, the role of the nervous system in the control of rhythmic pharyngeal pumping is not completely understood. It is known, however, that the pharynx responds to a variety of neuromodulators (Avery and Horvitz, 1989). We found that pezo-1 is expressed in proprioceptive/mechanosensory NSM L/R neurons (which are important for the pharyngeal nervous system). Moreover, Taylor et al. (2021) reported expression of pezo-1 in the pharyngeal I3 interneuron. Unlike NSM L/R and M3 L/R, the function of I3 has not been established (Avery, 1993; Avery and Thomas, 1997). Our results suggest that PEZO-1 is not essential for pharyngeal muscles, but instead fine-tunes the role of the nervous system in controlling pharynx function. This is reminiscent of the novel role of mammalian PIEZO1 and PIEZO2 in mediating neuronal sensing of blood pressure and the baroreceptor reflex.
Figure 7 legend (continued). Traces are accompanied by all-point amplitude histograms generated from the corresponding records highlighted by arrows in A. (C) Steady-state currents elicited by −100 mmHg of negative pressure in WT, pezo-1 KO, and pezo-1 R2373K cells expressing pezo-1::GFP (mean ± SD; n denoted above the x axis; Kruskal-Wallis and Dunn's multiple comparisons tests). (D) Pressure-response profiles for pezo-1 WT and R2373K currents: normalized currents (I/Imax) elicited by negative pressure, fitted with a Boltzmann function (Eq. 1); the shadows encompassing the curves indicate the 95% confidence bands of the fits (mean ± SD; n = 15 for WT and n = 10 for R2373K). (E) Time to reach half of the steady-state current elicited by −100 mmHg (mean ± SD; unpaired t test with Welch's correction). (F) Percentage of the steady-state current remaining 100 ms after removal of the mechanical stimulus (mean ± SD; unpaired t test). Asterisks (***, P < 0.001 and **, P < 0.01) indicate values that are significantly different; ns, not significantly different.

NSM L/R and M3 L/R (NSM L/R are pezo-1-expressing neurons) have been postulated to sense bacteria in the pharynx lumen, via their proprioceptive endings, and to secrete serotonin in response to this mechanical stimulus (Avery, 1993; Avery and Thomas, 1997). Laser ablation of NSM L/R in unc-29 mutants leads to subtle changes in pharyngeal pumping rate; however, this was done while simultaneously ablating other pharyngeal motor neurons (M1, M2 L/R, M3 L/R, M5, and MI; Avery, 1993). This approach could exert antagonistic effects on pumping rate, yielding a steady pharyngeal activity. Using the ScreenChip system allowed us to reveal the potential roles of extrapharyngeal neurons expressing pezo-1 (NSM L/R). Our results determined that proper function of PEZO-1 lengthened interpump intervals with 2 mM serotonin, in the absence of food. They further demonstrated that PEZO-1 modulated the feeding behavior of worms confronted with food of various mechanical properties (control and spaghetti-like bacteria). This led us to hypothesize that PEZO-1 is involved in food sensation and modulates pharyngeal pumping rate. Hence, like the mammalian orthologue PIEZO2, PEZO-1 is expressed in proprioceptive endings and is involved in stretch reflexes (Chesler et al., 2016; Woo et al., 2015). Nevertheless, it remains to be determined if mammalian PIEZO channels play a role in food sensation and/or the swallowing reflex. Humans sense various organoleptic food qualities, such as visual aspects (color and shape), odorants through smell, and texture and flavor through tasting. In nematodes, there is a lack of understanding of what is sensed as food. Worms are able to filter particles from fluid in a size-dependent manner (Fang-Yen et al., 2009; Kiyama et al., 2012), and feeding is facilitated by attractive odors or suppressed by repellents (e.g., diacetyl, isoamyl alcohol, quinine; Gruninger et al., 2008; Li et al., 2012).
Others have demonstrated that worms prefer to feed from active (i.e., bacteria reproducing rapidly and emitting high levels of CO2) rather than inactive bacteria (Yu et al., 2015). We determined that pezo-1 KO worms "choke" when presented with long and stiff spaghetti-like bacteria, whereas WT and GOF strains increase pharyngeal pumping when ingesting this elongated and rigid food. Therefore, we propose that the pharynx itself might be a sensory organ, as worms modify their pumping parameters when they sense solutions of different osmolarities or food with different textures and/or consistencies. We further hypothesize that worms can perceive changes in texture and adjust their pumping frequency through a mechanism requiring PEZO-1. Since pezo-1 is not essential for C. elegans when cultured in standard laboratory conditions (e.g., monoaxenically on E. coli OP50), we wonder whether this mechanosensitive ion channel plays a crucial role in the worm's natural biotic environment, as it does in humans and Drosophila. Given that worms grow in microbe-rich and heterogeneous environments (feeding from prokaryotes of the genera Acetobacter, Gluconobacter, and Enterobacter, for example; Schulenburg and Félix, 2017), they might encounter bacteria of different dimensions and stiffness that would make pezo-1 function more relevant to the worm's ability to discriminate the food on which it grows best. Why do pezo-1 LOF and GOF mutations cause similar behavioral phenotypes? Our data show that both pezo-1 mutants (KO and GOF) increase the pumping frequency of the pharynx in different settings: serotonin exposure (albeit not statistically significantly), high osmolarity, and ingestion of control bacteria. While it may seem counterintuitive at first, there are scenarios in which too little or too much mechanosensation can be detrimental to animal behavior. In humans, PIEZO2 LOF (premature stop codon) and GOF (missense mutation I802F) alleles cause joint contractures, skeletal abnormalities, and alterations in muscle tone (Chesler et al., 2016; Coste et al., 2013; Yamaguchi et al., 2019). Only when feeding worms spaghetti-like bacteria were we able to uncover differences in the pharyngeal parameters between the LOF and GOF mutants. Hence, we hypothesize that lacking PEZO-1 function significantly slows pharyngeal function when passing the lengthy and rigid bacteria from the pharynx to the gut. Several requirements must be met for a channel to be considered mechanically gated (Arnadóttir and Chalfie, 2010). Accordingly, we found that pezo-1 is expressed in the proprioceptive NSM neuron; knocking out pezo-1 inhibits worm pharyngeal function when fed elongated and stiff bacteria; engineering a single point mutation in the putative pore domain (R2373K) elicited inactivation, activation, and deactivation delays reminiscent of the gating behavior reported for the human PIEZO1 R2456K (Bae et al., 2013); and expression of pezo-1 (WT and GOF) confers mechanosensitivity to otherwise naive Sf9 cells. We propose that PEZO-1 is a mechanosensitive ion channel given that the time it takes to reach half of the steady-state currents ranges between 3.5 and 15 ms upon application of negative pressure. These are faster than activation times reported for the Drosophila phototransduction cascade, one of the most rapid second-messenger cascades (Hardie, 2001). These combined efforts highlight the versatile functions of the PIEZO mechanosensitive channel family, as well as the strength of the C.
elegans model organism to reveal physiological functions. Our findings revealing PEZO-1 as a mechanosensitive ion channel that modulates pharyngeal function raise several important questions. How does pezo-1 modulate pumping behavior and electrical activity? Does pezo-1 equally enhance or inhibit the function of the pharyngeal hypodermal, gland, and muscle cells, as well as the neurons, expressing this channel? Could pezo-1 phenotypes be exacerbated if the gene function is nulled in a cell-specific manner? Does the slow deactivation and/or inactivation of the GOF mutant, determined at the patch-clamp level, account for the enhancement in pharyngeal function when worms are fed with bacteria? Does PEZO-1 require auxiliary subunits and/or the cytoskeleton for gating? Regardless of the answers, the plethora of physiological roles that this eukaryotic family of mechanosensitive ion channels plays is outstanding. More experimental insight will be needed to grasp the full implications of pezo-1 in the physiology of C. elegans.

Data availability
Data supporting the findings of this paper are available from the corresponding author upon reasonable request. The source data underlying the figures and supplementary figures are provided as a Source Data file, doi: https://doi.org/10.6084/m9.figshare.16992058.v3.

Figure S2. pezo-1 KO validation. (A) pezo-1 gene diagram according to wormbase.org release WS280, made with Exon-Intron Graphic Maker (wormweb.org). Magenta rectangles and white triangles denote the 5′ and 3′ UTRs, respectively; black rectangles denote exons; black lines denote introns; the red bracket denotes the knu508 allele (a 6,616-bp deletion) of the pezo-1 KO strain; magenta arrows labeled F1, F2, and R1 denote the positions of the oligonucleotides used for PCR amplification; and blue arrows F3 and R2 denote the positions of the oligonucleotides used for RT-PCR amplification. (B) Agarose gel electrophoresis (1% agarose) of PCR-amplified products using the F1/R1 and F2/R1 primer sets. Lane M, 1 kb Plus DNA (SM1331/2; Thermo Fisher Scientific) size marker. WT (N2) and KO (COP1553) refer to the worm strains used to extract genomic DNA. (C) Agarose gel electrophoresis (1% agarose) of RT-PCR-amplified products using the F3/R2 primer set. Lane M, 1 kb Plus DNA (SM1331/2; Thermo Fisher Scientific) size marker. WT (N2) and KO (COP1553) refer to the worm strains used to extract total RNA. (D) Ribbon representation of the Mus musculus PIEZO1 monomer (PDB ID: 5Z10; gray) highlighting the PEZO-1-corresponding residues (red) that were knocked out using CRISPR to generate the knu508 allele. The PEZO-1 monomer ribbon diagram was made with UCSF Chimera v1.9.

Figure S3. Comparison of pumping frequencies between pezo-1 strains elicited by serotonin or bacteria. Pharyngeal pumping frequencies depicted as violin plots with the means shown as horizontal bars, for WT (N2), pezo-1 KO, and pezo-1 R2373K strains at 2 mM serotonin or when fed with control E. coli. n is denoted above the x axis. Mann-Whitney test. Asterisks indicate values that are significantly different (***, P < 0.001), and ns indicates not significantly different.

Figure S4 (partial legend). WT (N2), pezo-1 KO, and pezo-1 R2373K adult proportion after 3 d of seeding eggs on NGM plates with control or spaghetti-like bacteria, as determined by worm images. Animals that reached adulthood were counted in each trial, and results were compared across four trials. n is denoted inside the bars. (D) Pharyngeal pumping frequencies depicted as violin plots with the means shown as horizontal bars, for WT (N2), pezo-1 KO, and pezo-1 R2373K strains when fed with control or cephalexin-treated E. coli (spaghetti-like bacteria). n is denoted above the x axis. Mann-Whitney test. Asterisks indicate values that are significantly different (***, P < 0.001 and **, P < 0.01), and ns indicates not significantly different.
Nanodiamond-Carbon Black Hybrid Filler System for Demanding Applications of Natural Rubber-Butadiene Rubber Composite

The objective of the study was to investigate the effect of the partial replacement of carbon black (CB) by nanodiamonds (NDs) on the vulcanization, mechanical, and dynamic properties of a natural rubber-butadiene rubber compound, a typical elastomer compound found in several applications (the tire and mining industries, for example). The studied hybrid filler system resulted in a 28% increase in tensile strength and a 29% increase in 300% modulus at low ND loadings, even though the total weight fraction of the filler system was kept constant at 25 parts per hundred rubber. The hybrid filler system improved the dispersion of both fillers, as shown by scanning electron microscopy and a Payne effect study. In addition, the replacement of 2.5 and 5 phr CB by NDs resulted in a 62% improvement in wear resistance. The DMA study showed that a certain ND-CB filler combination has a positive effect on tire properties such as wet grip and rolling resistance.

Introduction

A rubber compound consists of several components. Among the most important additives are fillers, which are added for several purposes, such as to enhance the mechanical properties of the rubber, reduce costs, or improve a specific property, e.g., electrical conductivity. Carbon black (CB) has been the main reinforcing filler in rubbers for over a century. The high reinforcing ability of CB is related to the small particle and aggregate size and the high structure (shape and level of branching of the aggregates) of the filler, resulting in a large surface area available to interact with polymer chains and create chemical and physical bonds with the rubber [1]. However, CB has a negative influence on dynamic properties compared to silica, the second traditional reinforcing filler, which is currently used, for example, in tire treads. This, together with the high carbon footprint of the CB manufacturing process, increases the demand for new filler solutions [2].

Nanosized fillers, such as carbon nanotubes (CNTs) and layered silicates, have been widely studied during the last decades [3][4][5][6][7][8][9][10][11]. They are promising fillers due to their small particle size and high surface area, and thus they have, in theory, a high reinforcing potential already at low filler concentrations. Practice has shown that mixing these nanofillers into rubber is challenging due to their high surface energy and tendency to form large agglomerates. Several publications have claimed that solution mixing is required for a good dispersion of CNTs [3,12,13], but it is not yet an industrially viable method. Instead, CB-nanofiller hybrid filler systems have been found to lead to improved filler dispersion and improved mechanical properties [14][15][16]. For example, by combining up to 5 parts per hundred rubber (phr) of CNTs with CB, a good dispersion of CNTs is achieved in a conventional rubber mixing process [14,16]. In addition, the hybrid filler system improves the CB dispersion and results in improved mechanical properties [17,18], leaving, however, the dynamic properties still compromised [14]. Furthermore, natural rubber latex containing a filler combination comprising CNTs, graphite oxide, and CB was found to have better fatigue and crack growth resistance and mechanical properties than the single CB filler system [19]. This was also related to the improved dispersion and a synergistic effect.
Similar conclusions were made in a study where a graphene nanoplatelet and CB hybrid filler system was investigated in ethylene propylene diene rubber [20]. The partial replacement of CB by graphene nanoplatelets was reported to reduce CB aggregation. Moreover, a surface-modified expanded graphite was reported to improve the mechanical properties of a CB-filled NR composite [21], as was an intercalated graphene-CB hybrid filler system in styrene-butadiene rubber (SBR) [22]. A less studied nanofiller type, nanodiamonds (NDs), is considered a non-reinforcing filler, but together with CB it could result in good dynamic properties, as NDs have been found to decrease the losses of rubbers [23], including mechanical hysteresis [24]. Such an effect may be related to reduced internal friction due to the dry lubricating behavior of NDs, which facilitates the orientation of macromolecules under applied stress [23,25]. Single detonation-produced NDs are 4-6 nm spherical particles consisting of an sp³-carbon core, a thin sp²-carbon transition layer, and an active surface mostly containing carboxyls, hydroxyls, lactones, ketones, and ethers [26,27]. NDs are known to exhibit a synergistic effect with other carbon-based fillers in SBR [28] and epoxy, and the addition of NDs to some rubbers has been shown to improve, e.g., elongation at break [29,30]. A CNT-ND hybrid filler system improved the dispersion of individual filler particles but had a negative influence on mechanical properties compared to CNT-filled SBR [28].

In this study, CB is partially replaced with a small amount of NDs in a natural rubber (NR)-butadiene rubber (BR) compound. Up to 5 phr of NDs is used, as earlier studies report a percolation threshold below 4 phr [29]. The main aim of the study is to increase the polymer-filler interaction and the dispersion of the fillers and thus improve the mechanical and dynamic properties of the rubber. Carboxylated NDs contain a high amount of surface carboxyl groups, as well as some lactones, phenols, and anhydrides, groups that match the functional groups of CB [31]. The presence of these polar groups is expected to have limited interaction with non-polar elastomers but to contribute to the possible dry lubricating action of NDs. The effect of ND-CB combinations on the filler dispersion, vulcanization, and mechanical and dynamic properties of the NR-BR compound is investigated.

Materials and Methods

The studied NDs were carboxylated NDs (uDiamond® Vox P) from Carbodeon Oy, Vantaa, Finland. According to the manufacturer, the diameter of the NDs varied between 2 and 6 nm. The other materials used in the compound and the formulation are presented in Table 1. The elastomers and other ingredients were mixed in a Brabender 350E mixer (Brabender GmbH, Duisburg, Germany) for six minutes. The mixing sequence is presented in Table 1. The starting temperature was 50 °C and the rotor speed was 60 rpm. The reference compound (ND0) contained only CB as filler, at a concentration of 25 phr. In the other compounds, 0.5, 1, 2.5, or 5 phr of CB was replaced by the same weight amount of NDs. The names of the compounds reflect the ND loading. Curing characteristics were measured with an Advanced Polymer Analyzer, APA 2000, from Alpha Technologies (Hudson, OH, USA). The measurements were carried out at 150 °C for 25 min. The compounds were then vulcanized to their t'90 value, i.e., the time required to reach 90% of the maximum rheometric torque.
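Since each compound was cured to its t'90, it may be useful to see how that point is read off a rheometer trace. The sketch below is illustrative only (the trace is synthetic, not data from this study): it locates the time at which the torque first reaches M_L + 0.9·(M_H − M_L), with M_L and M_H the minimum and maximum torque.

```python
import numpy as np

def t90(time_min, torque):
    """Time at which torque first reaches M_L + 0.9 * (M_H - M_L),
    where M_L and M_H are the minimum and maximum rheometric torque."""
    m_low, m_high = torque.min(), torque.max()
    target = m_low + 0.9 * (m_high - m_low)
    hits = np.nonzero(torque >= target)[0]
    if hits.size == 0:
        raise ValueError("torque never reaches the 90% target")
    i = hits[0]
    if i == 0:
        return float(time_min[0])
    # linear interpolation between the two bracketing samples
    f = (target - torque[i - 1]) / (torque[i] - torque[i - 1])
    return float(time_min[i - 1] + f * (time_min[i] - time_min[i - 1]))

# Synthetic 25-min cure trace at 150 C (illustrative values only)
t = np.linspace(0.0, 25.0, 251)               # time, min
s = 2.0 + 10.0 * (1.0 - np.exp(-0.4 * t))     # torque, dN*m
print(f"t'90 = {t90(t, s):.2f} min")          # ~5.8 min for this trace
```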
The apparent crosslink densities were studied by a swelling test. Approximately 0.4 g pieces of vulcanized rubber were soaked in toluene for 72 h. The apparent crosslink density was expressed as the reciprocal of the swelling coefficient, 1/Q, with

Q = ((w_t − w_0)/w_0) · (ρ_2/ρ_1),

where w_0 is the initial weight of the sample, w_t is the weight at time t, and ρ_1 and ρ_2 are the densities of the solvent and the unswollen vulcanizate, respectively [32]. Three measurements per compound were made, and the averages and standard deviations were calculated.

Bound rubber (BDR) measurements were performed by dissolving 0.2 g of uncured rubber in toluene for 96 h. The toluene was exchanged every 24 h. After immersion, the samples were dried and weighed. The BDR content, i.e., the fraction of rubber left undissolved after extraction, was calculated as

BDR (%) = 100 · [m_0 − (m_1 − m_2)]/m_0, with m_0 = m_s · 100/cpd,

where m_0 is the rubber content in the sample, m_1 is the combined weight of the bag and the sample before extraction, m_2 is the weight of the dried bag and the sample after extraction, m_s is the weight of the dry sample, and cpd is the total amount of rubber and filler in the compound in phr [33]. Three measurements per compound were made, and the averages and standard deviations were calculated.

The Payne effect was studied with a dynamic mechanical analyzer (DMA, DMA/SDTA861e from Mettler Toledo, Columbus, OH, USA). The measurements were conducted on a circular sample with a diameter of 6 mm in a strain sweep from 0.5 to 1200 µm at a frequency of 1 Hz at room temperature. The state of dispersion of the CB and ND particles in the compounds was investigated by scanning electron microscopy (Zeiss ULTRAplus, Oberkochen, Germany). The fracture surfaces of the tensile test specimens were coated with a thin carbon layer to ensure the conductivity of the samples.

Tensile tests were carried out on 5 specimens per rubber compound with an Instron 5967 universal tester (Instron, Darmstadt, Germany) according to ISO 37, sample type 3 [34]. Tear resistance was tested with the same equipment according to ISO 34 with trouser-type specimens (3 specimens per rubber compound) [35]. The crosshead speeds for the tensile and tear tests were 200 mm/min and 100 mm/min, respectively. Wear resistance was determined with a CETR UMT-2 pin-on-disk tester (Bruker Co, Billerica, MA, USA). Rubber samples with a diameter of 10 mm were rotated on a smooth AISI304 steel plate for 60 min with a force of 40 N. The rotation speed was 60 rpm. The change in mass was measured, and the mass loss was converted to volume loss using the density of the compounds for more comparable results. The density of the compounds was determined with a Wallace X21B electronic densimeter (H.W. Wallace & Co Ltd., Dorking, Surrey, England) by weighing the samples in air and in water. Shore A hardness was determined with a Bareis hardness tester (Bareis Prüfgerätebau, Oberdischingen, Germany) according to ASTM D 2240 [36]. The Shore A value was recorded immediately after the durometer was applied. Five measurements were performed for each sample. Dynamic properties were studied with a Pyris Diamond DMA from PerkinElmer Instruments (Waltham, MA, USA), operating in tension mode. The measurements were made from −80 to +80 °C with a heating rate of 3 K/min, an amplitude of 40 µm, and a frequency of 1 Hz.

The standard deviation of the results was determined as

s = sqrt( Σ (x_i − x̄)² / (n − 1) ),

where x_i is a measured value, x̄ is the average, and n is the number of measurements.
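As a minimal illustration of the quantities above, the sketch below applies the swelling coefficient and bound rubber expressions in the forms reconstructed here, together with the sample standard deviation; all numeric inputs are invented and do not come from the study.

```python
import math

def swelling_coefficient(w0, wt, rho_solvent, rho_rubber):
    """Swelling coefficient Q; the apparent crosslink density is reported as 1/Q."""
    return ((wt - w0) / w0) * (rho_rubber / rho_solvent)

def bound_rubber_pct(m_s, m1, m2, cpd):
    """Bound rubber (%): rubber left after extraction relative to total rubber.
    m1 - m2 is the mass dissolved; m0 = m_s * 100 / cpd is the rubber in the sample."""
    m0 = m_s * 100.0 / cpd
    return 100.0 * (m0 - (m1 - m2)) / m0

def sample_sd(values):
    """s = sqrt(sum((x_i - mean)^2) / (n - 1))."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))

# Invented inputs: toluene density 0.866 g/cm3, compound density 1.02 g/cm3
q = swelling_coefficient(w0=0.40, wt=1.55, rho_solvent=0.866, rho_rubber=1.02)
print(f"1/Q = {1.0 / q:.3f}")
# cpd = 125 phr assumed (100 phr rubber + 25 phr filler), 0.2 g sample
print(f"BDR = {bound_rubber_pct(m_s=0.20, m1=5.30, m2=5.25, cpd=125.0):.1f} %")
print(f"s = {sample_sd([20.1, 21.4, 19.8]):.2f}")
```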
Results and Discussion

The replacement of small amounts of CB by NDs could be expected to shorten the vulcanization time owing to the high thermal conductivity of NDs [37,38] and thus to more efficient thermal energy transfer and a faster temperature increase in the rubber. This kind of behavior has been observed for CNTs [39] and for CNT-CB filler combinations [14], as well as in earlier ND studies [40]. However, the effect of NDs was the opposite in the present case. Figure 1 and Table S1 (Supporting Information) show that both the scorch time and the vulcanization time increase when the concentration of NDs increases. The carboxyl functional groups are known to adsorb the accelerator [41]. Further, NDs decrease the maximum torque values (Table S1). Although NDs have functional groups on the surface similar to those of CB, the surface of NDs is more polar. This has a negative influence on the polymer-filler interaction and, together with the adsorption of curatives, affects the maximum torque. This indicates a lower reinforcing ability of NDs.

Although the rheometric torque indicated lower networking for the compounds containing NDs, the apparent crosslink density determined by the swelling study increases marginally when a part of the CB is replaced by NDs, as seen in Table 2. Although the swelling study is commonly accepted as a qualitative method for crosslink density, the filler volume and filler dispersion influence the swelling results [42], as the swelling behavior of bound rubber and occluded rubber differs from that of unfilled rubber. Replacing CB by the same weight amount of NDs decreases the total volume fraction of fillers, as NDs have a higher particle density than CB [43,44], but probably increases the total interfacial area through the improved CB dispersion, which leads to the increased apparent crosslink density determined by the swelling study.

The polymer-filler interaction was studied via the bound rubber content. The bound rubber describes the polymer fraction adhered to the filler by strong covalent bonds or by weak physical bonds. The bound rubber decreases the mobility of the polymer chains and makes the adhered polymer insoluble in solvents, thus reinforcing the polymer. The higher the bound rubber content, the better the polymer-filler interaction. Table 2 shows that the bound rubber content decreases slightly at 0.5 and 1 phr ND loadings but returns to the same level as the CB-only compound at higher ND loadings. However, on the basis of the standard deviations, direct conclusions are impossible to make. In this study, the physical and chemical bonds were not separated; thus, the bound rubber results also contain the physically bonded or mechanically trapped rubber, i.e., occluded rubber. Due to the polar functional groups on the surface of NDs, covalent bonds with non-polar NR and BR are not likely to occur [24,45], and the rubber chains could slide along the surface of NDs, showing some plasticizing effect. Thus, a decrease in bound rubber content is natural, especially if the ND particles are well dispersed and distributed. CB exists in aggregates, and polymer chains can be trapped in the voids of these aggregates. In well dispersed NDs, these voids do not exist, and the amount of occluded rubber is reduced.
The state of dispersion was analyzed by the Payne effect and SEM. The Payne effect is determined as the difference between the complex shear modulus at low strain and at high strain. At low strain, the filler-filler interaction is strong, and the fillers stay in agglomerated form. When the strain is increased, the filler particles are pulled apart, the filler agglomerates are broken into smaller aggregates, and the modulus of the rubber decreases. Hence, the Payne effect describes the filler-filler interaction. The higher the Payne effect, the higher the filler-filler interaction and, in the case of nanofillers, the more likely the formation of a three-dimensional filler network instead of an aggregated filler structure [46]. Table 2 shows that the Payne effect increases after the addition of NDs. The change can partly be explained by the increased total surface area of the fillers, but NDs may also suppress the interfacial interaction of CB particles by locating between them. This improves the dispersion of CB and enables the formation of a filler network. At 5 phr ND loading, the Payne effect decreases again, indicating worse filler dispersion.
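As a small numerical illustration of the Payne effect defined above, the sketch below interpolates the complex shear modulus at a low and a high strain from an invented sweep; the study itself swept displacement amplitude (0.5-1200 µm), whereas strain percentages are assumed here for simplicity.

```python
import numpy as np

def payne_effect(strain, g_star, low=0.5, high=100.0):
    """Drop in complex shear modulus between a low and a high strain amplitude,
    interpolated from strain-sweep data (strain must be increasing)."""
    return np.interp(low, strain, g_star) - np.interp(high, strain, g_star)

# Invented sweep: modulus falls as the filler-filler network breaks down
strain = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])     # %
g_star = np.array([2.40, 2.31, 2.20, 2.02, 1.83, 1.62, 1.41, 1.30])  # MPa
print(f"Payne effect = {payne_effect(strain, g_star):.2f} MPa")
```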
The SEM images (Figure 2) show some differences in filler dispersion and distribution between the compounds, although ND particles cannot be distinguished separately. The reference compound (Figure 2a), containing only CB as filler, has well dispersed areas but also unfilled and agglomerated areas. In addition, the compound containing 0.5 phr NDs (Figure 2b) has an irregular surface caused by large agglomerates, which is evidence of insufficient dispersion. Figure 2c,d shows that the filler dispersion is improved. The surface is rather smooth, large filler agglomerates do not exist (except for the two filler agglomerates in Figure 2d), and the fillers seem to be more evenly distributed than in the reference compound. The improved dispersion explains the earlier Payne effect results. At 5 phr ND concentration, the filler dispersion becomes poorer again and large agglomerates are found (Figure 2e), which is the reason for the decreased filler-filler interaction observed from the Payne effect.

Although clear evidence of improved polymer-filler interaction was not obtained from the bound rubber results, the tensile strength of the rubber compound is improved remarkably after the addition of NDs, as seen in Figure 3. This is further evidence of the improved dispersion. The benefits of the hybrid filler system are partly lost at higher ND concentrations (>2.5 phr) due to the worse filler dispersion and decreased polymer-filler interaction, but the tensile strength is still at a higher level than without NDs. Furthermore, the standard deviation is the highest for the compounds containing 0 or 5 phr NDs, which indicates inhomogeneous compounds due to the poorer filler dispersion. Hence, the tensile strength results support the SEM analysis. The tensile modulus at 300% elongation (M300) increases first but decreases when the amount of NDs is increased. Moreover, a minor decrease in elongation at break is observed after the addition of NDs for the compounds with the highest M300, but the changes are within the error bars. The decreased elongation at break is commonly attributed to the restricted mobility of the polymer chains.

In the tear resistance results, the trend is the opposite (Figure 3d). The tear resistance decreases when 0.5-2.5 phr CB is replaced by NDs but increases again at 5 phr NDs. This is related to the plasticizing effect of NDs and the filler dispersion. The poor interface between NDs and rubber, as well as the reduced mechanical trapping of polymer chains, weakens the tear resistance at low ND loadings. The large standard deviation at higher ND loadings also indicates poor dispersion of the fillers. In addition, the wear resistance is improved after the addition of 2.5 and 5 phr NDs, although hardness remains unchanged (Figure 4). The harder surface of NDs helps the rubber to slide on the smooth steel surface. Improvements in wear resistance as well as a reduction in the friction coefficient after the addition of NDs have also been found for acrylonitrile butadiene rubber, fluororubber, and epoxy [25,45,47].
Dynamic properties play an important role in several rubber applications, such as tires or dampers. Figure 5 presents the storage and loss moduli of the compounds as a function of temperature. The storage modulus curve can be divided into three stages: the glassy region, the glass transition, and the rubbery region. In the rubbery plateau region, the storage modulus is the highest when 0.5 phr CB is replaced by NDs, possibly due to the improved filler dispersion. When the ND loading increases, the storage modulus decreases. At the same time, a reduction in the loss modulus is observed at higher ND loadings due to the plasticizing effect [25] and the decreased polymer-filler interaction.
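The loss factor discussed next is the ratio of the two moduli in Figure 5, tan δ = E″/E′. The sketch below, built on synthetic storage and loss modulus curves rather than the measured data, shows how tan δ is computed and read off at chosen reference temperatures:

```python
import numpy as np

def tan_delta_at(temps, e_storage, e_loss, t_ref):
    """Loss factor tan(delta) = E''/E', interpolated at a chosen temperature."""
    return float(np.interp(t_ref, temps, e_loss / e_storage))

# Synthetic DMA sweep from -80 to +80 C (illustrative shapes only)
temps = np.linspace(-80.0, 80.0, 161)
e_storage = 2000.0 * np.exp(-0.05 * (temps + 80.0)) + 5.0            # MPa
e_loss = 400.0 * np.exp(-0.5 * ((temps + 55.0) / 8.0) ** 2) + 1.0    # MPa, peak near Tg
for t_ref in (-20.0, 0.0, 60.0):   # common ice grip, wet grip, rolling resistance indicators
    print(f"tan(delta) at {t_ref:+5.1f} C = {tan_delta_at(temps, e_storage, e_loss, t_ref):.3f}")
```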
The loss factor of the compounds is presented in Figure 6. The gradual replacement of CB by NDs has only a minor influence on the peak intensity. This behavior differs from that of CNTs: the height of the loss factor peak decreases when CB is partially replaced by CNTs, due to the restricted mobility of the polymer chains [14]. In this case, only the compound containing 0.5 phr NDs shows this effect, and even then the effect is minimal. The addition of higher amounts of NDs results in a higher peak; thus, NDs improve the mobility of the polymer chains. However, they do not affect the glass transition temperature. The loss factor curve is used to predict tire properties, such as wet grip and rolling resistance. At −20 °C, the loss factor is improved when 1 or 2.5 phr CB is replaced by NDs, which indicates better grip on ice. At 0 °C, the temperature used to predict wet grip, the compound containing 1 phr NDs shows the highest loss factor and thus the best wet grip in tire applications. The loss factor values at 60 °C are used as an indicator of rolling resistance. At this temperature, the loss factor increases when 0.5 or 1 phr CB is replaced by NDs, indicating the poorer polymer-filler interaction of NDs and chain slipping of the polymers. However, at a 2.5 phr ND concentration, the loss factor is remarkably lower than without NDs. Hence, the ND-CB filler combination has the potential to reduce the rolling resistance of tires at proper concentrations.

Conclusions

The effect of a nanodiamond-carbon black hybrid filler system on the properties of a natural rubber-butadiene rubber compound was studied by the gradual replacement of a small amount of carbon black by nanodiamonds. A significant enhancement in tensile strength was achieved by replacing 1 phr carbon black by nanodiamonds, which was due to the improved filler dispersion. The hybrid filler system facilitated the dispersion of both fillers, and a stronger filler network was achieved. In addition, the hybrid filler system improved the wear resistance of the compounds after the addition of 2.5 and 5 phr NDs. Hence, the studied hybrid filler systems have great potential for improved performance in several industrial sectors, such as mining and paper manufacturing, although further studies are still required. Moreover, the partial replacement of carbon black by nanodiamonds has the potential to increase the wet grip as well as to decrease the rolling resistance of tire tread compounds.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/app112110085/s1, Table S1: The curing parameters of the compounds obtained from the rheometric curves.
Changes in the Amount of Rainwater in the Roztocze National Park (Poland) in 2001–2020 and the Possibility of Using Rainwater in the Context of Ongoing Climate Variability

Data for the years 2001–2020 on changes in the amount of rainwater in the Roztocze National Park (RNP) in the catchment area of the Świerszcz River (Poland) were investigated to evaluate the possibility of using rainwater in the park for various purposes in the context of ongoing climate variability. An analysis of data from the RNP's Integrated Monitoring of the Natural Environment showed that the average annual air temperature increased by 2.1 °C over the 20-year period, while the amount of precipitation decreased, especially in the winter seasons. These changes periodically led to a negative hydrological balance. As a result, the groundwater table gradually lowered, the flow of the Świerszcz River was reduced, and there were periodic shortages of water feeding the Echo Ponds. Water shortages also negatively affected the flora and fauna of the RNP. In order to quantitatively protect the Park's water resources, it was proposed to build a rainwater management system at the Animal Breeding Centre in Florianka to provide water for watering Polish Konik horses, flushing toilets, washing cars and agricultural equipment, and fire-prevention purposes. The excess water would be discharged to a nearby pond, which is an amphibian breeding site. It was estimated that the system was capable of meeting 100% of the demand for lower-quality water in the summer period. Moreover, it was determined that 9109 m³ of rainwater could be obtained annually from the roofs of all public utility buildings located in the RNP.

Introduction

Water is essential for human life and well-being, as well as for the economies of all countries. Freshwater accounts for only 2.5% of the Earth's water resources; the remainder (97.5%) is the salt water found in the world's oceans. On top of that, as much as 68.7% of the Earth's freshwater is frozen in mountain glaciers and ice sheets [1]. This means that the available water resources, found mainly as groundwater and surface waters, are small. Freshwater is a renewable resource, but the world's reserves of clean, fresh water are constantly diminishing. In many parts of the globe, the demand for water exceeds the supply, and, as the human population continues to grow, numerous countries face increasing water shortages. Today, more than 2.1 billion people worldwide do not have easy access to water at home, and another 2.3 billion have poor sanitation [2]. Awareness of the global importance of water conservation did not develop until the 20th century, when more than half of the world's wetlands had already been lost. Freshwater ecosystems, which are rich in biodiversity, are disappearing faster than marine aquatic ecosystems [3]. Therefore, it is necessary to manage water resources rationally and to seek solutions that will make it possible to protect the Earth's freshwater reserves and obtain water from various sources. In recent years, there has been more and more talk worldwide of wastewater reuse and the use of closed-loop systems [4][5][6][7][8]. In the countries that experience the greatest water shortages, attempts are being made to use desalination technology to convert sea and ocean water into freshwater [9][10][11][12]. However, the costs of applying wastewater treatment and desalination technologies are quite high [13][14][15].
Therefore, it seems that a more advantageous solution is to employ, where possible, systems for the collection and use of rainwater [16][17][18], the treatment of which does not require expensive technologies. The collection and use of rainwater is one of the measures that can improve water security and access to freshwater in the face of climate change [17]. Research conducted by numerous authors around the world shows that rainwater harvesting systems (RWHS) can provide from 12 to 100% of the potable water for a household, depending on environmental and social conditions [16,[19][20][21][22][23]. Eroksuz and Rahman [24] investigated the water-saving potential of rainwater tanks installed in multistory residential buildings in three cities of Eastern Australia. They found that a rainwater tank of an appropriate size installed in a multistory building could deliver considerable water savings even in dry years. When designing RWHS, attention should also be paid to the selection of the right tank size and to comparing the assumed and actual performance of the system: the payback period of the investment can be significantly extended when tanks are over-sized relative to optimally sized ones [25]. Attention has also been drawn to the need for a multi-criteria approach to modeling urban RWHS, considering not only water collection under economic constraints but also other benefits to nature [26]. Additionally, the use of different optimization algorithms and ranking methods can help to reach the optimal solution when designing RWHS, and numerical optimization can provide an additional tool to increase the efficiency of systems and reduce project costs [27]. RWHS are also significant in reducing electricity consumption and greenhouse gas emissions compared with centralized public supply systems [28]. RWHS are of particular importance in countries with tropical climates, where they can reduce water scarcity in the dry season and may relieve the urban drainage system in the rainy season [29]. In countries such as Ethiopia, RWHS at large institutions can enable a significant volume of potable water to be transferred to localities critically suffering from water shortages [30]. It should also be kept in mind that the efficiency of rainwater management is usually judged on the basis of long-term averages; such a long-term assessment is poorly suited to the goal of mitigating the effects of short-term high-intensity rainfall [31]. With the development of IoT (Internet of Things) technology, the possibility of real-time monitoring has also opened up; it can help in the construction of systems preventing urban flooding and sewer overflows [32].

The literature review presented above shows that, so far, rainwater management installations have usually been used in urbanized areas in cities. However, the proper management of rainwater should also be a key concern for protected areas, such as national parks, landscape parks, nature reserves, or Natura 2000 sites, as it may contribute to the sustainable protection of waters and biodiversity in those areas [33,34]. The existing and planned protected areas require adequate water resources to ensure the functioning of river valleys and other areas with aquatic and water-dependent ecosystems, as well as the proper protection of ecological corridors. This applies to all watercourses and their valleys, both within and outside protected areas [33].
The forecast climate changes, and especially their effects, may lead to a transformation of the water cycle and alterations in the structure of the water balance (water budget) of catchment areas and, above all, to an increase in the frequency and extent of droughts and a reduction in water resources [35]. Action is therefore needed to mitigate the future effects of climate change. Negative changes in the quantity of water resources mean that the demand of the human population and nature for water is not met in full; they also lead to an increased occurrence of contamination of water for public supply, periodic local deficits of water for public supply, and periodic water shortages for irrigation in agriculture and forestry. An acceleration of the hydrological cycle can lead to more and more frequent extreme water-related events, both droughts and floods. In order to reduce the risk of flooding, it is necessary to extend the existing retention systems by building dams and dry polders and by developing solutions for the so-called "small retention" of rainwater [35]. Local measures also need to be taken to retain and use rainwater flowing from the roofs of single-family houses and public buildings. The goal of this paper is to present data on changes in the amount of rainwater in the Roztocze National Park (RNP) in Poland in the years 2001-2020 and to discuss the possibilities of using this rainwater for various purposes in the context of the climate change taking place in this protected area.

Characteristics of the Study Area

The RNP is located in Roztocze, a region situated on the border between Poland and Ukraine, in Southeastern Poland (Figure 1). It is a region that forms a natural borderline, visible as a narrow belt of limestone hills connecting the Lublin Upland with Podolia. It differs from the neighboring lands in its geological structure, topography, climate, hydrological regime, soils, and vegetation. The region has many peculiar features, including a well-preserved, distinctive landscape that is unique in Europe [36,37]. About 95% of the RNP is covered with various types of forests [38]. The RNP covers an area of 8482.83 ha and is one of Poland's 23 national parks. It was established in 1974 to preserve the natural and cultural heritage of the Roztocze region and to provide access to it for scientists, tourists, and recreation visitors in a way that does not adversely affect the protected object. According to the Nature Conservation Act, it is also one of the Park's tasks to provide nature education experiences [40].

In Romer's division of Poland into climatic regions [41], the climate of Roztocze was classified as a Central Uplands climate of the region of the Lublin-Lviv Uplands and Ridges (D4). The climate of the RNP is a temperate, transitional climate with slightly more continental features compared to other areas of Poland [42]. The area of the RNP is one of the coolest in the region. The average annual air temperature varies from 7.4 to 7.5 °C and is about 1-2 °C lower on the hills. Annual precipitation is usually 600-650 mm. The RNP has moderately high insolation, with average annual sunlight hours ranging from 1550 to 1600. The park displays considerable topoclimatic diversity related to its varied topography (which leads to variability of exposure), large elevation differences, and a plant species composition that determines the height and density of the vegetation cover [42]. The area of the RNP has a very sparse network of surface waters.
This is mainly due to the high permeability and water capacity of the bedrock, which retains water from precipitation. Surface waters cover only 52.6 ha, which is 0.62% of the total area of the RNP. Groundwater is found in porous-fissure rocks of the Upper Cretaceous, occurring as marls, opokas, and gaizes, and in the sandy sediments and Quaternary gravels filling buried river valleys. In the river valley zones, the waters of the Cretaceous layers merge with the waters contained in alluvia, creating a common circulation-and-drainage system, the so-called Roztocze Water Level, which is characterized by large resources of groundwater [43]. In the central part of the RNP is located the catchment of the Świerszcz River, along with the complex of Echo Ponds (Figure 2), which covers an area of 4651 ha, 40% of which lies within the park. The RNP is a Natura 2000 habitat area coded Central Roztocze PLH060017 and part of a Natura 2000 bird sanctuary coded Roztocze PLB060012. The remainder of the bird sanctuary is located within the RNP's buffer zone [44,45]. The map shows the land cover of the territory of the RNP in which the Animal Breeding Centre is located. A total of 93.81% of the park is covered with various types of forests. The predicted climate changes will also affect the structure of forests in the Roztocze area. As is known, the main factor influencing the diversification of species growth is the availability of water, which is conditioned by the level of rainfall and the losses caused by evapotranspiration [46]. The land-cover map shows the natural context of the changes taking place.

Scope and Statistical Analysis

The literature data regarding climate change in the RNP were gleaned from monographs on the RNP and the Roztocze region [42,47]. Changes in the precipitation levels in the area of the RNP in the years 2001-2020 were determined on the basis of data coming from the Reports of the "Roztocze" Base Station of the Integrated Monitoring of the Natural Environment (BS IMNE), which belongs to the RNP, and from the Chief Inspectorate for Environmental Protection (CIEP), which operates under the Integrated Monitoring of the Natural Environment (IMNE). The missing data were obtained from the Roztocze Scientific Station in Guciów, affiliated with Maria Curie Skłodowska University (MCSU) in Lublin. The IMNE measurement data meet the relevant standards and norms and are based on proven and comparable field research and laboratory analysis methods. The methodological assumptions of the IMNE, a compilation of data on the measurement system, the methods of laboratory analysis, and the principles of collecting and processing measurement results are presented in a study from the series Biblioteka Monitoringu Środowiska (Environmental Monitoring Library) [48]. The study also presents a conception of a system for the harvesting and utilization of rainwater designed for the RNP's Animal Breeding Centre (Ośrodek Hodowli Zwierząt) in Florianka and discusses the possibility of using rainwater in all public utility facilities in the RNP. The aspects covered by the research and the methodology adopted in the analysis are presented in Figure 3.
Two statistical analyses were performed to assess changes in the sum of precipitation and in air temperature. In the first, mean precipitation and air temperature values for the RNP were compared between two decades (2001-2010 and 2011-2020) using one-way analysis of variance (ANOVA). Comparisons of mean values for precipitation and air temperature were made for various periods, namely all-year-round, the cold season, the warm season, and each month separately, in order to get a more accurate overview of the observed changes. The second analysis was aimed at determining the trends of changes in air temperature and precipitation over the subsequent years of the studied period. The precipitation and air temperature trends over the study period were determined by using linear regression, and Pearson's correlation coefficients, r, between the study year and precipitation and between the study year and air temperature were calculated. As in the case of ANOVA, these trends were analyzed for whole years, the warm and cold seasons, and each of the months. Statistical calculations were performed by using Tibco Statistica v. 14 software [49]. Statistical differences were determined at the significance level of α = 0.05.
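Both analyses can be reproduced with standard statistical tools. The sketch below uses scipy on an invented 20-year annual temperature series (the study itself used Tibco Statistica): a one-way ANOVA comparing the two decades, followed by a linear regression giving the annual slope, Pearson's r, and the total change over the period.

```python
import numpy as np
from scipy import stats

years = np.arange(2001, 2021)
# Invented annual mean air temperatures (degrees C), for illustration only
temp = 7.4 + 0.11 * (years - 2001) + np.random.default_rng(1).normal(0.0, 0.4, 20)

# Analysis 1: one-way ANOVA comparing the two decades
f_stat, p_anova = stats.f_oneway(temp[:10], temp[10:])
print(f"decade means {temp[:10].mean():.2f} vs {temp[10:].mean():.2f} C, p = {p_anova:.3f}")

# Analysis 2: linear trend over the whole period, with Pearson's r
res = stats.linregress(years, temp)
total = res.slope * (years[-1] - years[0])   # change accumulated over 2001-2020
print(f"slope = {res.slope:.3f} C/yr, r = {res.rvalue:.2f}, p = {res.pvalue:.4f}, "
      f"total = {total:.2f} C")
```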
Changes in the Precipitation Level and the Water Balance in the RNP in 2011-2020

Studies conducted by the RNP as part of the IMNE program of CIEP's State Environmental Monitoring show the unfavorable changes in the hydrological regime of the RNP that occurred over the past ten years (2011-2020) [50]. In May 2020, the level of Cretaceous groundwater recorded in the Roztocze National Park was the lowest in the entire history of measurements, which had been conducted since 1986: the water table was 17.5 m below ground level (b.g.l.). It was 3.50 m lower than the highest level, recorded in mid-June 2013 (13.92 m b.g.l.). This situation was caused by a decrease in the precipitation levels, a distribution of precipitation that was unfavorable from the point of view of nature (especially the lack of a snow cover in winter), and an increase of approx. 2.0 °C in the average annual temperature. All of this gave rise to a periodically negative hydrological balance in the territory of the RNP, especially in the catchment of the Świerszcz River [50,51], resulting in a number of biotic changes. The simplified water balance of the catchment area of the Świerszcz River presented in Figure 4, which compares precipitation and outflow in the years 2012-2020, does not show a statistically significant change. Still, the decrease in the precipitation levels and the unfavorable distribution of precipitation across the year (no winter precipitation), as well as the increase in temperature in the years 2017-2019, led to a lack of water in Echo Ponds and resulted in the upper section of the Świerszcz River drying up.

On the basis of the collected data, it was found that the annual precipitation levels in the RNP in 2011-2020 were, on average, 41.75 mm lower than in 2001-2010, with the largest drop in the precipitation level occurring in the winter season (41.02 mm) and only a slight decrease observed in the warm period (by 0.73 mm).
In turn, the average air temperature increased by 0.11 • C annually, giving a total increase of 2.1 • C (r = 0.77, p < 0.001) over the twenty-year period. Table 1 shows the trend in the amount of average precipitation in the years 2001-2020. December was the only cold month in which an increase in precipitation (by 0.48 mm annually) was recorded; in the remaining cold months, precipitation decreased. In the case of the warm months, the greatest changes were recorded in May (average annual increase by 0.19 mm) and July (average annual decrease by 0.17 mm). The correlation coefficient was not statistically significant for any of the months. In the case of air temperature, significant increases were recorded in the following months: June (by 0.17 °C annually and by 3.48 °C over the 20 years, r = 0.68, p = 0.001), August (0.13 °C and 2.65 °C, respectively; r = 0.71, p < 0.001), September (0.15 °C and 2.92 °C, respectively; r = 0.64, p = 0.002), and December (0.26 °C and 5.29 °C, respectively; r = 0.54, p = 0.014). In the remaining months, the regression slope was not significantly different from 0 ( Table 2). The p-values show that these correlations are not statistically significant, and the upward trend in the warm half-year was influenced by a large amount of rainfall received in the last year of the study (2020). In the case of air temperature, statistically significant upward trends were recorded both in the cold and the warm season. The temperatures in the cold half-year increased by 0.13 • C annually (by 2.61 • C over the 20-year period, r = 0.62, p = 0.004), and in the warm half-year by 0.08 • C (1.58 • C over the 20 years, r = 0.79, p < 0.001). The research we conducted shows that changes in air temperature in the RNP are not uniform. Decadal periods can be discerned in precipitation in the warm season. Table 1 shows the trend in the amount of average precipitation in the years 2001-2020. December was the only cold month in which an increase in precipitation (by 0.48 mm annually) was recorded; in the remaining cold months, precipitation decreased. In the case of the warm months, the greatest changes were recorded in May (average annual increase by 0.19 mm) and July (average annual decrease by 0.17 mm). The correlation coefficient was not statistically significant for any of the months. Cold season (January-April and November-December) and warm season (May-October). In the case of air temperature, significant increases were recorded in the following months: June (by 0.17 • C annually and by 3.48 • C over the 20 years, r = 0.68, p = 0.001), August (0.13 • C and 2.65 • C, respectively; r = 0.71, p < 0.001), September (0.15 • C and 2.92 • C, respectively; r = 0.64, p = 0.002), and December (0.26 • C and 5.29 • C, respectively; r = 0.54, p = 0.014). In the remaining months, the regression slope was not significantly different from 0 ( Table 2). The air temperature in the RNP in the 20-year period from 2001 to 2020 increased by 2.1 • C, which was a greater increase than that recorded in Poland [52][53][54] and in the world [55,56] in the previous periods (decadal increases by 0.2-0.5 • C). A particularly large increase in temperature in the RNP in the period between 2001 and 2020 was recorded in December-by as much as 5.29 • C. Such a situation may lead to the occurrence of snowless winters in the future and more and more frequent dry periods in spring. 
Despite the lack of clear trends in the annual precipitation totals in the RNP, large fluctuations in the amount of precipitation in individual years caused an increase in the frequency of dry periods, which was also noted by Ziernicka-Wojtaszek and Kopciska [57]. This unfavorable situation is exacerbated by the downward trend observed in the share of winter precipitation in the annual total, as this contributes to changes in the natural environment of the RNP.

Natural Determinants and Consequences of the Climate Changes in the Roztocze National Park

Research conducted by the RNP as part of the IMNE program of CIEP's State Environmental Monitoring shows that the natural environment of the RNP experienced unfavorable changes over the past decade (2011-2020), especially in the catchment area of the Świerszcz River [50]. The ongoing climate change has led to adverse changes in the natural environment and in the human socio-economic environment. Water shortages, thermal and precipitation conditions unfavorable for trees, and the lowering of the groundwater level have affected the main forest-forming species of the RNP. As observed by Szwagrzyk and Bodziarczyk [58], climate change has an adverse effect on the most important tree species in the Park, the Baltic pine (Pinus sylvestris). In the last few years, events such as drought and massive outbreaks of the sharp-dentated bark beetle (Ips acuminatus) have resulted in a decline in Baltic pine stands, especially where the pine grows in fertile sites. If summers continue to be hot and dry, the dieback of the pine may accelerate considerably [58]. Through the Roztocze National Park runs the eastern border of the continuous range of the beech (Fagus sylvatica), fir (Abies alba), larch (Larix decidua), and spruce (Picea abies). The presence of beech and fir forests in this area as borderland communities, and the tracking of their dynamics, may be helpful in interpreting climate change. This is all the more important given that Holy Cross fir forests (91P0) (Abietetum polonicum) occur only in Poland [59]. The fir trees growing in the forests of the RNP show two opposite trends. On the one hand, the share of fir trees in fertile sites, especially in beech forests, decreases due to the gradual loss of older trees; in recent years, this phenomenon has been accelerated by droughts. On the other hand, in poorer soils, especially under the canopy of pine, fir regenerates much better. Its growth in less fertile sites, however, may be hampered over time by water scarcity. In recent years, a significant decrease in the population of spruce (Picea abies) has been observed in Poland, compared to all other tree species; it has been caused by adverse climate changes [58]. The temperature and rainfall conditions of recent years have been unfavorable for trees, inducing the development of pathogenic fungi and leading to a dynamic increase in the populations of cambio-xylophage and foliophage insects, which damage trees, threatening their health and sometimes also causing their dieback and falling out of stands [60]. In the last decade, a decrease in the level of groundwater has been observed both in the deep Cretaceous aquifers and in shallower Quaternary waters, e.g., those of the peat bog complex "Międzyrzeki".
It should be emphasized that maintaining a high and stable groundwater table in a peat bog ensures its longevity and proper functioning, as it prevents the destructive processes of peat rotting and encourages the development of vegetation appropriate to this extremely valuable ecosystem, which is endangered across Europe [50]. On a positive note, Szwagrzyk and Bodziarczyk [58] observe that the coniferous swamps and swamp forests that grow in the peat bogs of the "Międzyrzeki" area are neither endangered nor subject to adverse changes. The current distribution and range of fragments of these forests confirm that this habitat has been stable over the last 30 years. Nevertheless, it should be borne in mind that changes in the hydrological regime and a potential increase in eutrophication may result in the appearance of ecologically alien species.

Socio-Economic Determinants and Consequences of Climate Changes in the Roztocze National Park

It has been observed that climate changes in the RNP are manifested by a lack of water in the upper section of the Świerszcz River and in Echo Ponds. These changes were particularly acute in the years 2011-2020 and were found to have important social ramifications. Echo Ponds are open to bathers, and the operation of the bathing site constitutes a sort of social compromise included in the Plan for the Protection of the RNP [61]. The lack of water in Echo Ponds in the tourist season of 2020 caused dissatisfaction among the local community and visitors to the park. Failure to understand the ongoing changes in climate resulted in numerous media "attacks" on the RNP, in which the ponds are located. The scarcity of water caused social tensions and contributed to a negative perception of the RNP. Meanwhile, the employees of the Park, who continuously monitor the environment, noted the changes taking place and made good use of the low water level, performing repairs and maintenance on hydro-technical devices to ensure effective water retention. Small flows in the Świerszcz River led to increased activity of the European beaver (Castor fiber), which built dams in the lower section of the river, thus increasing water retention. The levels of surface water and groundwater in the RNP also affect the rearing of Polish Konik horses (Equus ferus caballus), which are bred in a stud farm located in the catchment area of the Świerszcz River. This stream is a natural watering place for the herd, which lives in semi-natural conditions in the sanctuary. The stud farm where the Polish Konik horses are kept is supplied with water from a groundwater well located at the Animal Breeding Centre in Florianka. In the years 2011-2020, the groundwater level in the well in Florianka became progressively lower, and the efficiency of the well decreased. In the same period, a lack of water or a large drop in the water level was also found in the wells located in the northern part of the Roztocze National Park, in the settlements of Krzywe, Stara Huta, and Sochy, and in the catchment area of the Świerszcz River. Water shortages in the area of the Animal Breeding Centre in Florianka constitute a serious fire hazard, as the groundwater intake, due to its low efficiency, does not meet the legal requirements set out in the applicable regulations [62].
Conception of a Rainwater Management System for the RNP's Animal Breeding Centre in Florianka

In order to quantitatively protect the Park's decreasing water resources and limit the consumption of high-quality water, a decision was made to introduce rainwater retention and management solutions in the Park's public utility facilities. Figure 7 shows a diagram of the rainwater management system which is planned to be built at the RNP's Animal Breeding Centre in Florianka. The schematic of the installation was drawn on the basis of the Functional and Utility Program prepared by Grabowski et al. [63]. The Animal Breeding Centre runs a stud farm of the Polish Konik horse, keeps Uhruska sheep and white-backed cattle, and maintains meadows and pastures to provide the animals with fodder. Aside from that, the Animal Breeding Centre is open to visitors: annually, 2.5 thousand tourists visit Florianka, and 35 thousand pass through it in transit. The forest settlement of Florianka is supplied with water from a 50-m-deep well located at an altitude of 264.2 m a.s.l. The average annual water usage in Florianka, calculated from water-meter readings taken in the years 2018-2021, is 1137 m³. The decreasing groundwater level and the reduced efficiency of the well call for the use of rainwater. Rainwater harvested by the planned installation will be used for various purposes at the RNP's Animal Breeding Centre. Treated by filtration and irradiation with a UV lamp, it will be used to water the Polish Konik horses and to flush toilets. Untreated rainwater, on the other hand, will be used for fire prevention purposes and for washing cars and agricultural equipment; excess water will be discharged into the nearby 19th-century ponds, which serve as a natural breeding site for amphibians. The design also assumes that the wastewater generated during the flushing of toilets and the washing of vehicles will be discharged, after appropriate pretreatment in a skimming tank, to the existing hybrid constructed wetland located in the neighboring settlement in Florianka, approximately 500 m away from the Animal Breeding Centre. The structure, efficiency, and reliability of this treatment plant have been described in a study by Micek et al. [64]. The subsections below contain calculations regarding the water demand in the RNP's Animal Breeding Centre and the amount of rainwater that can be harvested from the roofs of two farm buildings located at the Centre.
Demand for Water for Different Uses in the RNP's Animal Breeding Centre

To calculate the demand for water in the Animal Breeding Centre, we used the water consumption standards set out in the Regulation of the Polish Minister of Infrastructure [65] and quarterly water usage readings taken from the Centre's deep water intake in 2018-2021. Table 3 shows the monthly demand for rainwater in the Animal Breeding Centre. The data provided in Table 3 show that the total annual demand for rainwater in the Animal Breeding Centre was 894.5 m³. The greatest demand for water was for watering animals: almost 800 m³, which represented over 89% of the total demand. The data also show that the demand for water was the highest in the spring and summer, from May to August, i.e., in the period when precipitation levels were also the highest.

Determination of the Amount of Rainwater Flowing from the Roofs of the Animal Breeding Centre Buildings in the RNP

Field measurements and calculations show that the total area of the roofs of the Animal Breeding Centre's two farm buildings (A, the stable, and B, the barn) (Figure 7), from which rainwater will be harvested, is 1191 m². The roof area of building A is 467 m², and that of building B is 724 m². The amount of rainwater flowing from the surface of roofs A and B was calculated using a modified formula, Formula (1), previously used by Aladenol and Adeboye [66], Adugna et al. [30], and Villar-Navascués et al. [67]:

V_RW = (TRA · AAR · SRC)/1000 (1)

where: V_RW is the volume of rainwater flowing from the roof surface during a year (m³); TRA is the total area of roofs A and B (m²); AAR is the average annual rainfall in the RNP in the years 2001-2020 (686 mm); and SRC is the surface runoff coefficient, a dimensionless value (0.9) determined for smooth sheet-metal roofs on the basis of a study by Farrene et al. [68]. The calculations made using Formula (1) show that, on average, 735 m³ of rainwater, which can be used for various purposes, flows annually from the roofs of the two buildings (A and B) located at the RNP's Animal Breeding Centre. There is a large variation in the amount of rainfall throughout the year, and Figure 8 shows the volume of water that is available for use in the different months. The monthly volume of roof rainwater was calculated from Formula (2), obtained from Formula (1) by replacing the average annual rainfall with the average monthly value:

V_MRW = (TRA · AMR · SRC)/1000 (2)

where V_MRW is the volume of rainwater flowing from the roof surface in a given month (m³), and AMR is the average rainfall in the RNP for that month in the 20-year period from 2001 to 2020 (mm).
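As a cross-check of Formulas (1) and (2), the short sketch below recomputes the annual roof runoff from the values given above (TRA = 1191 m², AAR = 686 mm, SRC = 0.9) and shows how monthly volumes and demand coverage would be derived. The monthly rainfall and demand figures in the example are placeholders, not the values from Table 3 or Figure 8.

```python
# Sketch of the roof-runoff calculation, Formulas (1) and (2).
# Monthly rainfall and demand below are illustrative placeholders.
TRA = 467 + 724        # total roof area of buildings A and B (m^2)
AAR = 686              # average annual rainfall 2001-2020 (mm)
SRC = 0.9              # runoff coefficient for smooth sheet-metal roofs

V_RW = TRA * AAR * SRC / 1000          # annual volume (m^3), Formula (1)
print(f"annual roof runoff: {V_RW:.0f} m^3")   # ~735 m^3, as reported above

monthly_rain = {"May": 75, "Jun": 80, "Jul": 90, "Aug": 70}      # mm, placeholders
monthly_demand = {"May": 95, "Jun": 100, "Jul": 105, "Aug": 95}  # m^3, placeholders

for month, amr in monthly_rain.items():
    v_mrw = TRA * amr * SRC / 1000     # monthly volume (m^3), Formula (2)
    coverage = 100 * min(v_mrw / monthly_demand[month], 1.0)     # capped at 100%
    print(f"{month}: {v_mrw:.0f} m^3 harvested, covers {coverage:.0f}% of demand")
```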
Figure 8 shows that the amount of rainwater was the largest in the spring and summer (from May to August), i.e., when the demand for water was the highest. During this period, rainwater could cover 100% of the demand for water used for the various purposes mentioned above. In the remaining months, rainwater could cover 54-90% of the Animal Breeding Centre's demand for water.

Discussion

Regional climate models (RCA3, HadRM3, HIRHAM5, CLM, and RegCM3) for Poland, including Roztocze and the RNP, forecast an increase in air temperature [69]. The forecasts of changes in thermal conditions for the years 2021-2050 in relation to the reference period show that the greatest warming will occur in winter and the smallest in summer. The average temperature is predicted to increase by about 1 to 3 °C; this will be associated with a shorter duration of snow cover of ever smaller thickness. The HIRHAM5 model assumes that the average temperature in Roztocze will not change, whereas the HadRM3 model predicts warming, with an increase in temperature in Roztocze of 3.75 °C. According to our research for the years 2001-2020, the air temperature increased by 2.51 °C in the cold season and by 1.68 °C in the warm season. The average temperature increase is 2.10 °C, which corresponds to the projected changes. The temperature in December increased by 5.29 °C, while in the warm season the months of May and July were colder by 0.5 °C. The obtained results are thus consistent with the forecasts from the abovementioned models. In contrast to the air temperature, the total annual precipitation in the territory of the RNP does not show clear trends. The research carried out in 2001-2020, however, confirms some forecasts. Air-temperature changes in the RNP are not uniform. We can distinguish 10-year periods when, in the winter half of the year, a drop in air temperature is accompanied by an increase in precipitation, and, in the warm half of the year, a decrease in air temperature is associated with an increase in rainfall. Annual amounts of precipitation in the territory of the RNP in 2011-2020 were, on average, 41.7 mm lower than in 2001-2010, with the largest decrease in the amount of precipitation in the cold season (41.02 mm) and only an insignificant decrease in the warm season (0.73 mm). Previous studies have shown that rainwater is usually of fairly good quality [20,70,71]. Only some parameters of rainwater exceed the standards specified for water intended for human consumption [70]. A study of rainwater flowing from the roofs of two outbuildings located near the Roztocze National Park Directorate building also showed that the water was of fairly good quality [72], although it contained increased levels of ammonia, coliform and fecal coliform bacteria, and meso- and psychrophilic bacteria, which were probably caused by contamination of the roof surfaces with bird droppings.
The water also had a slightly alkaline pH, which may indicate that the air and the surfaces of the roofs were polluted with dust particles from local furnaces and with animal waste. Jóźwiakowski et al. [72] showed that, in terms of microbiological parameters and some chemical parameters, the rainwater tested was not suitable for drinking or hygienic purposes, but could be safely used for washing vehicles, watering green areas, or flushing toilets. On the other hand, rainwater can be easily treated to drinking-water standards; the sole shortcoming is its low level of minerals [73,74]. The installation for the harvesting and utilization of rainwater mentioned above, which collects water from the buildings of the Roztocze National Park Directorate, was constructed in 2014 on the basis of a conception developed by Jóźwiakowski et al. [75]. The authors of the conception determined that 323 m³ of rainwater could be harvested per year from two garage roofs with surface areas of 185 and 302 m² and used for washing vehicles and irrigating green areas. They also calculated that this amount of water was sufficient to meet the annual water demand for these purposes. The rainwater-harvesting installation for the RNP Directorate building has an ecological effect, as it allows users to limit the consumption of high-quality groundwater from the Roztocze aquifer located in this legally protected area. At the same time, the installation improves the image of the RNP as a pro-ecological institution which implements modern technological solutions to protect and preserve the most valuable assets of the natural environment. In the future, other systems of this type are planned to be installed in the RNP. Table 4 shows a list of all public utility buildings located in the Roztocze National Park that could be equipped with rainwater-harvesting and -utilization systems, as well as the average annual amount of water that could be collected from these facilities. The surface area of the roofs of the RNP's public utility facilities was determined on the basis of a register of the Roztocze National Park's structures and buildings and on GPS measurements. The calculations show that the construction of rainwater-harvesting systems for the public utility facilities located in the RNP would allow for the collection and use of 9109 m³ of rainwater per year.

Conclusions

The climate-change forecast based on regional climate models for 2021-2050 regarding changes in thermal conditions and precipitation in Poland and in Europe [69], together with data from the literature on the Roztocze National Park [47], indicate an upward trend in air temperature and a very high year-to-year variability in precipitation levels. This tendency was also confirmed in the present study, which additionally shows a downward trend in the amount of precipitation, especially in the winter months. These changes have a negative impact on the natural environment, especially the Park's forest-forming species, and contribute to the lowering of the groundwater level and to the lack or scarcity of surface water in water bodies such as the upper section of the Świerszcz River and Echo Ponds. Particularly dramatic changes were recorded in 2017-2020, and they led to profound environmental and social consequences.
This study focused mainly on the impact of rainfall and temperature on the groundwater balance; of course, not all of the factors affecting it were considered. Infiltration, for example, also contributes to the lowering of the groundwater level [73]. In the present study, we proposed that, in order to quantitatively protect the diminishing water resources in the area of the RNP and to limit the consumption of high-quality water, measures should be taken for the retention and management of rainwater in public utility facilities. We found that local solutions based on the natural resource of rainwater could satisfy a considerable part of the water demand of the Roztocze National Park's public utility facilities. We determined that approximately 9109 m³ of water could be retained and used in the Park during a year. As an example of such a solution, we presented a sustainable rainwater management system to be installed at the Animal Breeding Centre in Florianka that would ensure the continuity of operation of the Polish Konik horse stud farm (the symbol of the RNP) and the proper maintenance of the Animal Breeding Centre, which is visited by about 40 thousand people yearly. It was assumed that, in this system, rainwater, after treatment, would be used for watering the horses and for flushing toilets. Untreated rainwater, on the other hand, would be used for fire prevention purposes and for washing cars and agricultural equipment. The excess water would be discharged to a nearby pond, which is an amphibian breeding site. It was found that, in the spring and summer period (from May to August), rainwater could cover 100% of the Animal Breeding Centre's demand for water used for various purposes. In the future, the use of rainwater-harvesting and -utilization systems will contribute to broadly understood water protection and will help counteract the effects of drought in the face of the changing climate in Poland and around the world.
FORMULATION OF KETOCONAZOLE LOADED NANO DISPERSIVE GEL USING SWOLLEN MICELLES TECHNIQUE AND ITS IN VITRO CHARACTERIZATION

Objective: The objective of the present work was to formulate and characterize a nano dispersive gel (NDG) for topical delivery of the water-insoluble antifungal agent ketoconazole, in order to enhance its solubility, penetration through the skin, and antifungal activity. Methods: A nanodispersion of the drug was first prepared by the swollen micelles technique (SMT) using tween 80 and chloroform, and was then incorporated into a gel using carbopol 934. Ten formulations of ketoconazole-loaded NDG were prepared and characterized for different physicochemical parameters (homogeneity, pH, spreadability, extrudability, practical yield, drug content, in vitro drug release, and ex vivo permeation) and for the biological parameter antifungal activity. Results: The formulated topical preparations exhibited pH in the range of 6.5 to 7.4 and showed excellent homogeneity, spreadability, and extrudability. Out of the 10 formulations, formulation F4 showed the maximum drug content of 95.56±1.13% and a practical yield of 97.23±0.51%. The in vitro drug release studies were performed using pH 7.4 phosphate buffer; formulation F4 showed the best in vitro drug release, 96.52±0.52%, at the end of the 24 h study. An ex vivo permeation study of formulation F4, carried out using a franz diffusion cell, also showed good permeation and flux of the drug across chicken skin. An antifungal activity test of formulation F4, carried out by the cup plate method using an Aspergillus niger strain against marketed ketoconazole, revealed higher antifungal activity than the marketed product. Conclusion: The study confirmed formulation F4 to be an optimized and promising formulation for the effective treatment of topical fungal infections, with enhanced solubility and penetration through the skin.

INTRODUCTION

Infections caused by fungi range from superficial conditions of the skin (e.g., athlete's foot and ringworm) and nails (e.g., onychomycoses) to deadly diseases [1]. Ringworm, also called dermatophytosis, usually manifests as a series of rapidly expanding, irritating lesions which can occur in any area of the skin; it chiefly attacks the stratum corneum of keratinized tissues and hair fibres, resulting in autolysis of the fibre structure, breaking off of the hair, and alopecia [2]. Ketoconazole is a widely used synthetic imidazole-ring-containing antifungal agent, active against most species of fungi and yeasts. A topical preparation of ketoconazole is used for the treatment of infections caused by a wide variety of fungi such as candida and tinea [3]. In the biopharmaceutics classification system (BCS), ketoconazole is classified as a class-II drug based on its absorption and dissolution properties, since it has high permeability but its solubility in aqueous media is insufficient for the whole dose to be dissolved in the gastrointestinal (GIT) fluids under normal conditions [4]. It is very lipophilic; it is practically insoluble in mineral oil (<0.01%) and in solvents with alkaline pH, but it is freely soluble in most organic solvents, including chloroform, n-octanol, ethyl alcohol, methanol, and dimethyl sulfoxide (DMSO) [5]. It is least stable at pH 1 and most stable at pH 7 [6]. The poor aqueous solubility of ketoconazole is a limitation for topical delivery [3]. Topical drug delivery systems are extensively used for the treatment of local skin disorders.
Topical application of drugs has the potential advantage of delivering the drug directly to the site of action for an extended period. It bypasses first-pass metabolism, avoids problems related to absorption and changes in pH, and also avoids the risks and inconvenience of intravenous (IV) therapy [7,8]. However, the stratum corneum, the top layer of the epidermis, is the major barrier to drug penetration through the skin. For transdermal delivery, the drug should ideally have a low molecular weight (≤500 Da), high lipophilicity, and be effective at a low dosage. Considerable attention has been focused in the last few decades on developing novel drug delivery systems (NDDS) to lower the stratum corneum barrier and increase permeability [9]. These approaches mainly include electrophoresis, iontophoresis, sonophoresis, chemical permeation enhancers, microneedles, tape stripping, nanoparticles, nanodispersions [13], and vesicular systems including liposomes, niosomes, ethosomes, and transferosomes [10,11]. Liposomes and niosomes offer high permeability, controlled release, and target specificity, but the liquid nature of topical liposomes and aqueous suspensions of niosomes may suffer from physical instability such as fusion, aggregation, and leaking of the entrapped drug [3,10,12]. Nanodispersions play an efficient role in the topical delivery of poorly aqueous-soluble or insoluble drugs: the decrease in particle size to nanoscale dimensions leads to a massive expansion in surface area, which enables faster penetration of the medication through the skin and improves drug bioavailability. This is largely because of the rise in solubility of the drug as the particle radius is reduced. The very small particle size also causes a large reduction in gravitational settling, so that Brownian motion may be sufficient to overcome gravity; this means that no deposition occurs on storage [13]. Topical preparations such as ointments, creams, and lotions, although extensively used as topical agents, have numerous disadvantages. Generally, they are very sticky, causing discomfort to the patient; likewise, they also increase the contact time [8,14]. Unlike other topical preparations, a gel is non-sticky and does not cause discomfort to the patient. Drug release is faster from a gel formulation than from ointments and creams, and a gel also has a higher spreading coefficient than an ointment or a cream [15,16]. Hence, in the present study an attempt has been made to formulate and characterize an NDG of ketoconazole. An NDG is a semisolid dosage form formed by the combination of a nanodispersion and a gel: a gelling agent is added to the nanodispersion to give the NDG. The NDG overcomes the limitation of the poor aqueous solubility of ketoconazole while retaining the advantages of a gel.

MATERIALS AND METHODS

Ketoconazole was received as a gift sample from Vonstachem Ltd, Delhi. Carbopol 934, triethanolamine (TEA), tween 80, chloroform, disodium hydrogen phosphate, and potassium dihydrogen phosphate were provided by Galgotias University, Greater Noida. Strains of Aspergillus niger were purchased from the Institute of Microbial Technology (IMTECH, Chandigarh, India). All other chemicals used were of analytical grade.

Preparation of nanodispersion of ketoconazole

A nanodispersion of ketoconazole was prepared by the swollen micelles technique (SMT) [13]. The drug was dissolved in a sufficient volume of chloroform, and tween 80 was then added continuously with stirring.
After the complete addition of tween 80, the final mixture was stirred for a further 2 d to allow controlled and complete evaporation of the chloroform. This leads to the formation of a system with ultra-low interfacial tension and a drug particle size in the nanoscale range; the formation of a viscous mixture ensured the complete removal of chloroform. Finally, an ample volume of phosphate buffer (pH 6.8) was added to give the nanodispersion.

Preparation of ketoconazole loaded nano dispersive gel

The required quantity of carbopol 934 was added to the ketoconazole nanodispersion with constant stirring at 500 rpm for 2 h. The speed was reduced later to avoid air entrapment. Finally, TEA was added to neutralize the mixture and provide an appropriate consistency [17]. Different formulations were developed by varying the tween 80 and carbopol 934 concentrations (table 1).

Characterization

The prepared formulations were characterized for different parameters, including physical appearance and homogeneity, pH, spreadability, extrudability, percent practical yield, drug content, in vitro dissolution, ex vivo permeation, and antifungal activity.

Determination of pH

A digital Elico pH meter was used to check the pH of the NDG at room temperature. First, the pH meter was calibrated using standard buffers of pH 4 and 9.2. Precisely 2.5 g of gel was weighed and dispersed in 25 ml of distilled water; the pH electrode was then immersed in the dispersion and the pH was measured. The pH measurement of each formulation was done in triplicate and average values were calculated [19].

Spreadability study

For the determination of the spreadability [7] of the prepared NDG formulations, a wooden block provided with a pulley at one end was used. Spreadability was determined on the basis of the 'slip' and 'drag' properties of the NDG. A ground glass slide was fixed on this block. An excess of NDG (about 2 g) was applied on this ground slide. The NDG was then sandwiched between this slide and another glass slide having the dimensions of the fixed ground slide. A 1 kg weight was placed on top of the two slides for 5 min to provide a uniform film of the NDG and to expel air. The top plate was then subjected to a pull of 80 g, and the time required by the top slide to cover a distance of 7.5 cm was noted. The spreadability of each formulation was determined in triplicate and average values were calculated using the formula given below:

S = M · L / T

where S is the spreadability (g·cm/s), M is the weight tied to the upper slide (80 g), L is the distance moved by the upper slide (7.5 cm), and T is the time taken (s).

Extrudability

The extrudability study [4] of the NDG formulations was based on the percentage quantity of gel extruded from a tube on the application of a certain load; the greater the quantity extruded, the better the extrudability. The NDG was filled into a clean, collapsible aluminium 10 g tube with a 5 mm nasal-tip opening. This tube was then placed between two glass slides, which were clamped. A constant load of 1 kg was placed on the slides, and the gel extruded was collected and weighed. The extrudability of the gel was thus determined by weighing the amount of gel extruded through the tip when the load was applied. The percentage of gel extruded was calculated and grades were allotted (+++ excellent; ++ good; + fair).

Practical yield (PY)

The PY [4] helps in the selection of an appropriate method of preparation. It was calculated to determine the percent yield and the efficiency of the preparation method. The NDG was collected and weighed, and the PY of each formulation was determined in triplicate from the following formula, with average values noted down:

% practical yield = (practical mass of NDG obtained / theoretical mass expected) × 100
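A quick sketch of how the spreadability and practical-yield values reported later can be computed from these formulas is given below; the measured times and masses are placeholders, not data from this study.

```python
# Spreadability S = M * L / T and percent practical yield (placeholder inputs).
M, L = 80.0, 7.5                      # pull weight (g) and slide travel (cm)
times_s = [19.8, 20.4, 20.1]          # triplicate slide times (s), illustrative

spreadability = [M * L / t for t in times_s]
mean_S = sum(spreadability) / len(spreadability)
print(f"spreadability = {mean_S:.2f} g.cm/s")   # ~30 g.cm/s, within the reported range

practical_mass = 48.6                 # mass of NDG recovered (g), illustrative
theoretical_mass = 50.0               # mass expected from the amounts used (g)
py = 100 * practical_mass / theoretical_mass
print(f"practical yield = {py:.2f} %")
```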
Drug content

To determine the drug content [20], NDG equivalent to 5 mg of the model drug was taken and dissolved in 5 ml of methanol. The solutions were filtered and further diluted so that the absorbance fell within the range of the standard curve. The absorbances of the solutions were determined at 215.80 nm with a UV-visible spectrophotometer. The actual drug content of each formulation was estimated in triplicate using the following formula, and average values were noted down:

% drug content = (actual drug content in the weighed quantity of gel / theoretical amount of drug in the gel) × 100

In vitro dissolution studies

In vitro release studies of the various NDG samples were carried out using a biological egg membrane [16]. A fresh chicken egg was carefully broken and its outer membrane was removed with the necessary precautions, without tearing or damaging it, and soaked overnight in the dissolution medium [7]. The egg membrane, with a surface area of 3.14 cm² available for diffusion, was mounted on a franz diffusion cell. A known amount of NDG was weighed and applied on the membrane mounted on the franz diffusion cell from the donor side [7,12].

Ex vivo permeation study

An ex vivo permeation study [21,22] of the formulation showing the best release (formulation F4) and of the marketed formulation (phytoral) was performed using a franz diffusion cell with an effective diffusion area of 3.14 cm² and a cell volume of 15 ml. Freshly slaughtered chicken skin [23] was obtained from a local slaughterhouse, soaked in sodium bromide solution for 5 h, and then washed with water in order to remove adhering fat tissue. The outer layer (epidermis) was washed thoroughly with water, dried at 25% RH, and stored in the freezer until further use. The skins were allowed to hydrate for 1 h before being mounted on the franz diffusion cell. A suitable piece of chicken skin was cut and mounted in the franz diffusion cell in such a way that the dermis faced the receiver compartment and the stratum corneum faced the donor compartment. A known amount of the formulated gel was loaded into the donor compartment. The receptor compartment was filled with 15 ml of phosphate buffer pH 7.4, and the temperature was maintained at 37±0.5 °C. A magnetic bead of suitable size was placed inside the receiver, and the assembly was placed on a magnetic stirrer and stirred for 24 h. Samples (0.5 ml aliquots) were withdrawn from the receptor cell at regular time intervals and analysed spectrophotometrically (Shimadzu Corporation, Japan, 1800) at a wavelength of 215.80 nm. The same procedure was repeated for the normal marketed ketoconazole gel (phytoral). The cumulative amount of drug permeating across the skin was determined as a function of time for both samples in triplicate, and average values were noted down. The permeability coefficient and flux were also calculated using the standard formulas [5]: the flux J (µg/cm²/min) was obtained as the slope of the plot of the cumulative amount of drug permeated per unit area against time, and the permeability coefficient Kp (cm/min) as Kp = J/C0, where C0 is the initial drug concentration in the donor compartment.
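The flux and permeability-coefficient calculation can be scripted as below; the sampling times and cumulative permeation values are placeholders chosen only to give numbers of the same order as those reported for F4, and the donor concentration C0 is likewise an assumed value.

```python
# Flux (slope of cumulative permeation per area vs. time) and Kp = J / C0.
# All input values are illustrative placeholders, not data from this study.
import numpy as np

t_min = np.array([60, 240, 480, 720, 1440])          # sampling times (min)
q_ug_cm2 = np.array([130, 510, 1010, 1520, 3020])    # cumulative amount permeated (ug/cm^2)

J = np.polyfit(t_min, q_ug_cm2, 1)[0]                # steady-state flux (ug/cm^2/min)
C0 = 1400.0                                          # assumed donor concentration (ug/cm^3)
Kp = J / C0                                          # permeability coefficient (cm/min)
print(f"flux J = {J:.2f} ug/cm^2/min, Kp = {Kp:.4f} cm/min")
```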
Antifungal activity

The cup plate method [9,22] was adopted to determine the antifungal activity of the optimized formulation F4 in comparison with the normal marketed ketoconazole gel (phytoral). The fungal strain Aspergillus niger was used as the test microorganism. An inoculum of Aspergillus niger was spread over Sabouraud's dextrose agar medium and allowed to solidify in a petri dish. After the agar plate had solidified, cups were made with a sterile borer (5 mm). Using a sterile syringe, 0.5 ml of the NDG solution was filled into one cup, which was marked 'S'; similarly, 0.5 ml of the marketed gel solution was filled into another cup, which was marked 'M'. The plate was then incubated for 24 h at 37 °C. After incubation, the zones of inhibition were measured and compared.

RESULTS AND DISCUSSION

In the present research work, ketoconazole NDGs were prepared using tween 80 along with carbopol 934 in different proportions by the swollen micelles technique (SMT). The prepared ketoconazole NDGs were characterized for various parameters, including homogeneity, pH, spreadability, extrudability, practical yield, drug content, in vitro drug release, ex vivo permeation, and the biological parameter antifungal activity.

Physical appearance and homogeneity

Except for F9 and F10, all formulations had excellent or good homogeneity and physical appearance (table 2). The formulations were white, viscous, homogeneous, free from fibres, smooth, and considered acceptable with regard to avoiding the risk of irritation upon application to the skin. This implies that the NDG has better patient acceptability than other topical preparations [15,16].

Determination of pH

The pH of all the formulations was between 6.5 and 7.4 (table 2), which falls within the normal pH range of the skin [24] and ensures the compatibility of the formulations with the skin.

Spreadability

The spreadability of all the formulations under study was between 26.65±1.33 and 33.52±0.88 g·cm/s (table 2), which indicates the good spreading efficiency of the NDG and its ability to deliver an ideal quantity to the skin [24].

Extrudability

Extrudability is a useful empirical test to measure the force required to extrude the gel from a tube [25]. In this study, out of the 10 formulations prepared and tested, all except F9 and F10 had an extrudability grade of +++ or ++ (table 2), representing excellent or good extrudability of the NDG.

Practical yield

Table 3 shows the percent practical yield of all the formulations. The PY of the prepared NDGs was found to be in the range of 42.65±0.13% to 97.23±0.51%, with the maximum yield of 97.23±0.51% in formulation F4. These results indicate that the method used was efficient and appropriate for the preparation of NDG [4].

Drug content

The drug content of all the formulations is shown in table 3. The drug content of the prepared NDGs was in the range of 76.55±0.32% to 95.56±1.13%, with the maximum of 95.56±1.13% in formulation F4. These results indicate the high content uniformity of the formulations [18], with the results being within the pharmacopeial limits.

Ex vivo permeation study

Table 6 shows the cumulative percent drug permeation of formulation F4 and the marketed formulation (P). The cumulative % drug permeation of F4 and P after 24 h of the study was found to be 89.12±0.63% and 68.86±1.65%, respectively. The permeability coefficients of F4 and P were found to be 0.0015 cm/min and 0.0012 cm/min, respectively, and the fluxes of F4 and P were 2.10 µg/cm²/min and 1.69 µg/cm²/min, respectively. These results make it clear that formulation F4, which showed the maximum cumulative % drug release among all the formulations, also showed good permeation of the drug across the chicken membrane in comparison with the marketed formulation (phytoral). This indicates that formulating the drug as an NDG enhances its permeability [26].

Antifungal activity

The zones of inhibition of the ketoconazole-loaded NDG (S) and the normal marketed ketoconazole gel (M) after 24 h of incubation were found to be 28.5±0.42 mm and 22.56±2.51 mm, respectively.
These results indicate that the antifungal activity shown by the prepared ketoconazole-loaded NDG is greater than that of the normal marketed ketoconazole gel. This is due to the enhanced permeation and solubility achieved by reducing the particle size to the nano range [9,20].

CONCLUSION

The results of these studies showed that formulating an NDG of ketoconazole leads to an enhancement in the solubility and dissolution behaviour of this poorly soluble antifungal drug. Carbopol 934, used as the gelling agent, produced a gel with an excellent physical appearance, spreadability, and extrudability, which proved carbopol 934 to be a promising gelling agent for preparing NDG. The swollen micelles technique for preparing the nanodispersion was found to be satisfactory, as it produced a good product with a high practical yield as well as a high drug content. Out of the 10 formulations prepared, formulation F4 exhibited the best drug release as well as the best dissolution rate when compared with the pure drug. Formulation F4 also showed good skin permeability and higher antifungal activity than the normal marketed ketoconazole gel. The practical yield and drug content of formulation F4 were also the highest among all the formulations. Hence, we conclude that formulation F4 is an optimized and promising formulation for the effective treatment of topical fungal infections, with enhanced solubility and penetration through the skin.
The Role of Cooperons in the Disordered Electron Problem: Logarithmic Corrections to Scaling Near the Metal-Insulator Transition

The effect of Cooperons on metal-insulator transitions (MIT) in disordered interacting electronic systems is studied. It is argued that previous results which concluded that Cooperons are qualitatively unimportant near the MIT might not be correct, and that the problem is much more complicated than had previously been realized. Although we do not completely solve the Cooperon problem, we propose a new approach that is at least internally consistent. Within this approach we find that in all universality classes where Cooperons are present, i.e. in the absence of magnetic impurities and magnetic fields, there are logarithmic corrections to scaling at the MIT. This result is used for a possible resolution of the so-called exponent puzzle for the conductivity near the MIT. A discussion of the relationship between theory and experiment is given. We also make a number of predictions concerning crossover effects which occur when a magnetic field is applied to a system with Cooperons.

I. INTRODUCTION

The current theoretical description of the metal-insulator transition (MIT) problem asserts that the presence or absence of the particle-particle or Cooper channel does not qualitatively modify the MIT. 1,2 At first sight this may seem surprising, as much of the early work in the modern (post-1979) theory of the localization problem concentrated on the Cooper channel and the backscattering or weak localization effects it produces. 3 Also, numerous experiments confirmed the presence of these weak localization effects in weakly disordered metallic systems in the absence of magnetic impurities and magnetic fields. 4 However, the assertion appears less surprising if one recalls that electron-electron interaction effects in the presence of disorder lead to many of the same effects as Cooperons. 5 If this is so in the weak disorder regime, and if one acknowledges that electron-electron interactions are in general relevant for the MIT, then it is conceivable that Cooperons do not lead to any additional effects at the MIT over and above those produced by the interplay of interactions and disorder alone. This was in fact the conclusion reached by Finkel'stein 1 and others 2 who have argued that the Cooper channel is irrelevant, in the sense of the renormalization group (RG), for the MIT, that interaction effects effectively replace Cooperon effects near the MIT, and that MIT with or without Cooperons are qualitatively the same. In this paper we argue that the latter conclusion is probably not correct. We present a consistent description within which the Cooper propagator, or the effective Cooper interaction amplitude, Γ_c, is a marginal operator rather than an irrelevant one. This marginal operator leads to logarithmic corrections to scaling that are characteristic of those universality classes where Cooperons are present. Technically, we first show that the MIT problem in the presence of Cooperons is substantially more complicated than had previously been realized. In particular, the renormalization procedures used in existing treatments of the problem do not lead to a finite renormalized field theory. We further argue that even if this problem is ignored, the RG fixed point structure obtained in previous treatments is not stable with respect to the consideration of higher order terms that were neglected. We then present a partial solution to this problem.
We first point out that a principal unanswered question is that of how many renormalization constants are needed to make the theory finite. This is closely related to the physical question of whether Γ_c is a simple scaling variable near the MIT or whether it consists of several scaling parts. Since at present we do not have a firm answer to this question, we first derive RG flow equations for all other coupling constants which appear in the theory. These flow equations can be expressed in terms of Γ_c, whose scaling behavior is a priori unknown. We then derive an Eliashberg-type integral equation for Γ_c. General arguments lead us to the conclusion that Γ_c consists of several scaling parts, and that it approaches its asymptotic value at the MIT, Γ_c^*, logarithmically slowly. The most important conclusion from our considerations is that in systems without magnetic impurities or magnetic fields there are logarithmic corrections to scaling at the MIT. This in turn implies that it is virtually impossible to experimentally reach the asymptotic critical scaling regime for these universality classes, and that the existing experiments measure effective exponents rather than asymptotic critical exponents. We show that these logarithmic corrections to scaling provide a possible resolution of a long-standing problem. The critical exponent s for the conductivity in Si:P 6 and some other systems, 7,8 all of which are believed to be in universality classes that contain Cooperons, is experimentally observed to be smaller than 2/3, in apparent violation of a rigorous bound that requires s ≥ 2/3 in three-dimensional (3-D) systems. 9 We will see that logarithmic corrections to scaling can easily account for an apparent or effective exponent s_eff ≈ 0.5 even if the true asymptotic critical exponent obeys s ≥ 2/3. The same conclusion was reached in a previous short account of part of the work presented in this paper. 10

The plan of this paper is as follows. In Sec. II we use Finkel'stein's effective field theory for the MIT 1 to calculate the perturbative corrections to the coupling constants which appear in the theory to one-loop order. In Sec. III we use a normalization point RG approach to renormalize the field theory and to derive RG flow equations for the coupling constants in terms of the Cooper propagator Γ_c. We also show how previous results can be obtained from certain assumptions and approximations for Γ_c. We find that these assumptions do not lead to a finite renormalized theory, and that terms neglected in the previous treatments invalidate this approach in any case. In Sec. IV we classify and discuss the possible solutions of the integral equation for Γ_c. We argue that generic solutions of the integral equation yield a Γ_c that approaches its fixed point value logarithmically slowly, which leads to logarithmic corrections to scaling. In Sec. V we discuss the experimental consequences of this result. We compare the theory with existing experiments and suggest a number of new measurements to further test the theory. In particular we discuss crossover effects due to an external magnetic field, which changes the universality class and eliminates the logarithmic corrections to scaling. We conclude in Sec. VI with a general discussion of our results as well as the current status of the MIT problem.
II. THE FIELD THEORY, AND THE LOOP EXPANSION

In the first part of this section we recall the basic field theoretic description of the disordered electron problem 11,1 and derive the Gaussian propagators of the field theory. We then explain how to obtain perturbative corrections to the coupling constants by considering the generating functional to one-loop order.

A. The Model

The existing theoretical description of the MIT is based on the usual assumption of the theory of continuous phase transitions, viz. that the physics near the transition is dominated by the low-lying excitations of the system. As for the description of other phase transitions, the problem is then reduced to obtaining a solution for an effective field theory for these slow modes. Once the field theory has been identified, the technical apparatus used to obtain a solution is the RG. The initial problem is to obtain the field theory, which requires a physical identification of the relevant slow modes. In the field theory which we will use, 1,2 the slow modes in the metallic phase are assumed to be the diffusion of mass, spin, and energy density. The effective field theory for long-wavelength and low-frequency excitations that describes how these diffusive processes change across an MIT is defined by an action, Eq. (2.1), consisting of three terms: a matrix nonlinear sigma-model gradient term proportional to (1/G) tr(∇Q)², a frequency term proportional to H tr(ΩQ), and an interaction term with coupling constants K_s, K_t, and K_c. Here the field variable Q is an infinite matrix whose matrix elements, Q^{αβ}_{nm}, are complex 4-by-4 matrices (spin-quaternions) which comprise the spin and particle-hole degrees of freedom. The labels α, β = 1, 2, ..., N denote replica labels. In deriving Eq. (2.1), quenched disorder has been integrated out by means of the replica trick, 12 and the limit N → 0 is implied at the end of all calculations. n, m = −∞, ..., +∞ are Matsubara frequency labels. Ω = Iω_n, with I the identity matrix and ω_n = 2πT(n + 1/2), is a fermionic frequency matrix, and tr denotes a trace over all discrete degrees of freedom. The interaction vertices in the particle-hole spin-singlet and spin-triplet channels, [Qγ^(s)Q] and [Qγ^(t)Q], are given by Eqs. (2.2a) and (2.2b), while the Cooper channel vertex reads

[Qγ^(c)Q] = − Σ_{n1,n2,n3,n4} δ_{n1+n2,n3+n4} Σ_α Σ_{r=1,2} tr[(τ_r ⊗ s_0) Q^{αα}_{n1n2}] tr[(τ_r ⊗ s_0) Q^{αα}_{n3n4}] , (2.2c)

with tr = tr_s tr_τ, where tr_τ acts only on the τ's and tr_s acts only on the s's. In Eqs. (2.2), τ_0 = s_0 = σ_0 and τ_j = −s_j = −iσ_j (j = 1, 2, 3), with σ_j the Pauli matrices. The theory contains five coupling constants. G = 8/πσ_B, with σ_B the bare or self-consistent Born conductivity, is a measure of the disorder, and H = πN_F/4 plays the role of a frequency coupling parameter, with N_F the bare density of states (DOS) at the Fermi level. K_s and K_t are singlet and triplet particle-hole interaction constants, respectively, and K_c is the singlet particle-particle or Cooper channel interaction constant. At zero frequency the triplet coupling constant in the particle-particle channel vanishes due to the Pauli principle. A disorder generated, frequency dependent triplet particle-particle interaction constant has been discussed elsewhere. 13 For simplicity we formulate the theory with a short-range model interaction, i.e. the K_{s,t,c} are simply numbers. For the more realistic case of a Coulomb interaction, K_s is x-dependent and must be kept under the integral in Eq. (2.1). Most results for this case are easily obtained after all calculations have been performed by essentially putting K_s = −H, 1 but occasionally subtle complications arise. Since these are purely technical in nature, and decoupled from the Cooper channel induced problems which are the subject of this paper, we will not discuss them. We will, however, give results for the Coulomb interaction case in Sec. III.
The matrix Q is subject to the nonlinear constraints of Eqs. (2.3a,2.3b) and satisfies the hermiticity condition, Eq. (2.3c). The τ_i are the quaternion basis and span the particle-hole and particle-particle space, while the s_i serve as our basis in spin space. For convenience we expand Q^{αβ}_{nm} in this basis, Eq. (2.4). From the first equality in Eq. (2.3c) it follows that the elements of Q describing the particle-hole degrees of freedom are real, while those describing the particle-particle degrees of freedom are purely imaginary; together with the hermiticity condition, this yields the symmetry relations of Eqs. (2.6). The constraints given by Eqs. (2.3a,2.3b) and the hermiticity requirement, Eq. (2.3c), can be eliminated by parametrizing the matrix Q as in Eqs. (2.7). 14 Here the q are matrices with spin-quaternion valued elements q^{αβ}_{nm}, with n = 0, 1, ... and m = −1, −2, .... Like the matrix Q, they can be expanded in the spin-quaternion basis. Note that the q do not satisfy the symmetry relations given by Eqs. (2.6) for the Q. We have so far given the theory for the so-called generic (G) universality class, which is realized by systems without magnetic fields, magnetic impurities, or spin-orbit scattering. Apart from class G, the second universality class with Cooperons is the one with strong spin-orbit scattering (class SO). For class SO the action is as given above, except that the particle-hole spin triplet channel is absent, i.e. the sum over the spin index i in Eqs. (2.4) and (2.7b) is restricted to i = 0. 15 In what follows we will give results for both class G and class SO. With the help of Eqs. (2.7) one can expand the action in powers of q, Eqs. (2.9). Here ∫_p ≡ ∫ dp/(2π)^D, and 1 ≡ (n_1, α_1), etc. The matrix M is given in Eqs. (2.9), with ν_0 = s and ν_{1,2,3} = t. The Gaussian propagators are given by Eqs. (2.10a)-(2.10c), where

f_n(p) = Σ_{n1≥0, n2<0} δ_{n,n1+n2} D_{n1−n2}(p) . (2.10d)

Here we have introduced the propagators D_n(p), D^s_n(p), and D^t_n(p), Eqs. (2.10e)-(2.10g), with Ω_n = 2πTn a bosonic Matsubara frequency. Physically, D_n, D^s_n, and D^t_n are the energy, mass, and spin diffusion propagators. 17 Equations (2.10b,2.10c) can be put into a more standard form by summing over, e.g., n_3 and n_4; this yields Eqs. (2.11). Examining the various terms in Eqs. (2.11) we see that all of them have a standard propagator structure except for the last contribution in Eq. (2.11b). In interpreting these propagators as having a standard structure, the Matsubara frequencies in Eqs. (2.10e)-(2.10g) are taken to be analogous to a magnetic field at a magnetic phase transition, i.e., the MIT occurs at Ω_n → 0, and Ω_n or the temperature is a relevant perturbation in the RG sense. Using Eq. (2.10a), changing the sum to an integral, and placing an ultraviolet cutoff, Ω_0, on the resulting frequency integrals shows that 2πT f_{n1+n2}(p) in Eq. (2.11b) diverges logarithmically in the long-wavelength, low-temperature limit. With K_c > 0, we see that the last term in Eq. (2.11b) is logarithmically small compared to the other terms in Eqs. (2.11). We conclude this subsection with two remarks. Firstly, the logarithm discussed above that appears in the particle-particle density correlation function is just the usual BCS logarithm. However, since we consider a system with a repulsive Cooper channel interaction, K_c > 0, which is not superconducting in the clean limit, this does not lead to a Cooper instability. Rather, the last term in Eq. (2.11b) vanishes logarithmically in the limit p, T → 0. If the structure of this term persists for disorder values up to the MIT, and if it couples to physical quantities like, e.g., the conductivity, then it will lead to logarithmic corrections to scaling.
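The logarithmic behavior of 2πT f_n(p) quoted in the first remark is easy to check numerically. The sketch below takes a schematic diffusive form D_m(p) = 1/(p² + GH Ω_m), which is an assumption consistent with the propagators introduced above but not meant to reproduce the precise prefactors of Eqs. (2.10e)-(2.10g), sets p = 0 and n = 0, and verifies that the Matsubara sum grows like ln(Ω_0/T) as the temperature is lowered.

```python
# Numerical check that 2*pi*T * f_0(p=0) grows like ln(Omega_0 / T): a BCS-type log.
# Schematic diffusive propagator D_m = 1/(p^2 + G*H*Omega_m); prefactors are not
# meant to match Eqs. (2.10e)-(2.10g) exactly.
import math

GH = 1.0                # the product G*H (illustrative value)
Omega0 = 1.0e3          # ultraviolet frequency cutoff

for T in (1.0, 0.1, 0.01, 0.001):
    n_max = int(Omega0 / (2 * math.pi * T))           # cutoff on the Matsubara sum
    s = sum(1.0 / (GH * 2 * math.pi * T * m) for m in range(1, n_max + 1))
    value = 2 * math.pi * T * s                       # 2*pi*T * f_0(0)
    # The two columns differ only by a T-independent constant (Euler's gamma),
    # i.e., the sum tracks ln(Omega0/T) as T is lowered.
    print(f"T = {T:7.3f}:  sum = {value:6.3f},  "
          f"ln(Omega0/(2*pi*T)) = {math.log(Omega0 / (2 * math.pi * T)):6.3f}")
```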
Secondly, considering the Gaussian theory one can already anticipate a fundamental problem with any RG treatment of the field theory. To see this, note that at the Gaussian level the two-point vertex functions are given by S_2[q]. Examining Eqs. (2.9) we see that at this order no singularities, neither in the ultraviolet nor in the infrared, are present in the vertex functions, and there is no explicit cutoff dependence. This should be contrasted with the corresponding two-point Gaussian propagators given by Eqs. (2.10). Because of the last term in Eq. (2.10c), both an infrared singularity and a dependence on an ultraviolet cutoff appear. In the usual RG approach such cutoff dependences are eliminated from the field theory by the introduction of suitable renormalization constants. Here, the unusual features are that vertex functions and propagators behave differently with respect to their cutoff dependence, and that the cutoff dependent term is logarithmically small rather than large. In previous RG treatments of this problem, the procedure used was effectively to renormalize K_c in Eqs. (2.9c) and (2.10c) such that the renormalized two-point propagators were finite as Ω_0 → ∞. It is easy to see that such a procedure leads to renormalized two-point vertex functions that contain a singularity in this limit, cf. Eq. (3.10) below. In Sec. III we will further discuss this problem, and we will propose an alternative renormalization procedure.

B. One-loop perturbation theory

We will not be able to present a final solution of the Cooperon renormalization problem. We will therefore consider several renormalization procedures, and will discuss which ones are at least internally consistent and which ones are not. One of these procedures is based on perturbation theory for the generating functional for the vertex functions, Γ[q̄], which is closely related to the thermodynamic potential. The perturbation expansion for Γ[q̄] can be generated by standard techniques. 18 First, a source term with an external potential J^{αβ}_{nm}(x) is added to the action so that the average of q^{αβ}_{nm}, which we denote by q̄^{αβ}_{nm}, is nonzero. We can then obtain a loop or disorder expansion for Γ[q̄]. It is convenient to expand Γ[q̄] in powers of q̄, Eq. (2.12), where Γ^(N) is the N-point vertex function. To zero-loop order, one has Γ[q̄] = S[q̄]. At higher-loop order the various coefficients in S acquire perturbative corrections, and in addition structurally new terms are generated. For simplicity we restrict our considerations to the two-point vertex function Γ^(2), and to the one-point vertex function Γ^(1), which is related to the one-point propagator. Γ^(2) is given by the second derivative with respect to q̄ of the right-hand side of Eq. (2.9a) with q → q̄, and with the matrix M replaced by M′, Eqs. (2.14), where G_{n1n2}, H_{n1n2}, and K^{s,t,c}_{n1n2,n3n4} are given by G, H, and K_{s,t,c}, respectively, plus frequency dependent one-loop perturbative corrections. In our notation we have suppressed the fact that these corrections are in general also momentum dependent. In the absence of Cooperons these corrections have been discussed in detail elsewhere. 16 In general the momentum and frequency dependence of the corrections is quite complicated. Here we just give the results to leading order in 1/ε, with ε = D − 2, in Eqs. (2.15). Here Λ is an ultraviolet momentum cutoff, the rescaled disorder is G̃ = G S_D/(2π)^D, with S_D the surface of the D-dimensional unit sphere, and Ω_0 = O(Λ²/GH) is a cutoff frequency. For the one-point vertex function one finds Eq. (2.17).
In giving Eqs. (2.15)-(2.17) we have neglected terms that are finite in D = 2 as (Λ, Ω_0) → ∞, and a delta function constraint (cf. Eqs. (2.14)) is understood in Eqs. (2.15c,2.15d,2.15e). Some of these terms depend on n_1 and n_2 separately, not just on the difference n_1 − n_2. In Eq. (2.15b) we have written the separate dependence on n_1 and n_2 explicitly for later reference. Also, the complete frequency dependence of K^t_{n1n2,n3n4} and K^c_{n1n2,n3n4} is more complicated than the one shown. However, for most of our purposes it is sufficient to treat all 'external' frequencies as equal, and we do not have to deal with the (substantial) complications that arise from the full perturbation theory. Finally, let us discuss one important point. To one-loop order the two-point propagators are given by the right-hand side of Eq. (2.10a) with the inverse matrix M^{−1} replaced by the matrix M′^{−1}, which contains the perturbative corrections. For the particle-hole degrees of freedom one can show in general that, except for irrelevant terms, the matrix M′^{−1} has the same form as the matrix M^{−1}, with the only difference being that the corrected coupling constants appear in M′^{−1}. The underlying reason for this feature is the conservation laws for mass, spin, and energy density. For the particle-particle degrees of freedom the situation is different. There are no conservation laws which guarantee that the form of the last term in Eq. (2.10c) will not change at higher orders in the loop expansion. In general, the particle-particle part of M′^{−1} involves a Cooper propagator Γ^(c), Eq. (2.18a), where Γ^(c) satisfies an Eliashberg-type integral equation, Eq. (2.18b), which relates Γ^c_{n1n2,n3n4} to the coupling constant K_c through a Matsubara frequency sum proportional to 2πT. Here we have again suppressed the momentum dependence of Γ_c. Note that if we replace the corrected coupling constants in Eq. (2.18b) by their bare values, replace the remaining sum by an integral, let p, T → 0, and use an ultraviolet cutoff Ω_0 on the frequency integral, then Γ_c has the standard BCS form, Eq. (2.18c), with γ^(0)_c = K_c/H and D^(0) = 1/GH. The actual Cooper propagator to all orders in perturbation theory would have the simple form given by Eq. (2.18c) only if the coupling constants K_c and H were constant to all orders. In general, the structure of the Cooper propagator will be more complicated, and to obtain it one has to solve the integral equation, Eq. (2.18b). For later reference we symmetrize the equation by defining a symmetrized propagator Γ̃, Eq. (2.19). For T → 0 the integral equation for Γ̃ then reads Eq. (2.20a). Since we consider the zero temperature limit we have made the replacements 2πT Σ_n → ∫ dω, etc., and for definiteness we have assumed Ω ≥ 0. We stress again that Eq. (2.20a) has the form of an Eliashberg equation with a repulsive kernel.
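To make the 'standard BCS form' statement concrete, the sketch below evaluates the resummed amplitude at decreasing frequencies. We assume here the conventional ladder result for a repulsive amplitude, Γ_c(Ω) = γ_c^(0)/[1 + γ_c^(0) ln(Ω_0/Ω)]; this explicit expression is our assumption for Eq. (2.18c), and the value of γ_c^(0) is illustrative. The output exhibits the logarithmically slow vanishing that underlies the corrections to scaling discussed in this paper.

```python
# Logarithmically slow decay of a repulsive Cooper amplitude, assuming the
# BCS-resummed form Gamma_c(Omega) = gamma0 / (1 + gamma0 * ln(Omega0/Omega)).
import math

gamma0 = 0.5            # bare amplitude gamma_c^(0) = K_c / H (illustrative value)
Omega0 = 1.0            # ultraviolet cutoff frequency

for decade in range(0, 13, 3):
    Omega = Omega0 * 10.0 ** (-decade)
    gamma = gamma0 / (1.0 + gamma0 * math.log(Omega0 / Omega))
    print(f"Omega = 1e-{decade:02d}: Gamma_c = {gamma:.4f}")
# Even after twelve decades in frequency, Gamma_c has decayed only by a factor
# ~ gamma0 * ln(1e12) ~ 14: a logarithmic, not power-law, approach to zero.
```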
In principle this approach is equivalent to the Wilsonian RG, which examines how the theory changes when the ultraviolet cutoff is changed from, e.g., Λ to Λ/b with b a RG rescaling factor.[19] However, only a limited amount of work has been done on the formal relationship between these two formulations of the RG.[20] A point of fundamental importance in any RG approach to a field theory is that one has to determine how many independent scaling operators there are. In the field theoretic RG approach, one needs to know how many renormalization constants are needed to make the theory finite. The field theory defined by Eqs. (2.1)-(2.3) has the form of a nonlinear sigma-model with perturbing operators. Let us first consider the pure sigma-model part, i.e. the first term on the r.h.s. of Eq. (2.1) with the constraints given by Eqs. (2.3). This term is invariant under the symplectic symmetry group Sp(8nN), with N the number of replicas and 2n the number of Matsubara frequencies. This model is well known to be renormalizable with two renormalization constants, one for the coupling constant G (the disorder) and one for the renormalization of the Q-field.[21] The second term on the r.h.s. of Eq. (2.1) breaks this symmetry down to a subgroup. It does not require any additional renormalization constants because the coupling constant H in Eq. (2.1) just multiplies the basic Q-field, and therefore the renormalization of H is determined by the field renormalization constant.[21,22] This model represents the noninteracting localization problem.[11] From a physical point of view the model has a rather restrictive property: the only interaction taken into account is the elastic electron-impurity scattering, and consequently the different Matsubara frequencies in Eq. (2.1) are decoupled. Effectively, n is held fixed, and this is crucial for the simple renormalization properties of the noninteracting model. The situation remains relatively simple if further terms are added which respect the non-mixing of the frequencies. In this case, one needs one additional renormalization constant for each operator that represents a different irreducible representation of Sp(8nN).[21] The situation changes fundamentally with the addition of the last term in Eq. (2.1). Physically, this term describes the electron-electron interaction and hence the exchange of energy between electrons. Technically, this leads to a coupling between the Matsubara frequencies, and an examination of the perturbation theory shows that this term introduces new infrared and ultraviolet singularities as n → ∞. In the absence of interactions, singularities arise only from momentum integrations and the symmetry group is fixed to be Sp(8nN). With interactions there are singularities due to both momentum and frequency integrations, and the symmetry properties of the model change continuously during the RG procedure. As a consequence, the results mentioned above concerning the renormalizability of nonlinear sigma-models with perturbing operators, which apply to models with a fixed symmetry, are inapplicable. No general results concerning the number of renormalization constants needed are available, and it is unclear how to renormalize the model given by the full Eq. (2.1). The symmetry arguments quoted above can provide only a lower bound on the number of Z's needed to renormalize the theory, and it is not known whether the model is renormalizable with a finite number of renormalization constants.
All RG treatments of the field theory defined by Eqs. (2.1)-(2.3) so far have ignored this general renormalizability problem. They have assumed, explicitly or implicitly, that the full model is still renormalizable, with one extra renormalization constant for each interaction coupling constant which is added. In addition, H acquires a renormalization constant of its own once interactions are present. In the absence of Cooperons, i.e. in a theory which contains only K^s and K^t, there is empirical evidence based on perturbation theory for this assumption being correct.[14,23] In the presence of Cooperons things are more complicated, as we will see. For pedagogical reasons, and to make contact with previous work, we nevertheless proceed for a while using this assumption. It is most convenient to use a normalization point RG. The renormalized disorder, frequency coupling, interaction constants, and q-fields are denoted by g, h, k_s, k_t, k_c, and q_R, respectively. They are defined by Eqs. (3.1)-(3.4), where µ is an arbitrary momentum scale; the interaction constants are fixed by normalization conditions involving subtractions of the form [K^{s,t,c}_{n1n2,n3n4} − K^{s,t,c}] evaluated at ω_{n1} − ω_{n2} = µ^D/gh. The fastest way to derive the RG flow equations is to switch from our cutoff regularized theory to a minimal subtraction scheme. We can do so by putting ε < 0 in Eqs. (3.4), letting Λ → ∞, and then analytically continuing to ε > 0. We find Eqs. (3.5a)-(3.5d), with l_t = ln(1 + γ_t), γ_{t,c} = k_{t,c}/h. In giving Eqs. (3.5), and (3.6) below, we have for simplicity neglected terms of order g Γ̄^n with n ≥ 2. The omission of these terms does not affect our conclusions. The one-loop RG flow equations follow from Eqs. (3.1) and (3.5) in the usual way.[18] With b ∼ µ^{−1} and x = ln b one obtains Eqs. (3.6). In Eqs. (3.5), (3.6), Γ̄ is the Cooper propagator at scale µ, defined in Eq. (3.7a); to lowest order in the disorder one has Eq. (3.7b). In giving Eqs. (3.5), (3.6) we have specialized to the case of a Coulomb interaction between the electrons. In this case a compressibility sum rule enforces γ_s = k_s/h = −1,[1] and the terms ln(1 + γ_s) that appear in doing the integrals become −2/ε in Eqs. (3.6). The presence of these terms in the RG flow equation for γ_c reflects the well-known (ln)² singularity that exists in the disorder expansion of the single-particle DOS in D = 2.[5] It has sometimes been claimed that these DOS effects are absent in all flow equations for the interaction constants.[24] We stress that this statement depends on exactly what quantity one tries to derive a flow equation for. We find that it is true for γ_s and γ_t, but not for the Cooperon amplitude γ_c. It is also important to note that the presence of these 1/ε-terms in the flow equations per se does not create any problems. They do appear, e.g., in the renormalization of the single-particle DOS, for which a careful application of the RG[25] leads to results that are consistent both internally and with those obtained by other methods.[1] The Eqs. (3.6) are valid for the universality class G. For the spin-orbit class SO the analogous results are given in Eqs. (3.8).

B. A Scaling Equation for Γ̄

The RG flow equations given by Eqs. (3.6) are not closed because they contain the Cooper propagator Γ̄. Ref. 10 just used Eq. (3.7b) in Eqs. (3.6). In general this is not satisfactory, since for a consistent one-loop RG description one needs dΓ̄/dx to one-loop order. In Ref. 10 we argued that this was justified near the MIT since we found that γ_c approaches a finite FP value at the transition. Here we try to improve on this point. In principle, Γ̄ can be determined by solving the integral equation given by Eqs. (2.20) together with Eqs. (3.6) and (3.7).
This structure is very different from the one usually encountered in RG approaches, where the flow equations are an autonomous set of coupled differential equations. In this subsection we discuss attempts to reduce the integral equation for Γ̄ to a single differential equation. We note that for generic integral equations this cannot be done. What has effectively been assumed in the previous literature[1,2] is that the reduction is possible in this case. As we will see, this reduction leads to severe structural problems which went unnoticed in the previous treatments quoted, since the techniques used were not sensitive to them. In the next section we will therefore turn to a different method, which determines the behavior of Γ̄ directly from Eqs. (2.20). This will qualitatively yield the same result as inserting Eq. (3.7b) into Eqs. (3.6). Here we investigate possible avenues for deriving a flow equation for Γ̄ strictly in order to make contact with previous work. Within the normalization point RG approach we can formally obtain a differential equation for Γ̄ by renormalizing the Cooper propagator rather than the Cooper vertex function. We impose a normalization condition, Eq. (3.9a), and define a renormalization constant Z̄_c by Eq. (3.9b). Notice that this approach assumes that Γ̄ is a simple scaling quantity. To zeroth order in the disorder, Eqs. (3.7) and (3.9) give Eqs. (3.10). A few remarks are in order in the context of Eq. (3.10b): (1) It has the standard form of a RG flow equation for a marginal operator. (2) In this approach the RG is used to obtain the BCS logarithm. This is in contrast to, e.g., Eq. (2.11b), where we obtained the BCS logarithm at Gaussian order by inverting the vertex function. (3) In the present subsection the physical meaning of γ_c is different from the rest of this paper. Due to Eq. (3.9a) it plays the role of the Cooper propagator rather than that of the Cooper interaction amplitude that appeared in the previous subsection. (4) A crucial question is what the structure of the higher-loop corrections to Eq. (3.10b) will be. In a particular approximation that has been made in the literature, and which we will discuss below, γ_c flows to a nonzero fixed point value, γ_c → γ*_c. In this case Eq. (3.10a) naively implies that Z̄_c has a Cooper-type singularity at a finite scale x = ln b = 1/γ*_c, which does not correspond to any physical phase transition. Of course, this conclusion is in general not necessarily correct, since terms of higher order in the disorder could change the behavior of Z̄_c. Nevertheless, we will see that the appearance of such an unphysical singularity represents a serious problem.[27] We will discuss this point in connection with Eq. (3.14) below, and in the Conclusion. Within this approach the one-loop RG flow equation can be obtained by using Eqs. (2.15) and (2.19a) in Eq. (2.20a), and iterating to first order in the disorder. From Eqs. (3.1) and (3.9) one can then obtain a flow equation for γ_c. The resulting equation is quite complicated. Since we have come to the conclusion that this is not a viable approach, we will not give a complete discussion, but rather illustrate only a few points. First, let us make contact with previous work[1,2] by retaining only the first two terms in Eq. (2.15e), neglecting the corrections to H, putting p = 0, and working in D = 2. In this approximation, γ in Eq. (2.20a) is given to leading logarithmic accuracy by Eq. (3.11a), with a coefficient a_1 given, for the generic and spin-orbit universality classes respectively, in Eq. (3.11b).
Using Eq. (3.11a) in the procedure discussed above, we obtain the one-loop flow equation for γ_c given in Eq. (3.12a). This result is identical to those in Refs. 1 and 2, except for the prefactor of the a_1 term, which is 4 in Refs. 1,2. This difference is due to the fact that for a Coulomb interaction, which was considered in these references, an additional term appears in Eq. (2.15e). This leads to a more complicated kernel than the one in Eq. (3.11a), and to an additional factor of 2 in Eq. (3.12a'). Since this difference is not relevant for our purposes, we restrict ourselves to the simpler kernel for the short-range case, Eq. (3.11a). In the approximation discussed above one finds γ_c → const. at the MIT. As noted below Eqs. (3.10), this result creates some problems. To illustrate this point we add to Eq. (3.11a), via Eq. (2.19a), the Cooper propagator contribution to H_{n1n2} and H_{n3n4}, i.e. the integral over I^c_1 in Eq. (2.15b). The result is a kernel in Eq. (2.20a) which is given by Eq. (3.13a), with coefficients defined in Eq. (3.13b). The last two terms in Eq. (3.13a) occur because of the dependence of γ on the Cooper propagator. In the last term in Eq. (3.13a) we have neglected a dependence on Ω which is irrelevant for our purposes. Using Eq. (3.13a) in the RG procedure yields the flow equation, Eq. (3.14). The crucial point is that as x = ln b → ∞ the last term in Eq. (3.14) does not exist, because of a singularity at b ∼ exp(1/γ_c) ∼ exp(1/ε^{1/2}). Notice that this breakdown of the RG flow equation for the Cooper propagator is nonperturbative in nature. Previous attempts to derive a RG flow equation for the Cooper propagator[1,2] were based on a frequency-momentum shell RG approach. This method is by necessity restricted to low order perturbative expansions in both g and γ_c, and cannot be used to discuss the singularity in Eq. (3.14). Furthermore, if one expands the last term in Eq. (3.14) in powers of γ_c, then a non-Borel-summable divergent series is obtained. The structure of this singularity resembles what is known as the renormalon problem in quantum field theory.[18] We also note that this singularity is obviously related to the one discussed below Eq. (3.10b). In the following section we will argue that the Cooper propagator actually consists of multiple scaling parts, and that any theoretical approach that does not acknowledge this feature is invalid. It is unclear whether or not the singularity discussed in connection with Eq. (3.14) is related to this problem, which in turn is related to the question of whether or not the theory is renormalizable. Usually it is necessary to go to higher than one-loop order in order to verify the presence of multiple scaling parts. In the present case, however, the fact that a renormalization of the propagator[1,2] and of the vertex function,[10] respectively, led to different results is already an indication of their presence.
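To get a feeling for this type of singularity, it is instructive to recall a standard toy model; this is purely illustrative and is not the actual last term of Eq. (3.14), whose explicit form we do not reproduce. The function

F(γ_c) = ∫_0^∞ dt e^{−t}/(1 − γ_c t), γ_c > 0,

is ill defined because of the pole at t = 1/γ_c, in analogy to the singularity of the flow equation at b ∼ exp(1/γ_c). Its formal expansion,

F(γ_c) ∼ Σ_{n≥0} n! γ_c^n,

diverges with all coefficients positive, so the Borel transform Σ_n γ_c^n t^n/n! = (1 − γ_c t)^{−1} has a pole on the positive real axis, and the series is not Borel summable. This is precisely the renormalon-type structure referred to above.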
IV. THE ELIASHBERG EQUATION, AND LOGARITHMIC CORRECTIONS TO SCALING

In the first part of this section we discuss some general features of the Eliashberg equation with a repulsive kernel given by Eq. (2.20a). We then give theoretical arguments in favor of the existence of logarithmic corrections to scaling at the MIT, and discuss their experimental relevance.

A. The Eliashberg Equation

In order to complete the RG description started in Sec. III A, the scaling properties of the Cooper propagator given by Eqs. (2.19), (2.20), and (3.7) need to be determined. In Sec. III B we showed why previous attempts to reduce the integral equation for Γ̄ to a differential equation are not satisfactory. Here we pursue a different approach. We acknowledge that one has to actually solve the integral equation in order to obtain information about Γ̄. Since this is very difficult to do in general, we classify possible solutions for different behaviors of the kernel γ(ω, Ω, ω′). To simplify our considerations we work at zero momentum. We further specialize to a model with a separable kernel. After drawing some conclusions for this special case we will argue that these are actually generic. The main points can be illustrated using a kernel that is a sum of two separable parts, Eq. (4.1a). Note that Eq. (4.1a) satisfies the symmetry requirement γ(ω, Ω, ω″) = γ(ω″, Ω, ω), Eq. (4.1b). Eq. (4.1b) is a consequence of the symmetry property K^c_{n1n2,n3n4} = K^c_{n3n4,n1n2}, which in turn follows from Eq. (2.2c). Inserting Eq. (4.1a) into Eq. (2.20a) leads to a separable integral equation that can be easily solved. We obtain Eqs. (4.2). In the previous literature it has been suggested that Γ̄(µ), given by Eq. (3.7a), goes to a finite fixed point value at the MIT (cf. the discussion in the previous subsection), and that the approach to criticality is characterized by a conventional power law. To see how this type of behavior can be realized from Eqs. (4.1), (4.2), we specialize to criticality, and put f_2 = 0. If f_1 diverges like, e.g., a power law, Eq. (4.3), then the fixed point (FP) value of Γ̄ is finite, Eq. (4.4). If we assume that these asymptotic results are not tied to the separable kernel, but are generic properties of the general Eliashberg equation, then we have the following situation: If γ diverges (vanishes) at the MIT, then Γ̄ has a finite (zero) FP value, and if γ goes to a constant then Γ̄ vanishes logarithmically slowly. In all of these cases Γ̄ satisfies an autonomous differential equation with universal coefficients. However, from a more general point of view, going beyond the asymptotic behavior, one expects a more complex result. For instance, even if γ → γ* at the MIT one expects a correction that vanishes either as a power law or as a logarithm. In fact, Eq. (3.6d) predicts this kind of behavior. In our model calculation this happens if f_2 ≠ 0. With Eqs. (4.2) it is easy to see that Γ̄(µ) does not satisfy an autonomous differential equation if f_2 ≠ 0. Of course, this just reflects the obvious fact that in general an integral equation cannot be reduced to a single differential equation. We further note that even if γ diverges at the MIT, one generically still expects a finite subleading contribution. Using this in either Eq. (4.2a) or in the actual Eliashberg equation, Eq. (2.20a), one finds that (1) Γ̄(µ) does not satisfy a single differential equation, and (2) Γ̄ approaches a finite FP value Γ̄*, but only logarithmically slowly. We conclude that both for the case where γ diverges and for the case where it approaches a constant at the MIT, one expects a logarithmically slow approach to the FP value of Γ̄. The only other possibility is that γ vanishes as a power law at the MIT. In this case Γ̄ also vanishes as a power law, and in a scaling description (cf. below) Γ̄ is a conventional irrelevant variable. While at present we cannot exclude this scenario, we consider it unlikely because of the first term on the r.h.s. of Eq. (3.6d) (or the second term on the r.h.s. of Eq. (2.15e)), which tends to drive γ_c, and hence γ, towards larger values.
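As a concrete illustration of the statement that information about Γ̄ requires actually solving the integral equation, the following sketch discretizes a schematic equation of the form Γ̄(ω, ω′) = γ(ω, ω′) − ∫_µ^{Ω_0} (dω″/ω″) γ(ω, ω″) Γ̄(ω″, ω′) on a logarithmic frequency grid and solves the resulting linear system. The model equation, the grid, and the constant kernel are illustrative assumptions standing in for Eq. (2.20a), whose true kernel and frequency weights are more complicated; for a constant kernel the solution must reproduce the BCS resummation Γ̄ = γ/(1 + γ ln(Ω_0/µ)), which provides a check.

    import numpy as np

    # Discretize Gbar(w,w') = gamma(w,w') - int_mu^Omega0 (dw''/w'') gamma(w,w'') Gbar(w'',w')
    mu, Omega0, n = 1e-6, 1.0, 2000
    omega = np.logspace(np.log10(mu), np.log10(Omega0), n)
    wts = np.gradient(omega) / omega          # the measure dw''/w''

    gamma0 = 0.3
    gamma = np.full((n, n), gamma0)           # model kernel; a separable sum as in
                                              # Eq. (4.1a) can be used here instead

    # Linear system (I + gamma . diag(wts)) Gbar = gamma
    Gbar = np.linalg.solve(np.eye(n) + gamma * wts, gamma)

    print(Gbar[0, 0])                                      # numerical solution
    print(gamma0 / (1 + gamma0 * np.log(Omega0 / mu)))     # BCS check; agrees to grid accuracy

With a two-part separable kernel (f_2 ≠ 0 in Eq. (4.1a)), the scale dependence of Γ̄ extracted from this system no longer obeys any single autonomous differential equation, which is the content of Eqs. (4.2).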
B. Logarithmic Corrections to Scaling

In the previous subsection we have argued that in general one expects Γ̄ to approach its fixed point value logarithmically slowly. This result is consistent with Eq. (3.6d), which gives γ_c → γ*_c at the MIT, which in turn implies that Γ̄ vanishes logarithmically slowly at the MIT. Because Γ̄ couples to all physical quantities, cf. Eqs. (3.6), we conclude that in all universality classes where Cooperons are present, logarithmic corrections to scaling will appear. Note that this conclusion is independent of the spatial dimensionality and depends only on whether or not the kernel γ has a constant contribution at the MIT. We also note that this is a zero-temperature, quantum mechanical effect that might be relevant for other quantum phase transitions. For a specific example of an observable quantity, let us consider the electrical conductivity σ, which is related to the disorder by Eq. (4.5). Note that in giving Eq. (4.5) we have used units such that e²/h is unity, and we have ignored the possibility of charge renormalization. For a discussion of the latter point we refer the reader elsewhere.[25] With t the dimensionless distance from the critical point at zero temperature, and δΓ̄ = Γ̄ − Γ̄*, the conductivity satisfies the scaling equation, Eq. (4.6). Here ν is the correlation length exponent, z is the dynamical scaling exponent, and of the irrelevant variables in the scaling function F we have kept only the one that decays most slowly at the MIT, i.e. δΓ̄. At zero temperature we let b = t^{−ν}, and assume that F[1, 0, x] is an analytic function of x since it is evaluated far from the MIT. We obtain

σ(t, T = 0) = σ_0 t^s [1 + a_1/ln(1/t) + a_2/ln²(1/t) + ⋯], (4.7)

with s = ν(D − 2). Here σ_0 is an unknown amplitude, and the a_i are unknown expansion coefficients. Depending on what the subleading behavior of δΓ̄(b) is, the a_i with i ≥ 2 could carry a very weak t-dependence (e.g. a_2 ∼ ln ln t). An interesting consequence of the logarithmically marginal operator Γ̄ is that the dynamical scaling exponent in Eq. (4.6) is ill defined. To see this we use that z is normally defined by the asymptotic behavior h(b → ∞) ∼ α b^{κ̄}, with α a universal constant and κ̄ = κ if δΓ̄ vanishes as a power law. However, if δΓ̄ vanishes logarithmically at the MIT, then h(b) picks up logarithmic factors, i.e., h does not scale as b^{κ̄}. This in turn implies that Eq. (4.6) should be replaced by a scaling form containing explicit logarithmic factors. We conclude that for σ(t = 0, T) the asymptotic scaling is in general determined by logarithms.
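To see how strongly such logarithmic corrections can distort the apparent exponent at accessible distances from the critical point, the following sketch evaluates the local logarithmic slope s_eff(t) = d ln σ/d ln t for a form like Eq. (4.7). The coefficients a_i are not known; the values below are assumptions chosen for illustration only.

    import numpy as np

    # sigma(t) = t**s * (1 + a1/L + a2/L**2), with L = ln(1/t);
    # s, a1, a2 below are illustrative, not fitted, values.
    s, a1, a2 = 0.7, -1.2, 0.5
    t = np.logspace(-3, -1, 200)
    L = np.log(1.0 / t)
    sigma = t**s * (1.0 + a1 / L + a2 / L**2)

    # Local slope d ln(sigma)/d ln(t): the apparent ("effective") exponent.
    s_eff = np.gradient(np.log(sigma), np.log(t))
    print(s_eff.min(), s_eff.max())   # stays well below s = 0.7 for 1e-3 < t < 1e-1

For these (assumed) coefficients the effective exponent over the experimentally accessible window lies roughly between 0.45 and 0.67, even though the asymptotic exponent is s = 0.7.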
V. EXPERIMENTS

A. Experiments in Zero Magnetic Field

It is well known that for the experimental determination of critical exponents, and for the comparison of theoretical and experimental values for these quantities, one must take into account corrections to scaling. This is so mainly because the asymptotic critical region, where corrections are negligible, is too small to be experimentally accessible. Furthermore, a reliable determination of the critical exponents usually requires experimental data that cover many decades of the control parameter.[28] For conventional phase transitions, where the control parameter is the temperature, which is relatively easy to control, these conditions can be met. In the case of the MIT the situation is much less favorable. The main reason is that changing the control parameter, i.e. the impurity concentration, usually requires the preparation of a new sample. The only known way to avoid this problem is the stress-tuning technique of Ref. 6. Also, since the transition occurs at T = 0, measurements at very low temperatures and a careful extrapolation to T = 0 are required. The application of these techniques to Si:P has led to the most accurate determination of the critical behavior of a MIT system to date.[6,29] Still, by the standards of critical phenomena experiments, the data taken are relatively far from the critical point, with t ≥ 10^{−3}, and corrections to scaling have not been considered in the data analysis. This means that the measured exponent for the conductivity cannot be identified with the asymptotic critical exponent s. Rather, the measured value must be taken to represent some effective exponent s_eff, which differs from s due to corrections to scaling. This observation offers an interesting possibility to explain the discrepancy between the measured value 0.51 for s and the theoretical bound s ≥ 2/3:[9] if the measured value represents an effective exponent, then the latter is not subject to the theoretical constraint for s. According to this hypothesis, the discrepancy between the measured value s_eff = 0.51 and the theoretical lower bound s ≥ 0.67 must then be due to the corrections to scaling. With the usual, power-law corrections, such a large discrepancy would be hard to explain. In the present case, however, where the corrections are logarithmic, it turns out that they provide a viable explanation for the observations. We can use Eq. (4.7) to reconcile experiments[6-8] near the MIT which seem to give s < 2/3 with the theoretical bound[9] s ≥ 2/3. In Fig. 1 we show experimental data, extrapolated to zero temperature, for the conductivity of Si:P. The data points were chosen as follows. For small t, rounding due to sample inhomogeneities sets a limit at t ≃ 10^{−3}.[6,29] At large t, at some point one leaves the region where scaling, even with corrections taken into account, is valid. We chose to include points up to t = 10^{−1}. Obviously, for t → 1 the concept of corrections to scaling loses its meaning, and the expansion, Eq. (4.7), in powers of 1/ln t breaks down. We assumed a standard deviation of a quarter of the symbol size for all points except for the one at the smallest t, where we assumed it to be three times as large. In order to improve the statistics with many correction terms taken into account, we augmented the thirteen data points for 10^{−3} ≤ t ≤ 10^{−1} by another twelve points obtained by linear interpolation between neighboring points. If all a_i are set equal to zero, then a best fit to the data yields s = 0.51 ± 0.05 < 2/3.[6] We now assume, somewhat arbitrarily, s = 0.7, and use Eq. (4.7) with the a_i ≠ 0. We have used a standard χ² fitting routine with singular value decomposition[30] to optimize the values of the a_i. The dotted, dashed, and full lines, respectively, in Fig. 1 represent the best fits obtained with one, two, and three logarithmic correction terms. These fits are of significantly higher quality than a straight-line fit optimizing s. More than three correction terms did not lead to further improvements in the fit quality. While the value of s was chosen arbitrarily, this demonstrates that this experiment is certainly consistent with a lower bound s ≥ 2/3 once the logarithmic corrections to scaling are taken into account. We have also tried to optimize s by choosing different values for s and comparing the quality of the resulting fits with a fixed number of coefficients a_i taken into account (letting s float together with the a_i proved unstable). The result was a very shallow minimum in the fit quality around s = 0.7. Relatively large (±0.15) fluctuations in the best value of s were observed when large-t data points were successively eliminated. From our fitting procedure for this experiment we estimate s = 0.70^{+0.20}_{−0.03}, where the lower bound is set by the theoretical bound rather than by the fit.
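For fixed s, the fit described above is linear in σ_0 and the products σ_0 a_i, so it can be solved directly by an SVD-based least-squares routine. A minimal sketch on synthetic data (the actual Si:P data of Fig. 1 are not reproduced here; the numbers below are placeholders):

    import numpy as np

    # Synthetic stand-in for the zero-temperature conductivity data of Fig. 1.
    rng = np.random.default_rng(0)
    t = np.logspace(-3, -1, 25)
    L = np.log(1.0 / t)
    sigma = t**0.7 * (1 - 1.2 / L + 0.5 / L**2)
    sigma *= 1 + 0.01 * rng.standard_normal(t.size)          # measurement noise

    # Eq. (4.7) at fixed s: sigma/t**s = sigma0 + sum_i (sigma0*a_i)/L**i.
    s, m = 0.7, 3                                            # m = number of log corrections
    A = np.column_stack([L**(-i) for i in range(m + 1)])     # design matrix
    coef, *_ = np.linalg.lstsq(A, sigma / t**s, rcond=None)  # SVD-based least squares
    sigma0, a = coef[0], coef[1:] / coef[0]
    print(sigma0, a)   # recovers the input amplitude and correction coefficients

Weighting by the quoted standard deviations, as done in the actual fit, amounts to rescaling the rows of the design matrix and of the data vector accordingly.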
B. Experiments in a Magnetic Field

While this success in fitting the Si:P data is encouraging, one might object that the model invoked contains an infinite set of unknown parameters, namely the a_i in Eq. (4.7), and that therefore the quality of the fit obtained is of no significance. Also, since the value of the asymptotic exponent in 3-D is not known, no quantitative statements can be made. It is therefore important to see if the model can predict any qualitative features that are independent of the actual value of s, and whether or not such features are observed. There are obvious features predicted by our model which follow from the qualitative magnetic field dependence of the Cooperon. The first one is a strong magnetic field dependence of the effective exponent s_eff. If the logarithmic corrections to scaling are caused by the Cooper propagator, which acquires a mass in a magnetic field, then any nonzero field must act to destroy the logarithms. It thus follows from the model that Si:P, or any material, in a magnetic field must show a value of s_eff that is larger than 2/3. Any observation of an s_eff smaller than 2/3 in a magnetic field would rule out the logarithmic corrections to scaling as the source of the anomalously small value of s_eff. The second feature concerns the temperature dependence of the conductivity. It is well known that those materials which show an anomalously small s_eff also show a change of sign of the temperature derivative dσ/dT of the conductivity as the transition is approached.[7,31] Within the RG description of the MIT this phenomenon can be explained by a change of sign of the g²-term in the g-flow equation, Eq. (3.6a) or (3.8a). Since it is observed in Si:B, for which Eq. (3.8a) is the relevant flow equation, as well as in Si:P, the change of sign cannot be associated with the scaling behavior of the triplet interaction constant, but must be due to Γ̄. Since a magnetic field suppresses the Cooperon channel, this feature should therefore disappear in a sufficiently strong magnetic field. Let us make these predictions somewhat more quantitative. The relevant magnetic field scale B_x is given by the magnetic length l_B = (ħc/eB)^{1/2} being equal to the correlation length ξ ≃ k_F^{−1} t^{−ν}. Let us assume, as we did in connection with Fig. 1, that the boundary of the critical region is given by t ≃ 0.1. With k_F ≃ 4 × 10⁶ cm^{−1} for Si:B or Si:P near the MIT, and with ν ≃ 1, we then obtain B_x ≃ 1 T. The model thus predicts that for magnetic fields exceeding about 1 T the observed effective conductivity exponent should be larger than 2/3, and the change of sign of dσ/dT observed in zero field should disappear. Both of these features have been observed in the recent experiments by Sarachik et al.[7,31] The Cooperon-induced logarithmic corrections to scaling thus provide a consistent explanation for several observed features of doped silicon which otherwise would be mysterious.
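As a numerical check of this field scale, using the standard value l_B ≈ 25.7 nm/(B[T])^{1/2} for the magnetic length: with k_F^{−1} = 2.5 nm and t^{−ν} = 10 one has ξ ≈ 25 nm, and setting l_B = ξ gives B_x = ħc/(eξ²) ≈ (25.7/25)² T ≈ 1.05 T, consistent with the estimate B_x ≃ 1 T quoted above.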
VI. CONCLUSION

In this paper we have reconsidered the disordered electron problem in the presence of Cooperons, previous treatments of which had led to conflicting results.[1,2,10] In particular we have analyzed the technical differences between Refs. 1,2, which renormalized the Cooper propagator, and Ref. 10, which renormalized the Cooper vertex function. Let us summarize the results of this analysis. (1) Crucial differences exist between the electron-electron interaction in the particle-hole channel and in the particle-particle or Cooper channel, respectively. In the particle-hole channel the conservation laws for particle number and spin enforce an essentially scalar structure of the propagators and vertex functions. As a result, vertex functions are simply scalar inverses of propagators, and standard renormalization procedures applied to either object lead to identical results. In contrast, in the Cooper channel there is no conservation law which would lead to an analogous simplification, and vertex functions and propagators are related by complicated matrix inversion procedures, cf. Sec. II. This leads to fundamentally different structures of the singularities in the two objects, which poses a problem for the renormalization, cf. Sec. III A. (2) An attempt to reduce the Cooper propagator renormalization to a single RG flow equation encounters severe structural problems, which we have discussed in Sec. III B. At the Gaussian level, the renormalization constant needed displays a BCS-like singularity at a finite scale. This leads to a renormalized vertex function which is not finite, but shows the same singularity. If one ignores this problem and proceeds to derive a flow equation for the Cooper propagator, the singularity in the Gaussian renormalization constant produces imaginary terms in the flow equation at one-loop level which resemble the renormalon singularity known in quantum field theory. This problem is nonperturbative in nature with respect to the Cooper interaction constant γ_c. This explains why it went unnoticed in Refs. 1,2, which expanded in both γ_c and the disorder g. It invalidates the simple fixed point structure found in these references, according to which the Cooper channel interaction is a conventional irrelevant operator. (3) Any renormalization scheme for the Cooper vertex function must acknowledge the fact that the object that appears in perturbation theory is the Cooper propagator Γ̄, not just the Cooper interaction amplitude γ_c. In fact, the propagator plays the role of an effective interaction amplitude, cf. Sec. III A. Due to the inversion problem mentioned in point (1) above, it is a much more complicated object than γ_c. This point was missed in Ref. 10. The procedure there was effectively equivalent to inserting the zero-loop order result for Γ̄ into the one-loop order flow equations for the other coupling constants. The RG flow equations of Ref. 10 are therefore not consistent in general. They are valid near the transition only if γ_c has a finite FP value. (4) A possible solution of these problems is to derive flow equations for all coupling constants in terms of Γ̄, and to determine the latter from an Eliashberg-type integral equation which deals with the inversion problem. This is the approach taken in Sec. IV. Solutions of the integral equation obtained for separable model kernels suggest that Γ̄ is not a simple scaling variable and does not satisfy an autonomous differential equation with universal coefficients. Plausible assumptions about the kernel of the integral equation lead to the conclusion that logarithmic corrections to scaling, as predicted in Ref. 10, should exist regardless of whether Γ̄ approaches a nonzero fixed point value, as asserted in Refs. 1,2, or vanishes logarithmically at large scales, as assumed in Ref. 10. However, this conclusion could not be made mathematically precise since the full integral equation has not been solved.
(5) Logarithmic corrections to scaling, as discussed in Sec. IV B, can explain some otherwise mysterious properties of materials that belong to universality classes with Cooperons, cf. Sec. V. The discrepancy between the asymptotic critical exponent for the conductivity and the observable effective exponent is in this case large enough to provide an explanation for the 'exponent puzzle' in Si:P. This explanation is bolstered by two predictions which have been verified by recent experiments:[31] in the presence of a sufficiently strong magnetic field the observed conductivity exponent s must satisfy the lower bound s ≥ 2/3, and the characteristic change in the temperature dependence of the conductivity observed in zero field as the transition is approached should disappear. While these results are encouraging, some serious problems remain, most notably the question of whether or not the theory is renormalizable. We recall (cf. the discussion in Sec. III A) that the pure nonlinear sigma-model is renormalizable with two renormalization constants. The proof of this makes heavy use of the symmetry properties of the model.[21] Since the interaction terms in the action break the symmetry of the sigma-model, the proof of Ref. 21 is no longer applicable for the interacting model, and renormalizability has never been proven. It is conceivable, however, that the conservation laws in the particle-hole channel are still sufficient to ensure renormalizability for the model without Cooperons. This would be consistent with the considerable body of perturbative evidence of renormalizability for this case. For the case with Cooperons, the absence of a conservation law and the unusual structure encountered in attempts to apply renormalization techniques make one much more doubtful. It would be useful to further study this question. For instance, it would be interesting to see whether the conservation laws are indeed sufficient to ensure renormalizability. It would also be useful to have an alternative approach to the subject which avoids the generalized nonlinear sigma-model with its many technical problems. One could, for instance, try to work directly with the Grassmannian action, as in recent work on interacting fermions without disorder.[32] One could also imagine an order parameter description of the MIT, using the Q-field theory, but not the sigma-model. Work in these directions is underway and will be presented in future publications.
Sparse recovery in convex hulls via entropy penalization

Let (X, Y) be a random couple in S × T with unknown distribution P, and let (X_1, Y_1), …, (X_n, Y_n) be i.i.d. copies of (X, Y). Denote by P_n the empirical distribution of (X_1, Y_1), …, (X_n, Y_n). Let h_1, …, h_N : S → [−1, 1] be a dictionary that consists of N functions. For λ ∈ R^N, denote f_λ := Σ_{j=1}^N λ_j h_j. Let ℓ : T × R → R be a given loss function and suppose it is convex with respect to the second variable. Let (ℓ • f)(x, y) := ℓ(y; f(x)). Finally, let Λ ⊂ R^N be the simplex of all probability distributions on {1, …, N}. Consider the following penalized empirical risk minimization problem

λ̂^ε := argmin_{λ∈Λ} [P_n(ℓ • f_λ) + ε Σ_{j=1}^N λ_j log λ_j]

along with its distribution dependent version

λ^ε := argmin_{λ∈Λ} [P(ℓ • f_λ) + ε Σ_{j=1}^N λ_j log λ_j],

where ε ≥ 0 is a regularization parameter. It is proved that the 'approximate sparsity' of λ^ε implies the 'approximate sparsity' of λ̂^ε, and the impact of 'sparsity' on bounding the excess risk of the empirical solution is explored. Similar results are also discussed in the case of entropy penalized density estimation.

1. Introduction. Let S and T be measurable spaces with σ-algebras S and T, respectively, and let (X, Y) be a random couple in S × T. The distribution of (X, Y) will be denoted by P and the distribution of X by Π. The training data (X_1, Y_1), …, (X_n, Y_n) consists of n i.i.d. copies of (X, Y) (the distribution P is not known and is to be estimated based on the data). We will denote by P_n the empirical distribution of the data and will write in what follows Pg = E g(X, Y) and P_n g = n^{−1} Σ_{i=1}^n g(X_i, Y_i) for functions g on S × T (as well as for functions on S, since they can also be viewed as functions on S × T). We will be interested in a class of prediction problems in which Y is to be predicted based on an observation of X. Prediction rules will be based on the training data (X_1, Y_1), …, (X_n, Y_n). Let ℓ : T × R → R_+ be a loss function. It will be assumed in what follows that, for all y ∈ T, ℓ(y, ·) is convex. For a function f : S → R, let (ℓ • f)(x, y) := ℓ(y, f(x)). Then the quantity P(ℓ • f) is the (true) risk of the prediction rule f and P_n(ℓ • f) is the corresponding empirical risk.
The excess risk of f is defined as E(f) := P(ℓ • f) − inf_g P(ℓ • g), where the infimum is taken over all measurable functions, and it is assumed for simplicity that it is attained at f_* ∈ L_2(Π) (moreover, it will be assumed in what follows that f_* is uniformly bounded by a constant M). Let H := {h_1, …, h_N} be a given finite class of measurable functions from S into [−1, 1], called a dictionary (of course, it can be assumed instead that the functions in the dictionary are uniformly bounded by an arbitrary constant; the only change will be in the constants in the results below). The dictionary can be an orthonormal system of functions, a union of several orthonormal systems suitable for approximation of the target function f_*, a base class of a boosting type algorithm, a set of pretrained estimators in an aggregation problem, etc. Let P(H) be the set of all probability measures on H. For λ ∈ P(H), denote λ_j := λ({h_j}) and f_λ := Σ_{j=1}^N λ_j h_j. Denote Λ := {(λ_1, …, λ_N) : λ_j ≥ 0, j = 1, …, N, Σ_{j=1}^N λ_j = 1}. We will identify (whenever it is convenient) probability measures λ ∈ P(H) with vectors (λ_1, …, λ_N) from the simplex Λ. We will write (with a little abuse of notation) λ = (λ_1, …, λ_N). Clearly, the function f_λ : S → [−1, 1] is a convex combination (a mixture) of functions from the dictionary, and the set conv(H) := {f_λ : λ ∈ P(H)} is the convex hull of H. As always, define the entropy of λ as H(λ) := −Σ_{j=1}^N λ_j log λ_j. The Kullback-Leibler divergence between λ, ν ∈ Λ is defined as K(λ, ν) := Σ_{j=1}^N λ_j log(λ_j/ν_j). The following penalized empirical risk minimization problem will be studied:

(1.1) λ̂^ε := argmin_{λ∈Λ} [P_n(ℓ • f_λ) − ε H(λ)],

where ε ≥ 0 is a regularization parameter. Since, for all y, ℓ(y, ·) is convex, the empirical risk P_n(ℓ • f_λ) is a convex function of λ. Since also the set P(H) is convex (it can be identified with the simplex Λ) and the function λ → −H(λ) is convex, this makes problem (1.1) a convex optimization problem. It is natural to compare this problem with its distribution dependent version

(1.2) λ^ε := argmin_{λ∈Λ} [P(ℓ • f_λ) − ε H(λ)].
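Problem (1.1) is a smooth convex minimization over the simplex, for which entropic mirror descent (exponentiated gradient) is a natural algorithm, since the entropy penalty is the Bregman potential of the mirror map itself. Below is a minimal sketch for the quadratic loss ℓ(y, u) = (y − u)²; the step size, iteration count, and toy data are illustrative assumptions, and this is one possible solver rather than a method prescribed in the paper.

    import numpy as np

    def entropy_penalized_erm(H_x, y, eps, n_iter=2000, eta=0.1):
        """Minimize P_n(l . f_lambda) + eps * sum_j lambda_j log(lambda_j) over the
        simplex by exponentiated gradient; quadratic loss l(y,u) = (y-u)**2 assumed.
        H_x: (n, N) array with H_x[i, j] = h_j(X_i)."""
        n, N = H_x.shape
        lam = np.full(N, 1.0 / N)              # uniform start (the penalty's minimizer)
        for _ in range(n_iter):
            resid = H_x @ lam - y              # f_lambda(X_i) - Y_i
            grad = 2.0 * H_x.T @ resid / n + eps * (1.0 + np.log(lam))
            lam = lam * np.exp(-eta * grad)    # multiplicative (mirror) update
            lam /= lam.sum()                   # renormalize onto the simplex
        return lam

    # Toy usage: N = 50 bounded dictionary functions evaluated at n = 200 points.
    rng = np.random.default_rng(1)
    H_x = rng.uniform(-1, 1, size=(200, 50))
    y = H_x[:, :3] @ np.array([0.5, 0.3, 0.2]) + 0.05 * rng.standard_normal(200)
    lam_hat = entropy_penalized_erm(H_x, y, eps=0.05)
    print(np.sort(lam_hat)[-5:])               # most of the mass sits on a few atoms

The multiplicative update keeps λ strictly inside the simplex, so the term log λ_j in the gradient of the penalty is always well defined.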
In the recent literature, there has been considerable attention to the problem of sparse recovery in the linear span of a given dictionary using penalized empirical risk minimization with ℓ_1-penalty (this method is called LASSO in the literature on regression), and the current paper is close to this line of work. It has become clear that sparse recovery is possible not always, but only under some geometric assumptions on the dictionary. These assumptions are often described in terms of the properties of the Gram matrix of the dictionary, which in the case of random design models is the matrix H := (⟨h_i, h_j⟩_{L_2(Π)})_{i,j=1}^N, and they take the form of various conditions on the entries of this matrix ('coherence coefficients'), or on its submatrices (in the spirit of 'uniform uncertainty principle' or 'restricted isometry' conditions). The essence of these assumptions is to keep the dictionary not too far from being orthonormal in L_2(Π), which in some sense is an ideal case for sparse recovery [see, e.g., Donoho (2006), Candes and Tao (2007), Rudelson and Vershynin (2005), Mendelson, Pajor and Tomczak-Jaegermann (2007), Bunea, Tsybakov and Wegkamp (2007a), van de Geer (2008), Koltchinskii (2008a, 2008b) and Bickel, Ritov and Tsybakov (2008), among many other papers that study both the random design and the fixed design problems]. The idea to use the entropy for complexity regularization is not new in information theory and statistics (recall, e.g., the principle of maximum entropy). In particular, it has been studied recently in connection with the problem of aggregation of statistical estimators by exponential weighting, and also in a large number of papers on the PAC-Bayesian approach in learning theory [see, e.g., McAllester (1999), Catoni (2004), Audibert (2004), Zhang (2001, 2006a, 2006b) and references therein]. However, we are not aware of any attempt to relate this penalization technique to sparse recovery problems, with the exception of a very recent paper by Dalalyan and Tsybakov (2007), where it is done in the context of aggregation with exponential weighting. Moreover, at least at first glance, the idea of using this type of penalization to achieve sparse recovery seems counterintuitive, since the penalty −H(λ) attains its minimum at the uniform distribution λ_j = N^{−1}, j = 1, …, N, and, from this point of view, it penalizes for 'sparsity' rather than for 'nonsparsity' [in fact, solutions of (1.1), (1.2) can be only 'approximately sparse']. In this paper we follow the approach of Koltchinskii (2005, 2008a), where the problem was studied in the case of ℓ_p-penalization with 1 ≤ p ≤ 1 + c/log N. This approach is based on a separate study of the random error |E(f_{λ̂^ε}) − E(f_{λ^ε})| and of the approximation error E(f_{λ^ε}). It happens that these are two different problems, with not entirely the same geometric parameters responsible for the size of each of the two errors, and the geometry of the problem is more subtle than in the standard approach based on conditions on the Gram matrix H. In many problems in statistics and learning theory the distribution of the design variable is completely unknown, and it is unrealistic to make any restrictive assumptions on its Gram matrix. For this reason, it is desirable to study in a more precise way how the excess risk of the solution of (1.1) depends on geometric parameters of the problem. One of our goals is to show that if λ^ε is 'approximately sparse' (i.e., this measure is almost concentrated on a small set of atoms), then a similar property is enjoyed by λ̂^ε. These sparsity bounds provide a way to control ‖f_{λ̂^ε} − f_{λ^ε}‖_{L_2(Π)} and K(λ^ε, λ̂^ε) (see Theorems 1 and 2). For instance, we show that, for any set J ⊂ {1, …, N} with card(J) = d such that Σ_{j∉J} λ^ε_j is small, a bound of this type holds with high probability, with an error term controlled by Σ_{j∉J} λ^ε_j and by (d + A log N)/(nε). This allows us also to bound the 'random error' |E(f_{λ̂^ε}) − E(f_{λ^ε})| in terms of the 'approximate sparsity' of the problem (Theorem 3). Some further geometric parameters (such as the 'alignment coefficient' introduced in the next section) provide a way to control the 'approximation error' E(f_{λ^ε}) (see Theorem 4). Namely, suppose there exists a vector λ ∈ Λ with the following properties: (i) λ is 'sparse' [i.e., its support J = supp(λ) is a set of relatively small cardinality]; (ii) the excess risk E(f_λ) is small; (iii) λ is 'aligned' nicely with the dictionary (the precise definitions are given in the next section). Then λ^ε is approximately sparse and its excess risk E(f_{λ^ε}) is small (more precisely, its size is controlled by the sparsity of λ and its 'alignment' with the dictionary). These results ultimately yield oracle inequalities on the excess risk E(f_{λ̂^ε}), showing that this estimation method provides a certain degree of adaptation to unknown 'sparsity' of the problem (see Corollary 1). The density estimation problem can also be studied rather naturally in a similar framework. In this problem, the data consists of n independent identically distributed observations X_1, …, X_n in S with common distribution P.
Suppose that P has density f_* with respect to a σ-finite measure µ on (S, A). We will assume that f_* is uniformly bounded by a constant M. Let h_1, …, h_N be a large dictionary of probability densities with respect to µ, uniformly bounded by 1 (as in the case of the prediction problem discussed above, one can assume that these densities are uniformly bounded by an arbitrary constant, resulting in a proper change of constants in the theorems). The goal is to construct an estimator of f_* in the class of mixtures {f_λ : λ ∈ Λ}. The underlying assumption is that there exists a 'sparse' mixture that approximates the unknown density reasonably well. One can use an estimator based on minimizing the entropy penalized empirical risk with respect to the quadratic loss,

(1.3) λ̂^ε := argmin_{λ∈Λ} [‖f_λ‖²_{L_2(µ)} − 2 P_n f_λ − ε H(λ)],

which is again a convex minimization problem. The corresponding penalized true risk minimization problem is

(1.4) λ^ε := argmin_{λ∈Λ} [‖f_λ‖²_{L_2(µ)} − 2 P f_λ − ε H(λ)].

Recently, Bunea, Tsybakov and Wegkamp (2007b) studied a similar density estimation problem with ℓ_1-penalized empirical risk with respect to the quadratic loss (and for linear aggregation instead of convex aggregation). As in the case of prediction problems (regression, classification), we also obtain bounds characterizing the approximate sparsity of the empirical solution in terms of the approximate sparsity of the true solution, and oracle inequalities for ‖f_{λ̂^ε} − f_*‖²_{L_2(µ)} (which is equivalent to considering the excess risk in this problem; see Theorems 5-7, Corollary 2). Moreover, denote τ(R) := inf{ℓ″_u(y, u)/2 : y ∈ T, |u| ≤ R}. It will be assumed that τ(R) > 0 for all R > 0. Without loss of generality, we also assume that τ(R) ≤ 1, R > 0 (otherwise, it can be replaced by a lower bound). There are many important examples of loss functions satisfying these assumptions, most notably the quadratic loss ℓ(y, u) := (y − u)² in the case when T ⊂ R is a bounded set. In this case, τ = 1. In regression problems with a bounded response variable, one can also consider more general loss functions of the form ℓ(y, u) := φ(y − u), where φ is an even nonnegative convex twice continuously differentiable function with φ″ uniformly bounded in R, φ(0) = 0 and φ″(u) > 0, u ∈ R. In the binary classification setting (i.e., when T = {−1, 1}), one can choose the loss ℓ(y, u) = φ(yu), with φ a nonnegative decreasing convex twice continuously differentiable function such that φ″ is uniformly bounded in R and φ″(u) > 0, u ∈ R. The loss function φ(u) = log_2(1 + e^{−u}) (often called the logit loss) is a typical example. Note that the condition that the second derivative ℓ″_u is uniformly bounded in T × R can be replaced by its uniform boundedness in a bounded region of T × R containing the relevant values of the second argument. The constants in the theorems below will then depend on the sup-norm of the second derivative (and, as a consequence, on M); otherwise, the results will be the same. This allows one to cover several other popular choices of the loss function, such as the exponential loss ℓ(y, u) := e^{−yu} in binary classification. We will also assume in what follows that N ≥ (log n)^γ for some γ > 0 (this is needed only to avoid additional terms of the order (log log n)/n in several inequalities).
Sparsity bounds. Our first goal is to provide upper bounds on ‖f_{λ̂^ε} − f_{λ^ε}‖_{L_2(Π)} and K(λ^ε, λ̂^ε) for an arbitrary subset J ⊂ {1, …, N}, in terms of the cardinality of this set, d = card(J), and the measure Σ_{j∉J} λ^ε_j. The idea is to show that if λ^ε is approximately sparse, that is, there exists a small set J such that Σ_{j∉J} λ^ε_j is also small, then λ̂^ε is approximately sparse, too, with a high probability, and the L_2-error of approximation of f_{λ^ε} by f_{λ̂^ε} as well as the Kullback-Leibler error of approximation of λ^ε by λ̂^ε are small. The first result in this direction is the following theorem.

Theorem 1. There exist constants D > 0 and C > 0 depending only on ℓ such that, for all J ⊂ {1, …, N} with d := d(J) = card(J), for all A ≥ 1 and for all ε satisfying condition (2.2), the following bounds hold with probability at least 1 − N^{−A} (the displayed bounds control ‖f_{λ̂^ε} − f_{λ^ε}‖²_{L_2(Π)}, K(λ^ε, λ̂^ε) and Σ_{j∉J} λ̂^ε_j in terms of Σ_{j∉J} λ^ε_j and (d + A log N)/(nε)).

Note that these bounds hold without any conditions on the dictionary (except the assumption that the functions h_j are uniformly bounded). However, the result is true only for ε ≥ D((d + A log N)/n)^{1/2}. Since it is not known for which set J the quantity Σ_{j∉J} λ^ε_j is small, it is also not known for which d the condition (2.2) is to be satisfied. In other words, the regularization parameter ε in this result depends on the unknown degree of sparsity of the problem. In the next theorem, it will be assumed only that ε ≥ D((A log N)/n)^{1/2}, but the bounds will depend more on the geometric properties of the dictionary. On the other hand, the error will be controlled not by d = card(J), but rather by the dimension of a linear space L that provides a good approximation of the functions {h_j : j ∈ J}. This dimension can be smaller than card(J), which makes the bound more precise. Given a subspace L of L_2(Π), define U(L) := sup{‖f‖_∞ : f ∈ L, ‖f‖_{L_2(Π)} ≤ 1}. It is easy to check that, for any L_2(Π)-orthonormal basis φ_1, …, φ_d of L, U(L) = ‖(Σ_{j=1}^d φ_j²)^{1/2}‖_∞. In what follows, P_L denotes the orthogonal projector onto L and L^⊥ denotes the orthogonal complement of L. We will be interested in subspaces L for which dim(L) and U(L) are not very large and, at the same time, the functions {h_j : j ∈ J} in the 'relevant' part of the dictionary can be approximated well by functions from L, in the sense that the quantity max_{j∈J} ‖P_{L^⊥} h_j‖_{L_2(Π)} is relatively small.

Theorem 2. Suppose that ε ≥ D((A log N)/n)^{1/2}. Then, for all subspaces L of L_2(Π) with d := dim(L) and for all A ≥ 1, the bounds (2.4)-(2.6) hold with probability at least 1 − N^{−A} and with a constant C > 0 depending only on ℓ.

If, for some J, the mass Σ_{j∉J} λ^ε_j is negligible and, for some L with U(L) ≤ √d, h_j ∈ L for j ∈ J, the bound (2.6) simplifies. It means that the fact that the dictionary is not orthogonal, and even is not linearly independent, might actually help to make the random errors ‖f_{λ̂^ε} − f_{λ^ε}‖²_{L_2(Π)} and K(λ^ε, λ̂^ε) small: their size is controlled in this case by the dimension d of the linear span L of the 'relevant part' of the dictionary {h_j : j ∈ J}, and d can be much smaller than card(J).

Random error bounds. The following result is a simple corollary of Theorems 1, 2 and the properties of the loss function. Denote by L the linear span of the dictionary {h_1, …, h_N} and let P_L be the orthogonal projector on L ⊂ L_2(P).

Theorem 3. Under the conditions of Theorem 1, a bound on the random error holds with probability at least 1 − N^{−A} and with d = card(J); similarly, under the conditions of Theorem 2, with probability at least 1 − N^{−A} and with d = dim(L).

Recall that f_* denotes a function that minimizes the risk P(ℓ • f), and it was assumed that f_* is uniformly bounded by a constant M. Clearly, by the necessary conditions of minimum, ℓ′ • f_* is orthogonal to L. Note that for any function f̄ uniformly bounded by M and such that ℓ′ • f̄ ∈ L^⊥ (in particular, for f_*), a corresponding bound holds, where we used the fact that ℓ′ is Lipschitz with respect to the second variable. Under the conditions on the loss function, for all λ ∈ Λ,

c ‖f_λ − f_*‖²_{L_2(Π)} ≤ E(f_λ) ≤ C ‖f_λ − f_*‖²_{L_2(Π)},

which easily follows from a version of the Taylor expansion for the risk.
To bound the excess risk E(f_{λ̂^ε}), one has to solve two different problems: bounding the random error |E(f_{λ̂^ε}) − E(f_{λ^ε})| and bounding the approximation error E(f_{λ^ε}). Using the above facts, one can easily get from Theorem 3 the following bounds on the random error: under the conditions of Theorem 1, with probability at least 1 − N^{−A} and with d = card(J), the bound (2.10); and, under the conditions of Theorem 2, with probability at least 1 − N^{−A} and with d = dim(L), the bound (2.11). This reduces the problem to bounding only the approximation error.

Approximation error bounds, alignment and oracle inequalities. To consider the approximation error we need several definitions. For λ ∈ Λ, denote by T_Λ(λ) the tangent cone of the convex set Λ at the point λ. Recall that H denotes the Gram matrix of the dictionary in the space L_2(Π). Whenever it is convenient, H will be viewed as a linear transformation of R^N. For a vector w ∈ R^N, let

a_H(Λ, λ, w) := sup{⟨w, u⟩_{ℓ_2} : u ∈ T_Λ(λ), ‖f_u‖_{L_2(Π)} = 1}.

We will call this quantity the alignment coefficient of the vector w, the matrix H and the convex set Λ at the point λ ∈ Λ. Note that ‖f_u‖²_{L_2(Π)} = ⟨Hu, u⟩_{ℓ_2} = ⟨H^{1/2}u, H^{1/2}u⟩_{ℓ_2}. Therefore, the alignment coefficient can be bounded by dropping the tangent cone constraint. If H is nonsingular, we can further write a_H(Λ, λ, w) ≤ ‖H^{−1/2}w‖_{ℓ_2}. Even when H is singular, we still have a_H(Λ, λ, w) ≤ ‖H^{−1/2}w‖_{ℓ_2}, where for w ∈ Im(H^{1/2}) = H^{1/2}R^N one defines ‖H^{−1/2}w‖_{ℓ_2} in the quotient sense [which means factorization of the space with respect to Ker(H^{1/2})], and for w ∉ Im(H^{1/2}) the norm ‖H^{−1/2}w‖_{ℓ_2} becomes infinite. It is also easy to see that if J = supp(w), then the alignment coefficient admits a bound in terms of d(J) := card(J), the minimal eigenvalue κ(J) of the Gram matrix of {h_j : j ∈ J}, and a measure ρ(J) of linear dependence between L_J, the linear span of {h_j : j ∈ J}, and the span of the rest of the dictionary [see Koltchinskii (2008a), the proof of Proposition 1, for a similar argument]. Measures of linear dependence similar to ρ(J) are known in multivariate statistical analysis as 'canonical correlations.' These upper bounds show that the size of the alignment coefficient is controlled by the 'sparsity' of the vector w as well as by some characteristics of the dictionary (or its Gram matrix H). In particular, for orthonormal dictionaries and for dictionaries that are close enough to being orthonormal [so that κ(J) is bounded away from 0 and ρ²(J) is bounded away from 1], the alignment coefficient is bounded from above by a quantity of the order ‖w‖_{ℓ_∞} √(d(J)). However, the alignment coefficient can be much smaller than this upper bound, and it reflects in a more delicate way rather complicated geometric relationships between the vector w, the dictionary and the convex set Λ. Even the quantity ‖H^{−1/2}w‖²_{ℓ_2}, which is a rough upper bound on the alignment coefficient that does not take into account the geometry of the set Λ, depends not only on the sparsity of w, but also on how this vector is aligned with the eigenspaces of H. For instance, if w belongs to the linear span of the eigenspaces that correspond only to the eigenvalues of H that are not too small, ‖H^{−1/2}w‖²_{ℓ_2} becomes of the order ‖w‖²_{ℓ_2}. Note also that the geometry of the problem crucially depends on the unknown distribution Π of the design variable [since one has to deal with the Hilbert space L_2(Π)]. It happens that both the approximation error E(f_{λ^ε}) and the 'approximate sparsity' of λ^ε can be controlled by the alignment coefficient of the vector s_N(λ) for an arbitrary λ ∈ Λ. Denote α_N(λ) := a_H(Λ, λ, s_N(λ)).
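As a quick numerical illustration of the rough bound a_H(Λ, λ, w) ≤ ‖H^{−1/2}w‖_{ℓ_2}, the sketch below computes this norm through the eigendecomposition of an empirical Gram matrix, implementing the factorization with respect to Ker(H^{1/2}) by treating near-zero eigenvalues as exact zeros. The dictionary and the sparse vector w are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    n_pts, N = 1000, 20
    X = rng.uniform(-1, 1, size=(n_pts, N))   # values h_j(X_i) of a toy dictionary
    H = X.T @ X / n_pts                       # empirical Gram matrix in L2(Pi)

    w = np.zeros(N)
    w[:3] = 1.0                               # sparse vector, supp(w) = {1, 2, 3}

    evals, evecs = np.linalg.eigh(H)
    coef = evecs.T @ w                        # coordinates of w in the eigenbasis
    good = evals > 1e-12 * evals.max()        # numerical kernel of H^{1/2}
    if np.any(coef[~good] ** 2 > 1e-20):
        print("w is not in Im(H^{1/2}); the bound is infinite")
    else:
        print(np.sqrt(np.sum(coef[good] ** 2 / evals[good])))   # ||H^{-1/2} w||

Consistently with the discussion above, the bound is small when w is supported on well-conditioned directions of H and blows up as w acquires components along eigenvectors with small eigenvalues.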
Theorem 4. There exists a constant C > 0 that depends only on ℓ and on the constant M (for which ‖f_*‖_∞ ≤ M) such that, for all ε > 0 and all λ ∈ Λ, an approximation error bound holds.

Theorem 4 and either of the bounds on the random error, (2.10) and (2.11), immediately imply oracle inequalities for the excess risk E(f_{λ̂^ε}). For instance, Corollary 1 follows from (2.11).

2.5. Density estimation and sparse mixtures recovery. In the case of density estimation based on entropy penalized empirical risk minimization with quadratic loss, as in (1.3), the results are rather similar to those described above for prediction problems (regression and classification), and their proofs are quite similar, too. Recall the notation at the end of Section 1, and the assumptions that the unknown density f_* of the distribution P is uniformly bounded by M and that the densities h_j in the dictionary are uniformly bounded by 1. The following results hold.

Theorem 5. Under conditions analogous to those of Theorem 1, the corresponding bounds hold with probability at least 1 − N^{−A}.

Theorem 6. Suppose that ε ≥ D((A log N)/n)^{1/2} with a large enough numerical constant D > 0. For all J ⊂ {1, …, N}, for all subspaces L of L_2(P) with d := dim(L) and for all A ≥ 1, the bounds, including (2.14), hold with probability at least 1 − N^{−A} and with a numerical constant C.

In the case of density estimation, it makes sense to redefine the alignment coefficient in terms of the measure µ, i.e., with the Gram matrix of the dictionary taken in L_2(µ). The approximation error bound then becomes as follows.

Theorem 7. There exists a numerical constant C > 0 such that, for all ε > 0 and all λ ∈ Λ, the bound (2.16) holds.

Finally, this results in the following oracle inequality.

Corollary 2. Under the conditions of Theorem 6, for all λ ∈ Λ with J = supp(λ) and for all subspaces L of L_2(Π) with d := dim(L), the corresponding bound holds with probability at least 1 − N^{−A} and with a numerical constant C.

3. Proofs. The proofs of Theorems 1 and 2 are quite similar. We give only the proof of Theorem 2.

Proof of Theorem 2. The following necessary conditions of minima in the minimization problems defining λ^ε and λ̂^ε will be used to derive sparsity bounds: Eqs. (3.1) and (3.2). The inequality (3.1) holds because the directional derivative of the penalized risk function (which is convex) at the point of its minimum λ^ε is nonnegative in the direction of any other point of the convex set Λ, in this case in the direction of λ̂^ε. The inequality (3.2) is based on similar considerations in the case of the penalized empirical risk (note that in this case the minimum of the convex function is at λ̂^ε and we are differentiating in the direction to the minimal point, not from the minimal point). Subtracting (3.1) from (3.2) and replacing P by P_n in (3.2), we get (3.3), so bound (3.3) can also be written as (3.4). To extract from this bound some information about the approximate sparsity of λ̂^ε, note the corresponding identity; this implies a bound on Σ_{j∉J} λ̂^ε_j for any J ⊂ {1, …, N}. Therefore, taking into account (3.4), the desired sparsity bound follows. Since the second derivative of the loss function is bounded away from 0, we also have a lower bound with constant c = τ(1) (note that ‖f_{λ^ε}‖_∞ ≤ 1 and ‖f_{λ̂^ε}‖_∞ ≤ 1). In view of (3.4), this implies the bounds (3.8) and (3.9) used below. The following two lemmas are somewhat akin to Lemma 5 in Koltchinskii (2008a). We will give below the proof of Lemma 2, which is needed to complete our proof of Theorem 2. Lemma 1 can be used in a similar way in the proof of Theorem 1, which we skip.
In the case when $\delta < n^{-1/2}$ or $\Delta < n^{-1/2}$, one can replace $\delta$ or $\Delta$, respectively, by $n^{-1/2}$ in the expression for $\beta_n(\delta, \Delta)$ and still have bounds (3.14) and (3.15). The proof below goes through in this case, even with some simplifications. In the main case, when $\delta \ge n^{-1/2}$ and $\Delta \ge n^{-1/2}$, it remains to solve the inequalities (3.14) and (3.15) to complete the proof. To this end, note that (3.15) can be rewritten (with a proper adjustment of the constant $C$) as a bound $\varepsilon\Delta \le C\Delta\sqrt{\frac{A\log N}{n}}$ together with a maximum of further terms. Under the assumption that the constant $D$ in the condition (2.3) on $\varepsilon$ is larger than 1, the term $\sum_{j \notin J}\lambda^\varepsilon_j\sqrt{\frac{A\log N}{n}}$ in the maximum can be dropped, since it is smaller than the first term $\varepsilon\sum_{j \notin J}\lambda^\varepsilon_j$. If $D \ge 2C$, the bound can be further rewritten (again with an adjustment of $C$). To get a bound on $\Delta$, it is enough to solve the inequality separately for each term in the maximum and take the maximum of the solutions. This yields a bound $\Delta(\delta)$. Under the assumption (2.3) on $\varepsilon$ (with $D \ge 1$), this can be further simplified. Let us now substitute $\Delta(\delta)$ instead of $\Delta$ in (3.14) [note that $\beta_n(\delta, \Delta)$ is nondecreasing in $\Delta$]. This easily gives a bound on $\delta$ in which the second and the third terms in the maximum can be dropped again, since $\frac{1}{\varepsilon}\sqrt{\frac{A\log N}{n}} \le 1$. Thus, we arrive at the bound (3.16) on $\delta^2$. This can be substituted back into the expression for $\Delta(\delta)$, yielding a bound on $\Delta$ which, using the inequality $ab \le (a^2 + b^2)/2$ and the condition $\frac{1}{\varepsilon}\sqrt{\frac{A\log N}{n}} \le 1$, can be simplified and rewritten as (3.17), with a proper change of $C$ (still depending only on $\ell$). Now we can substitute (3.16) and (3.17) in the expression for $\beta_n(\delta, \Delta)$. We skip the details, which are simple and similar to the bounds earlier in the proof. In view of Lemma 2, this gives a bound (3.18) on $\alpha_n(\delta, \Delta)$ that holds, for $\delta, \Delta$ defined by (3.13), with probability at least $1 - N^{-A}$. Together with (3.9), this yields a bound which is equivalent to (2.6). Bound (2.4) follows immediately from bound (3.17) (under the assumption on $\varepsilon$, the term $\frac{A\log N}{n}$ is smaller than $\frac{d + A\log N}{n\varepsilon}$, so it can be discarded), and bound (2.5) follows from (3.7) and (3.18), which completes the proof.

Proof of Lemma 2. The proof relies on Talagrand's concentration inequality for empirical processes as well as on Rademacher symmetrization and contraction inequalities [see, e.g., Koltchinskii (2006) or Massart (2007) for their formulations in a form convenient for our purposes]. By Talagrand's concentration inequality, with probability at least $1 - e^{-t}$,
$$\alpha_n(\delta; \Delta) \le 2\,\mathbb{E}\alpha_n(\delta; \Delta) + C\delta\sqrt{\frac{t}{n}} + \frac{Ct}{n} \qquad (3.19)$$
and the expectation can then be bounded by the symmetrization inequality. Since
$$\ell'(f_\lambda(\cdot))\,(f_\lambda(\cdot) - f_{\lambda^\varepsilon}(\cdot)) = \ell'(f_{\lambda^\varepsilon}(\cdot) + u)\,u\Big|_{u = f_\lambda(\cdot) - f_{\lambda^\varepsilon}(\cdot)}$$
and the function is Lipschitz with a constant $C$ depending only on $\ell$, the application of the Rademacher contraction inequality yields the corresponding bound. Now we use the representation (3.21). Clearly, for all $\lambda \in \Lambda(\delta, \Delta)$, the corresponding bound holds [see, e.g., Koltchinskii (2006), Section 2, Example 1].
On the other hand, since $\lambda, \lambda^\varepsilon \in \Lambda$, we have $\sum_{j \in J}|\lambda_j - \lambda^\varepsilon_j| \le 2$, and the corresponding bound follows. We now proceed with a rather well-known approach to bounding the sup-norm of Rademacher sums. We use the symmetrization inequality together with the Rademacher contraction inequality to get a bound which can then be solved, giving the resulting estimate. Quite similarly, we argue for all $\lambda \in \Lambda(\delta, \Delta)$, and, repeating what we have done in the case of $j \in J$, we get the analogous bound for $j \notin J$. It remains to recall the representation (3.21) and the bound (3.20) to obtain a bound on $\mathbb{E}\alpha_n(\delta, \Delta)$, which can be bounded further and plugged into (3.19) to get that, with probability at least $1 - e^{-t}$, $\alpha_n(\delta, \Delta) \le \beta_n(\delta, \Delta, t)$ with a constant $C > 0$ depending only on $\ell$.

Proof of Theorem 3. The proof easily follows from the fact that, under the conditions on the loss function, we have a pointwise expansion whose remainder satisfies $|R(x, y)| \le C\,(f_{\hat\lambda^\varepsilon} - f_{\lambda^\varepsilon})^2(x)$. Integrating with respect to $P$ yields the corresponding identity for the excess risks. It is enough to observe this and to use the bounds of Theorems 1 and 2.
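The remark above, that $\|H^{-1/2}w\|_{\ell_2}$ depends on how $w$ is aligned with the eigenspaces of $H$, is easy to check numerically. Below is a minimal Python sketch, not part of the paper: the Gram matrix and the vectors are synthetic, and the function name is invented for the example. It compares the rough alignment bound for a vector supported on the top eigenspaces of $H$ with one supported on the bottom eigenspaces.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy Gram matrix H with a wide eigenvalue spread (synthetic, for illustration).
N = 50
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # random orthonormal eigenbasis
eigvals = np.logspace(0, -6, N)                   # eigenvalues from 1 down to 1e-6
H = (Q * eigvals) @ Q.T

def rough_alignment_bound(H, w):
    """Compute ||H^{-1/2} w||_{l2}, the rough upper bound on the
    alignment coefficient discussed in the text."""
    vals, vecs = np.linalg.eigh(H)
    coeffs = vecs.T @ w          # coordinates of w in the eigenbasis of H
    return np.sqrt(np.sum(coeffs**2 / vals))

# w aligned with the top eigenspaces: the bound stays of order ||w||_{l2}.
w_top = Q[:, :5] @ rng.standard_normal(5)
# w aligned with the bottom eigenspaces: the bound blows up.
w_bottom = Q[:, -5:] @ rng.standard_normal(5)

for name, w in [("top-aligned", w_top), ("bottom-aligned", w_bottom)]:
    print(name, np.linalg.norm(w), rough_alignment_bound(H, w))
```

For the top-aligned vector the computed bound is of the order $\|w\|_{\ell_2}$, while for the bottom-aligned vector it is inflated by roughly the inverse square root of the smallest eigenvalues, consistent with the discussion above.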
CENP-C recruits M18BP1 to centromeres to promote CENP-A chromatin assembly

CENP-C provides a link between existing CENP-A chromatin and the proteins required for new CENP-A nucleosome assembly.

Introduction

Accurate chromosome segregation is essential for the faithful distribution of the genome during cell division. In mitosis, each chromosome assembles a multiprotein complex called the kinetochore that serves as the primary binding site for microtubules of the mitotic spindle. The kinetochore mediates the bipolar attachment of paired chromosomes to the mitotic spindle, monitors proper chromosome-spindle attachment through the mitotic checkpoint, and couples spindle forces to chromosome segregation at anaphase (Rieder and Salmon, 1998; Cleveland et al., 2003). The assembly site for the kinetochore is a specialized chromatin domain called the centromere that is characterized by the incorporation of the histone H3 variant CENP-A (centromere protein A) into centromeric nucleosomes (Palmer et al., 1987, 1991; Sullivan et al., 1994). CENP-A chromatin recruits a collection of 20 proteins called the constitutive centromere-associated network (CCAN) that are bound to the chromosome throughout the cell cycle and serve as the site for mitotic kinetochore assembly. Mutation or loss of either CENP-A or proteins of the CCAN results in kinetochore formation defects and chromosome missegregation (Foltz et al., 2006; McClelland et al., 2007; Hori et al., 2008; Amano et al., 2009).

Vertebrate centromeres occur at a single region along the length of the chromosome that is maintained through mitotic and meiotic divisions. Human centromeric chromatin assembles on long stretches of repetitive satellite DNA. In rare cases, de novo centromere formation can occur outside of satellite-containing DNA sequences, resulting in the formation of stably maintained neocentromeres marked by CENP-A nucleosomes. This suggests that the determinant of centromere position is the site of CENP-A nucleosome incorporation into chromatin and not DNA sequence (Voullaire et al., 1993; Barry et al., 1999; Carroll and Straight, 2006; Allshire and Karpen, 2008). CENP-A nucleosome assembly into chromatin occurs during a specific time window in the cell cycle, during telophase/G1 in somatic cells and after anaphase chromosome segregation in embryos (Jansen et al., 2007; Schuh et al., 2007; Bernad et al., 2011). How the preexisting centromere directs the local assembly of new CENP-A nucleosomes during G1 is not known.

Eukaryotic chromosomes segregate by attaching to microtubules of the mitotic spindle through a chromosomal microtubule binding site called the kinetochore. Kinetochores assemble on a specialized chromosomal locus termed the centromere, which is characterized by the replacement of histone H3 in centromeric nucleosomes with the essential histone H3 variant CENP-A (centromere protein A). Understanding how CENP-A chromatin is assembled and maintained is central to understanding chromosome segregation mechanisms. CENP-A nucleosome assembly requires the Mis18 complex and the CENP-A chaperone HJURP. These factors localize to centromeres in telophase/G1, when new CENP-A chromatin is assembled. The mechanisms that control their targeting are unknown. In this paper, we identify a mechanism for recruiting the Mis18 complex protein M18BP1 to centromeres. We show that depletion of CENP-C prevents M18BP1 targeting to metaphase centromeres and inhibits CENP-A chromatin assembly.
We find that M18BP1 directly binds CENP-C through conserved domains in the CENP-C protein. Thus, CENP-C provides a link between existing CENP-A chromatin and the proteins required for new CENP-A nucleosome assembly. assembly process. In human cells, the Mis18 complex and the HJURP chaperone complex are targeted to centromeres during late anaphase/telophase and remain associated with centromeres during G1 phase, the time window during which new CENPA assembly occurs (Fujita et al., 2007;Maddox et al., 2007;Dunleavy et al., 2009;Foltz et al., 2009;Silva and Jansen, 2009). The Mis18 complex localizes to centromeres before HJURP (Foltz et al., 2009), and HJURP fails to localize to cen tromeres in cells depleted of the Mis18 complex (Barnhart et al., 2011). It has been proposed that Mis18 complex proteins may "prime" the centromere so that it is competent to load new CENPA histone molecules (Fujita et al., 2007). However, the molecular interactions that target the Mis18 complex and the HJURP complex to centromeres are unknown. Here, we use an in vitro system for CENPA assembly on sperm chromatin in extracts of Xenopus eggs that recapitu lates the cell cycle dependence, HJURP dependence, and Mis18 complex requirements for CENPA assembly in cells. Using this in vitro system, we demonstrate that CENPC is required to target the Mis18 complex protein M18BP1 to Xenopus sperm centromeres in metaphase. In the absence of CENPC, M18BP1 and HJURP targeting to centromeres is disrupted, and new CENPA assembly into centromeric chromatin is inhibited. We find that CENPC interacts directly with M18BP1 and that this interaction requires both the conserved CENPC motif and the cupin dimerization domain. Our experimental results demonstrate a novel role for CENPC in recruiting CENPA nucleosome assembly proteins to the centromere to ensure centromere propagation. Xenopus HJURP (xHJURP) promotes CENP-A assembly in a cell-free system To study the assembly of CENPA chromatin, we developed an in vitro system for CENPA assembly in Xenopus egg extracts similar to a recently described method (Bernad et al., 2011). Xenopus sperm chromatin contains CENPA, and when that chromatin is added to metaphase egg extracts, functional cen tromeres and kinetochores assemble at the preexisting centro mere (Desai et al., 1997;Milks et al., 2009). To follow new CENPA assembly in egg extracts, we translated myc epitopetagged CENPA in the extracts and then added Xenopus sperm chromatin and calcium to drive the extract from metaphase into interphase. The translation of myc-CENPA in the extract pro duced a fivefold excess of myc-CENPA to the endogenous CENPA ( Fig. S1 A) but did not result in detectable myc-CENPA at sperm centromeres as assayed by immunofluorescence (Fig. 1, A-C; Bernad et al., 2011). We cloned the Xenopus homo logue of HJURP (xHJURP) and tested its ability to stimulate myc-CENPA assembly into centromeric chromatin. We found that adding xHJURP to the extracts and releasing the extract from metaphase arrest caused a 26 ± 11-fold stimulation in cen tromeric myc-CENPA compared with extracts lacking exoge nously added xHJURP and calcium (Fig. 1, B and C). Western blotting confirmed that the stimulation of myc-CENPA assem bly was not caused by variation in myc-CENPA protein levels Several protein complexes that govern CENPA assembly and maintenance at centromeres have been identified through ge netic and biochemical studies in yeasts, flies, worms, and humans. 
A study of chromosome missegregation mutants in Schizosaccharomyces pombe discovered the mis16 and mis18 genes, which when mutated lead to a loss of Cnp1 (S. pombe CENPA) from centromeres (Hayashi et al., 2004). S. pombe mis16 is homolo gous to the histone chaperones RbAp46 and 48, and RbAp46/48 depletion from human cells using RNAi leads to defects in CENPA assembly (Hayashi et al., 2004;Dunleavy et al., 2009). Two homologues of mis18 have been identified in vertebrates, Mis18 and Mis18, as well as an additional Mis18-Mis18 binding protein, M18BP1 (Fujita et al., 2007). A homo logue of M18BP1, knl-2 (kinetochore-null 2), was discovered through RNAi screening for chromosome segregation defects in worms (Maddox et al., 2007). Depletion of either Mis18 from human cells or KNL2 from worm embryos prevented CENPA assembly at centromeres (Fujita et al., 2007;Maddox et al., 2007). Immunoprecipitation experiments from yeast and human cells have shown that S. pombe Mis18 can coprecipitate Mis16, and both human Mis18 and M18BP1 can coprecipitate RbAp46/48 (Fujita et al., 2007;Hayashi et al., 2008;Lagana et al., 2010). Furthermore, RbAp46/48 copurifies with CENPA in affinity pu rification experiments from both human and fly cells (Furuyama et al., 2006;Dunleavy et al., 2009). Immunoprecipitation of KNL2 from Caenorhabditis elegans embryos also precipitates CENPA-containing chromatin, but the human Mis18 complex components do not appear to directly associate with soluble or nucleosomal CENPA (Hayashi et al., 2004;Fujita et al., 2007;Maddox et al., 2007;Carroll et al., 2009;Lagana et al., 2010). HJURP (Holliday junction-recognizing protein)/Scm3 is a CENPA chaperone protein that binds directly to soluble CENPA and is required for CENPA chromatin assembly. First identified as a suppressor of CSE4 (S. cerevisiae CENPA) mu tants (Chen et al., 2000), the Scm3 protein was shown to interact directly with Cse4 and was required for Cse4 association with yeast centromeres (Camahort et al., 2007;Mizuguchi et al., 2007;Stoler et al., 2007). The S. pombe homologue of Scm3 is also re quired for Cnp1 localization to centromeres in fission yeast and was shown to interact with Cnp1 and with Mis16 and Mis18, thus providing another molecular link between the Mis18 complex of proteins and CENPA Williams et al., 2009). In human cells, affinity purification of a soluble (nonchromatin associated) human CENPA complex identified four proteins (CENPA, histone H4, NPM1 [nucleophosmin 1], and HJURP) that stably associate and function as a chaperone complex for CENPA (Dunleavy et al., 2009;Foltz et al., 2009). HJURP con tains a region of homology to the yeast Scm3 proteins (the Scm3 domain), and HJURP appears to function analogously to Scm3 in fungi (SanchezPulido et al., 2009), as RNAimediated depletion of HJURP from human cells or depletion of Xenopus laevis HJURP from frog egg extracts causes defects in CENPA assem bly akin to the defects resulting from Scm3 mutation in yeast (Dunleavy et al., 2009;Foltz et al., 2009;Bernad et al., 2011). Regulating the localization of the Mis18 complex and the HJURP chaperone complex proteins is likely to be an important step in achieving spatial and temporal control of the CENPA levels of HJURP present in the extract was sufficient to promote myc-CENPA assembly (Fig. S2, B and C). In contrast to a previous study (Bernad et al., 2011), we did not detect xHJURP FLAG at metaphase centromeres ( Fig. 
S2 A), and supplement ing the extract with human HJURP (hHJURP) did not stimulate myc-CENPA assembly (Fig. S2, D-H). We measured the relative amounts of myc-CENPA and endogenous CENPA in chromatin after HJURPdependent in the extract (Fig. S1 B). Myc-CENPA was productively in corporated into chromatin as indicated by its resistance to salt extraction ( Fig. S1 C), and myc-CENPA assembly was inde pendent of DNA replication (Fig. S1, D and E). We added xHJURPFLAG to extracts and found that it lo calized to centromeres in interphase, similar to the endogenous protein ( Fig. S2 A). Western blotting with an xHJURPspecific antibody demonstrated that addition of xHJURP to 1.6fold the Representative images from xHJURP-mediated CENP-A assembly assay. The staining for myc-CENP-A, total CENP-A, or the merge of myc-CENP-A, total CENP-A, and DNA is indicated above the images. Calcium, xHJURP, or myc-CENP-A addition is shown to the left of each image row. Bar, 10 µm. (C) Quantification of myc-CENP-A fluorescence intensity at centromeres for the assembly reactions represented in B, normalized to the metaphase control sample without xHJURP but with myc-CENP-A RNA added (//+). Mean per pixel intensity at centromeres was quantified, and >200 centromeres were quantified per condition. Error bars show SEM; n = 4. (D) Western blot of chromatin fractions from CENP-A assembly reactions performed in the presence (+) of calcium, Myc-CENP-A, and xHJURP (+/+/+) or in the presence of calcium and Myc-CENP-A but lacking () xHJURP (+//+). Histone H4 was used as a loading control. n = 3. We determined the function of xM18BP1 in CENPA assembly in vitro by depleting both isoforms from egg extracts and then measuring myc-CENPA incorporation into chroma tin. In this and all subsequent CENPA assembly reactions, myc-CENPA, xHJURP, and calcium were added to control and depleted extracts (+/+/+ condition, as described in Fig. 1 B). We were able to deplete ≥90% of both isoforms from egg ex tracts, resulting in a 79 ± 1.4% reduction in M18BP1 levels at metaphase centromeres and a 93 ± 1.6% reduction in M18BP1 levels at interphase centromeres. (Fig. 3, A and B). In M18BP1 depleted extracts, we found that myc-CENPA assembly was reduced by 68 ± 5% relative to mockdepleted extracts (Fig. 3, C-E). To determine whether the defect in myc-CENPA assem bly was specific to the loss of M18BP1, we complemented the depleted extracts with endogenous levels of in vitro translated FLAGtagged M18BP11 and M18BP12 proteins and found that myc-CENPA assembly was fully restored (115 ± 18%; Fig. 3, C-E). Collectively, these data indicate that xM18BP1 is required for CENPA assembly in Xenopus egg extract, as it is in other systems. CENP-C is required for the cell cycle-dependent localization of M18BP1 to centromeres M18BP1 targeting to centromeres is thought to be important for new CENPA assembly, but how it is recruited to centromeres is unknown (Maddox et al., 2007). Immunoprecipitation of C. elegans KNL2 coprecipitates CENPA chromatin, but human M18BP1 does not appear to directly bind CENPA nucleosomes (Maddox et al., 2007;Carroll et al., 2009). However, CENPC directly binds CENPA nucleosomes and is required for new CENPA assembly (Erhardt et al., 2008;Carroll et al., 2010); thus, we tested whether CENPC might recruit M18BP1 to cen tromeres. We depleted CENPC from egg extracts (Fig. 4 A) and found that CENPC depletion reduced M18BP1 levels at meta phase centromeres by 87 ± 3% (Fig. 4, B and C). 
M18BP11, but not M18BP12, localized to metaphase centromeres, and we found that CENPC depletion only affected M18BP11-FLAG targeting to the centromere (Fig. S5 B). We confirmed by West ern blotting that the loss of M18BP1 at centromeres was not caused by codepletion of M18BP1 with CENPC ( Fig. 4 F). In contrast to the loss of M18BP1 from metaphase centro meres in CENPC-depleted extracts, M18BP1 prematurely lo calized to interphase centromeres in extracts lacking CENPC (Fig. 4,D and E). In CENPC-depleted extracts, M18BP1 asso ciated with centromeres within 45 min of release from mitosis, but in mockdepleted extracts, M18BP1 did not accumulate at centromeres until 60-75 min after mitotic exit (Fig. 2 C and Fig. 4,D and E). CENPC depletion led to increased M18BP1 at interphase centromeres (Fig. 4,D and E), resulting in approx imately threefold more centromeric M18BP1 by 90 min after mitotic exit (Fig. 4 E). Both isoforms of M18BP1 localized to interphase centromeres in the CENPC-depleted extract ( Fig. S5 B), indicating that CENPC is required for M18BP11 targeting to centromeres in metaphase extract only. M18BP1 depletion did not affect CENPC levels at centromeres, showing that the localization of CENPC does not depend on M18BP1 CENPA assembly. We recovered interphase nuclei from the assembly reactions, extracted the nuclei with 300 mM salt to re move nonnucleosomal CENPA, and then Western blotted for CENPA. We observed a twofold increase in total CENPA in chromatin upon addition of calcium and xHJURP ( Fig. 1 D), indicating that CENPA is not over assembled under these con ditions. We found that xHJURP addition led to increased incor poration of both the endogenous CENPA and myc-CENPA into chromatin. The ratio of newly assembled myc-CENPA to CENPA in chromatin corresponded to the ratio of myc-CENPA to CENPA in the extract (3.1:1 and 3.2:1, respectively), indi cating that exogenously added xHJURP stimulates assembly of endogenous and myctagged CENPA with equal efficiency (Fig. 1 D). When we omitted myc-CENPA from the assembly reactions, we detected a 1.2fold increase in the levels of CENPA in chromatin without added xHJURP (Fig. S3, A-C) and a 1.7 fold increase in chromatinincorporated CENPA with xHJURP addition (Fig. S3, B and C). We tested whether xHJURP, CENPA, and/or sperm chro matin must be present in the extract during mitotic exit to assem ble CENPA. We added xHJURP, myc-CENPA, and sperm chromatin to extracts in metaphase or interphase followed by release of the metaphase extract into interphase. CENPA as sembled into chromatin regardless of the timing of xHJURP, myc-CENPA, and sperm chromatin addition (Fig. S4, A-D), indicating that xHJURP, myc-CENPA, and sperm chromatin do not have to be present in the extract as it exits mitosis. Characterization of Xenopus M18BP1 In human somatic cells and C. elegans, new CENPA assembly requires the M18BP1/KNL2 protein (Fujita et al., 2007;Maddox et al., 2007). To understand the functions of M18BP1 in CENPA assembly, we cloned the Xenopus M18BP1 gene (xM18BP1) and found that two closely related isoforms of M18BP1 (xM18BP11 and xM18BP12) are expressed in Xenopus eggs. An antibody to a conserved region of xM18BP1 recognized both isoforms in frog egg extracts ( Fig. 2 A). Unlike CENPA, xM18BP1 was not re tained in Xenopus sperm chromatin ( Fig. 2 B). We determined the timing of M18BP1 association with centromeres in somatic cells and in egg extracts using immunofluorescence. 
We found that xM18BP1 localized to centromeres throughout mitosis in so matic cells and to sperm centromeres in metaphase egg extracts ( Fig. 2 C and Fig. S5 A). Upon release of the egg extract from metaphase into interphase, xM18BP1 was lost from centromeres within 30 min and then reaccumulated at centromeres 60-75 min after calcium addition (Fig. 2 C). Our antibody recognized both isoforms of xM18BP1; therefore, to assay the localization of each isoform separately, we expressed FLAG epitope-tagged versions of each in meta phase and interphase extracts. We found that only M18BP11 localized to metaphase centromeres ( Fig. 2 D), similar to C. elegans KNL2, which is present at centromeres throughout mito sis (Maddox et al., 2007). Both M18BP11 and M18BP12 localized to interphase centromeres ( Fig. 2 D), similar to the human M18BP1 and S. pombe Mis18 proteins, which localize to centromeres during late mitosis and G1 (Fujita et al., 2007;Maddox et al., 2007). extract with preimmune sera (Preimmune) or affinity-purified rabbit antibody raised against Xenopus M18BP1 (-M18BP1). -M18BP1 recognizes both isoforms of xM18BP1, and both isoforms of xM18BP1 are present in egg extract. The asterisk denotes a cross reacting band. (B) M18BP1 is not present on Xenopus sperm chromatin. After decondensation with Xenopus Nap1, Xenopus sperm chromatin shows staining for CENP-A but not M18BP1. (C) Time course of M18BP1 localization in the Xenopus egg extract. Xenopus sperm chromatin was incubated in metaphase extract for 30 min, and sperm chromatin was stained for M18BP1 at various time points after release from metaphase arrest. Time (minutes) after release from metaphase is indicated above each image column. M18BP1, CENP-A, or merged M18BP1, CENP-A, and DNA staining are indicated on the left. (D) M18BP1-1 assembles at sperm centromeres in metaphase extract, and both M18BP1 isoforms assemble at sperm centromeres in interphase extract. Metaphase extracts are shown on the left, and interphase extracts are shown on the right. The FLAG epitope-tagged M18BP1 isoform that was added to the extract is indicated on the left. Immunostaining is shown above the images. Bars, 10 µm. HJURP localization in Xenopus extracts. Our HJURP antibody ( Fig. S2 B) does not work well for immunofluorescence, so we assayed the centromeric localization of exogenously added xHJURPFLAG after CENPC depletion (Fig. 5, A and B). We found that CENPC depletion reduced xHJURPFLAG levels at centromeres by 57 ± 8% relative to control extracts, indicat ing that CENPC promotes localization of HJURP to centro meres (Fig. 5, A and B). We next tested whether M18BP1 depletion affected HJURPFLAG localization to centromeres using the same assay (Fig. 5,C and D). We found that centro meric levels of HJURPFLAG were reduced by 71 ± 11% in M18BP1depleted extracts relative to mockdepleted extracts (Fig. 5,C and D), indicating that M18BP1 promotes HJURP localization to centromeres in Xenopus egg extract as it does (Fig. S5,C and D). Collectively, our observations demonstrate that CENPC plays a critical role in the centromere targeting of M18BP1 during metaphase but antagonizes the localization of M18BP1 during interphase. CENP-C and M18BP1 promote centromere targeting of HJURP HJURP is a CENPA-specific chaperone that localizes to the centromere in early G1, in which it functions to assemble new CENPA nucleosomes into chromatin. HJURP localization to the centromere depends on the Mis18 complex (Dunleavy et al., 2009;Foltz et al., 2009;Barnhart et al., 2011). 
We tested whether CENPC depletion and the accompanying loss of M18BP1 from metaphase centromeres would disrupt (B) Quantification of M18BP1 fluorescence intensity at centromeres (normalized to the levels in mock-depleted extracts) after M18BP1 depletion. Quantification was performed as described in Fig. 1. Error bars show SEM; n = 3. (C) M18BP1 is required for CENP-A assembly in Xenopus extract. Representative images from CENP-A assembly assays performed using either mock-or M18BP1-depleted extracts that were supplemented with mock or M18BP1-FLAG IVT proteins. The depletion and addback conditions are listed to the left of each image row, and the immunolocalized proteins are listed above. (D) Representative Western blots of extracts from CENP-A assembly assays described in C. The top blots show the levels of the FLAG-tagged M18BP1 isoforms added back to the extract. The bottom blots show the total M18BP1 levels in the extract. Tubulin was used as a loading control. (E) Quantification of myc-CENP-A fluorescence intensity at centromeres for CENP-A assembly reactions described in C, which were normalized to the levels in mock-depleted extracts. Quantification was performed as described in Fig. 1. Error bars show SEM; n = 3. Bars, 10 µm. . CENP-C is required for M18BP1 assembly at metaphase centromeres in Xenopus egg extract. (A) Representative Western blot of mock-and CENP-C-depleted extracts. CENP-C depletion led to a >90% reduction in CENP-C levels. Tubulin was used as a loading control. (B) CENP-C depletion prevented M18BP1 assembly at centromeres in metaphase. Xenopus sperm chromatin was incubated in mock-or CENP-C-depleted metaphase extracts and stained for M18BP1 and CENP-A. The depletion conditions are listed to the left of the images, and immunolocalized proteins are listed above. (C) Quantification of M18BP1 fluorescence intensity at metaphase centromeres (normalized to the levels in mock-depleted extracts) after CENP-C depletion. Quantification was performed as described in Fig. 1. Error bars show SEM; n = 3. (D) CENP-C depletion led to premature, increased association of M18BP1 at interphase centromeres. Xenopus sperm chromatin was incubated in mock-or CENP-C-depleted extracts and stained for M18BP1 at various time points after release from metaphase arrest. Depletion conditions are listed to the left of the images, time (minutes) after calcium addition is listed below the images, and immunolocalized proteins are listed above. (E) Quantification of M18BP1 fluorescence intensity at centromeres in mock-and CENP-Cdepleted extracts at various time points after release from metaphase arrest as described in D. Values are normalized to the levels in mock-depleted extracts 75 min after calcium addition. Quantification was performed as described in Fig. 1. Error bars show SEM; n = 3. (F) CENP-C depletion does not significantly affect M18BP1 levels in extract. Mock-and CENP-C-depleted extracts were Western blotted for M18BP1 and Anillin as a loading control. M18BP1-1 and M18BP1-2 levels in CENP-C-depleted extracts are 77 ± 8 and 92 ± 11% of control extracts, respectively (n = 3; error = SEM). Bars, 10 µm. of CENPC-depleted extracts with myc-CENPC indicates that these defects are specific to the loss of CENPC and not other factors. Collectively, these data show that CENPC plays a critical role in new CENPA assembly in Xenopus egg extract by promot ing the centromeric localization of M18BP1 and/or HJURP. 
CENP-C directly interacts with M18BP1 To better understand how CENPC targets M18BP1 to meta phase centromeres, we tested whether CENPC and M18BP1 are associated in metaphase extracts. Immunoprecipitation of M18BP1 from egg extracts coprecipitated CENPC, and immunoprecipitation of CENPC coprecipitated isoform 1 of M18BP1 (Fig. 7, A and B). We confirmed that the interaction was isoform specific by translating FLAGtagged versions of xM18BP11 and xM18BP12 in Xenopus egg extracts, immuno precipitating the tagged xM18BP1 from the extract, and then Western blotting for CENPC. We found that M18BP11 pre cipitated CENPC and that this association was specific to metaphase (Fig. 7, C and D). We also detected coprecipitation of M18BP12 and CENPC from metaphase extracts, but this interaction appeared to be less robust (Fig. 7, C and D) than in human cells (Barnhart et al., 2011). The HJURP targeting defects we observed were not a result of variation in HJURP FLAG protein levels in extracts (Fig. 5, E and F). Our results demonstrate that both CENPC and M18BP1 promote the re cruitment of HJURP to centromeres. CENP-C is required for CENP-A assembly in Xenopus egg extract We determined whether CENPC depletion and the resulting defects in M18BP1 and HJURP targeting caused a defect in CENPA assembly. Depletion of CENPC reduced the levels of centromeric FLAG-CENPA by 80 ± 8.4% compared with mockdepleted extracts (Fig. 6, A and B). Complementation of the extract with myc-CENPC restored CENPC levels at cen tromeres to 40-60% of their original levels (Fig. 6, A-D; Milks et al., 2009;Carroll et al., 2010) and rescued FLAG-CENPA levels to 57 ± 9% of that of mockdepleted extracts (Fig. 6, A and B). Furthermore, complementation of CENPC-depleted extracts with myc-CENPC fully rescued the M18BP1 localiza tion defect (106 ± 21%; Fig. 6, C and D). Our ability to rescue CENPA assembly and M18BP1 localization by complementation Figure 5. CENP-C and M18BP1 promote the recruitment of HJURP to centromeres. (A) CENP-C depletion inhibited HJURP assembly at centromeres. Xenopus sperm chromatin was incubated in mock-or CENP-C-depleted extracts supplemented with HJURP-FLAG protein and stained for FLAG and CENP-A. The depletion conditions are listed to the left of the images, and immunolocalized proteins are listed above. (B) Quantification of HJURP-FLAG fluorescence intensity at centromeres (normalized to the levels in mock-depleted extracts) after CENP-C depletion. Quantification was performed as described in Fig. 1. (C) M18BP1 depletion inhibited HJURP assembly at centromeres. Xenopus sperm chromatin was incubated in mock-or M18BP1-depleted extracts supplemented with HJURP-FLAG and stained for FLAG and CENP-A. The depletion conditions are listed to the left of the images, and immunolocalized proteins are listed above. (D) Quantification of HJURP-FLAG fluorescence intensity at centromeres (normalized to the levels in mock-depleted extracts) after M18BP1 depletion. Quantification was performed as described in Fig. 1. (E) Representative Western blot from HJURP-targeting assay described in A showing that equal amounts of the HJURP-FLAG protein were added to mock-and CENP-Cdepleted extracts. Tubulin is shown as a loading control. (F) Representative Western blot from HJURP-targeting assay described in C showing that equal amounts of the HJURP-FLAG protein were added to mock-and M18BP1-depleted extracts. Tubulin is shown as a loading control. Error bars show SEM; n = 3. Bars, 10 µm. 
Using this same in vitro binding assay, we mapped the region of CENPC required for interaction with M18BP11 (Fig. 8, E and F). We found that a fragment of CENPC spanning amino acids 1,191-1,350 was sufficient to bind M18BP11 (Fig. 8 F). This fragment contains the conserved CENPC signature motif (residues 1,204-1,226) and the Nterminal half of the con served cupin domain (residues 1,311-1,397), which mediates dimerization of CENPC (Fig. 8 E). We found that removal of either the CENPC motif or the cupin domain prevented the binding of CENPC to M18BP11 (Fig. 8 F). Despite extensive mutagenesis of conserved amino acids throughout the 1,191-1,350 region of CENPC, we have been unable to isolate a point mutant that disrupts the interaction between M18BP11 and CENPC and yet still localizes normally to centromeres (unpublished data). the interaction between M18BP11 and CENPC given that equal amounts of FLAGtagged M18BP1 isoforms were pres ent in the extracts (Fig. 7 D). Together, this suggests that CENPC's association with M18BP1 in metaphase may re cruit M18BP1 to mitotic centromeres. We determined whether CENPC and M18BP1 bind one another by testing the coprecipitation of the two proteins translated in rabbit reticulocyte extracts in the presence of [ 35 S]methionine. The two isoforms of M18BP1 or mCherrytubulin as a control were mixed with unlabeled CENPC and immunoprecipitated with either CENPC antibody or IgG. We found that both M18BP11 and M18BP12 specifically associated with CENPC, indicating that they directly interact (Fig. 8, A and B). This interaction was conserved in humans, as human M18BP1 and CENPC bind to one another in the same assay (Fig. 8, C and D). . CENP-C depletion inhibits M18BP1 targeting to metaphase centromeres and xHJURP-dependent CENP-A assembly. (A) Depletion of CENP-C inhibits xHJURP-dependent CENP-A assembly. Representative images from CENP-A assembly assays were performed using mock-or CENP-C-depleted extracts supplemented with mock or myc-CENP-C IVT protein. The depletion and addback conditions are indicated to the left of the images, and immunolocalized proteins are listed above. (B) Quantification of myc-CENP-A fluorescence intensity at centromeres for CENP-A assembly reactions described in A, which were normalized to the levels in mock-depleted extracts. Quantification was performed as described in Fig. 1. (C) Addback of myc-CENP-C to CENP-C-depleted metaphase extract restores M18BP1 localization to centromeres. Representative images of sperm chromatin incubated in mock-or CENP-C-depleted metaphase extracts supplemented with mock or the myc-CENP-C IVT protein. The depletion and addback conditions are indicated to the left of the images, and immunolocalized proteins are listed above. (D) Quantification of CENP-C and M18BP1 levels in mock-or CENP-C-depleted metaphase extracts that were supplemented with mock or myc-CENP-C protein. Quantification was performed as described in Fig. 1. Error bars show SEM; n = 3. Bars, 10 µm. extracts is dependent on the CENPA assembly factors HJURP and M18BP1, requires the exit from mitosis, and is independent of DNA replication (Shelby et al., 2000;Fujita et al., 2007;Jansen et al., 2007;Maddox et al., 2007;Dunleavy et al., 2009;Foltz et al., 2009;Bernad et al., 2011). Our observations on CENPA assembly using this assay are similar to a recent study (Bernad et al., 2011) with some noteworthy differences. 
In con trast to the previous study, we observed no effect on myc-CENPA assembly when we supplemented egg extracts with hHJURP, but we observed a 30fold stimulation of myc-CENPA assembly when we supplemented the extract with xHJURP. We also found that the assembly of exogenously added CENPA continued throughout interphase rather than being restricted to a narrow window before replication. The differences we observe may be because there is a limited pool of CENPA in the extract and augmenting that pool with additional CENPA allows for CENPA assembly at any time during interphase. CENP-A assembly to the centromere We find that CENPC is required for M18BP1 targeting to meta phase centromeres and that this is likely through direct inter action between the two proteins. M18BP1 depletion does not affect the centromeric localization of CENPC, indicating that CENPC functions upstream of M18BP1 in Xenopus egg extract. This result Discussion How centromeres are maintained at the same locus through cell divisions and organismal generations is not well under stood. In this study, we developed an in vitro assay for CENPA chromatin assembly in Xenopus egg extracts and used it to study the mechanisms that recruit the factors required for new CENPA nucleosome assembly to the centromere. Previous studies have shown that CENPC binds to CENPA nucleo somes and is required for CENPA assembly (Goshima et al., 2007;Erhardt et al., 2008;Carroll et al., 2010). We found that CENPC is also required for M18BP1 recruitment to meta phase centromeres, and the two proteins interact directly through two highly conserved domains within CENPC, the CENPC motif and the cupin domain. In the absence of CENPC, M18BP1 fails to target to centromeric chromatin, centromeric levels of HJURP are reduced, and new CENPA nucleosome assembly is inhibited. This suggests that a key function for CENPC is to recruit factors required for new CENPA assem bly to the existing centromere. In vitro CENP-A chromatin assembly The in vitro CENPA assembly assay described here recapitu lates several key aspects of CENPA assembly in somatic cells. Namely, assembly of epitopetagged CENPA in Xenopus Figure 7. CENP-C associates with M18BP1-1 in metaphase extract. (A) M18BP1 coprecipitates CENP-C from Xenopus egg extract. Western blot for CENP-C after immunoprecipitation (IP) of metaphase Xenopus extract with IgG or -M18BP1. (B) CENP-C coprecipitates M18BP1 from the Xenopus egg extract. Western blot for M18BP1 after immunoprecipitation of metaphase Xenopus extract with IgG or -CENP-C. We only detected a single M18BP1 isoform (asterisk) coprecipitating with CENP-C, which migrates at the molecular mass of isoform 1. (C) M18BP1-1 associates with CENP-C in metaphase extract. FLAG epitope-tagged M18BP1 IVT proteins were added to metaphase or interphase extracts, reactions were split in half, and M18BP1-FLAG proteins were immunoprecipitated with control (IgG) or -FLAG (FL) antibodies. A Western blot of the immunoprecipitates for CENP-C is shown. We have demonstrated a role for CENPC in CENPA as sembly in Xenopus egg extracts. Complementation of CENPCdepleted extracts to 40-60% of the levels in mockdepleted extracts fully restored M18BP1 targeting to centromeres, whereas CENPA assembly was only rescued to 60% of the levels in control extracts. This suggests that other factors that are depen dent on CENPC may act in concert with M18BP1 to promote CENPA assembly. 
CENP-C interacts directly with M18BP1 in a cell cycle-dependent manner Our observations have identified a second important function for CENPC in recruiting M18BP1 to centromeres to promote contrasts with observations in C. elegans, in which KNL2 de pletion prevented localization of CeCENPC to centromeres. However, KNL2 depletion in C. elegans embryos also caused CeCENPA loss from centromeric chromatin, which may have in turn prevented CeCENPC targeting (Maddox et al., 2007). We also find that CENPC depletion reduces centromeric HJURP levels. M18BP1 depletion leads to a comparable defect in HJURP targeting, and HJURP requires M18BP1 for its localiza tion to centromeres in human cells; thus, it is likely that the role of CENPC in HJURP localization is through M18BP1 recruit ment. However, we cannot exclude the possibility that CENPC and M18BP1 function in parallel pathways to promote HJURP recruitment to centromeres in Xenopus extract. Figure 8. CENP-C directly interacts with M18BP1 through two conserved domains in CENP-C. (A) xCENP-C binds directly to both isoforms of xM18BP1. A representative gel from an xCENP-C-xM18BP1 in vitro binding assay is shown. [ 35 S]methionine-labeled in vitro translated xM18BP1-1 or xM18BP1-2 was mixed with unlabeled in vitro translated xCENP-C and immunoprecipitated with control (IgG) or -xCENP-C antibodies. Samples were run on an SDS-PAGE gel, and bound xM18BP1 protein was detected by autoradiography. mCherry (mCh)-tubulin was used as a negative control. The addition of each component is shown at the top of the gel. (B) Quantification of the CENP-C-M18BP1 in vitro binding assay represented in A. (C) hCENP-C (hC) binds directly to hM18BP1. Representative gel from an hCENP-C-hM18BP1 in vitro binding assay. [ 35 S]methionine-labeled in vitro translated hM18BP1 was mixed with unlabeled in vitro translated hCENP-C and immunoprecipitated with IgG or -hCENP-C antibodies. Samples were run on an SDS-PAGE gel, and bound hM18BP1 protein was detected by autoradiography. mCherry-tubulin was used as a negative control. (D) Quantification of hCENP-C-hM18BP1 in vitro binding assay represented in C. (E) Schematic of CENP-C domains. (F) M18BP1 binding to CENP-C requires the CENP-C motif (amino acids 1,204-1,226) and the N-terminal half of the cupin domain (amino acids 1,311-1,397). In vitro binding assays with CENP-C fragments were performed as described in A. Domain mapping of the entire protein is shown to the left of the black vertical line, and fine-scale mapping of the C terminus is shown to the right. Error bars show SEM; n = 3. IP, immunoprecipitation. CENPA assembly. CENPC binds directly to M18BP1 in vitro, and this interaction requires the highly conserved CENPC motif and the Nterminal half of the CENPC cupin domain. The CENPC box is an evolutionarily conserved motif found in all CENPC homologues that is required for CENPC targeting to centromeres (Meluh and Koshland, 1995;Fukagawa et al., 2001;Heeger et al., 2005;Milks et al., 2009), and the Cterminal cupin domain mediates dimerization of CENPC (Sugimoto et al., 1997;Cohen et al., 2008). It will be interesting to deter mine whether CENPC dimerization is required for the recruit ment of M18BP1 to CENPA chromatin and how the CENPC box and cupin domain influence CENPC binding to both CENPA nucleosomes and M18BP1. Both isoforms of M18BP1 bind to CENPC in vitro, but in egg extract, we primarily detect an interaction with M18BP11 that is specific to metaphase. 
This suggests that the interaction between M18BP12 and CENPC is inhibited in extracts and that the interaction between M18BP11 and CENPC is cell cycle regulated. Currently, we do not know whether the interaction between M18BP1 and CENPC is regulated through CENPC, M18BP1, or both. Previous studies have demonstrated that CENPC is regulated differently in mitosis and interphase. In human cells, CENPC stably associates with mitotic centro meres but is dynamic in interphase (Hemmerich et al., 2008), and in chicken DT40 cells, CENPC targeting to centromeres depends on CENPH/I in interphase but not metaphase (Kwon et al., 2007). Understanding the cell cycle-dependent mecha nisms that regulate the interaction between M18BP1 and CENPC is likely to provide important insight into the process of new CENPA nucleosome assembly. We have shown that human M18BP1 and CENPC di rectly interact, but unlike xM18BP1, human M18BP1 does not localize to metaphase centromeres. In contrast to xM18BP11, purification of M18BP1 from asynchronous HeLa cell extracts did not identify CENPC as an M18BP1interacting protein (Fujita et al., 2007;Lagana et al., 2010). The interaction be tween M18BP1 and CENPC in human cells may have been below the detection limit in these experiments because the interaction is mitosis specific, and only a fraction of the total M18BP1 in extract is associated with CENPC. It will be im portant to understand whether CENPC plays a role in M18BP1 targeting to human centromeres, whether the human CENPC-M18BP1 interaction is regulated by the cell cycle, and how this interaction relates to the isoform specific interactions we have described in Xenopus. Regulation of M18BP1 localization in interphase In the absence of CENPC, M18BP1 accumulates at interphase centromeres precociously and to higher levels. This demon strates a role for CENPC in regulating the interphase local ization of M18BP1 and indicates that there exists a second, CENPC-independent mechanism for targeting M18BP1 to centromeres. We have previously shown that CENPK does not localize to centromeres in CENPC-depleted extract (Milks et al., 2009). Moreover, in human cells, removal of CENPC causes the loss of multiple CCAN proteins from the centromere, including CENPH, I, K, and T (Carroll et al., 2010;Gascoigne et al., 2011). Thus, one possibility is that CENPA chromatin itself recruits M18BP1 to interphase centromeres. M18BP1 is required for CENPA assembly during the exit from mitosis, but it is possible that it performs an additional function during interphase. The GTPaseactivating protein MgcRacGAP was recently identified as an M18BP1associated protein that is required for the maintenance of newly assembled CENPA nucleosomes at interphase centromeres (Lagana et al., 2010). It will be interesting to test whether M18BP1 might function in interphase to recruit MgcRacGAP to stabilize newly assembled CENPA. Collectively, our observations demonstrate that CENPC plays an essential role in CENPA assembly by localizing M18BP1 and HJURP to centromeres. By directly linking CENPA nu cleosomes with M18BP1, CENPC may provide a mechanism to reinforce new CENPA assembly at the site of preexisting centromeres and, thereby, ensure accurate centromere propaga tion. An important future goal will be to understand how the factors required for new CENPA assembly function in con cert to transfer soluble CENPA into chromatin. 
The ability to assemble centromeric chromatin in vitro will provide a valuable tool for dissecting the biochemical and cell cycle regulatory mechanisms that control new CENPA assembly and ensure faithful centromere propagation. Materials and methods Cloning and antibody generation xHJURP (available from GenBank/EMBL/DDBJ under accession no. HQ148662) and M18BP1 were identified through BLAST (Basic Local Alignment Search Tool) analysis and were amplified from a Xenopus ovary cDNA library by PCR. We identified two separate, but closely related, isoforms of xM18BP1, which we have designated xM18BP1-1 (GenBank accession no. HQ148660) and xM18BP1-2 (GenBank accession no. HQ148661). For in vitro production of RNA or proteins, genes were cloned into the AscI and PacI sites of modified pCS2+ vectors containing insertions of epitope tag sequences adjacent to the multiple cloning site (MCS). For expression of hM18BP1, xCENP-C, xHJURP, and hHJURP in rabbit reticulocyte lysate, the xCENP-C, xHJURP, and hHJURP sequences were codon optimized for Escherichia coli and rabbit reticulocyte expression by gene synthesis (DNA 2.0). Plasmids used in this study are listed in Table I. Antibodies to M18BP1 and HJURP were produced in rabbits (Cocalico Biologicals). For xM18BP1 antibody production, a fragment of xM18BP1-2 spanning amino acids 161-415 was expressed as a GST fusion in E. coli and affinity purified on glutathione agarose (Sigma-Aldrich) according to the manufacturer's instructions. To generate an affinity column for antibody purification, a fragment of xM18BP1-1 spanning amino acids 161-375 was expressed as a GST fusion in E. coli and purified using glutathione agarose. Rhinovirus 3C protease was used to cleave xM18BP1-1 161-375 from the GST tag after fusion protein binding to the glutathione agarose column. xM18BP1-1 161-375 was further purified by chromatography on S-Sepharose by using a linear gradient from 25 mM to 1 M NaCl in 20 mM MES, pH 6.0, 10% glycerol, and 1 mM DTT. The protein was then coupled to N-hydroxy succinimide-activated Sepharose 4 Fast Flow (GE Healthcare). For xHJURP antibody production, xHJURP was expressed as a GST fusion protein and purified using glutathione agarose. To generate an affinity column for antibody purification, a fragment of xHJURP spanning amino acids 42-194 was expressed with an N-terminal 6His tag in E. coli, affinity purified on nickel-nitrilotriacetic acid agarose (QIAGEN) according to the manufacturer's instructions, and coupled to N-hydroxy succinimide-activated Sepharose 4 Fast Flow. Other antibodies used in this study that are not commercially available are -xCENP-A (Maddox et al., 2003), -xCENP-C (Milks et al., 2009), -xAnillin of excess buffer, eggs were centrifuged in a rotor (SW50.1; Beckman Coulter) for 15 min at 10,000 rpm. The soluble cytoplasmic material was removed from the centrifuge tube and supplemented with energy mix (7.5 mM creatine phosphate, 1 mM ATP, and 1 mM MgCl 2 ), 50 mM sucrose, 10 µg/ml LPC, and 10 µg/ml cytochalasin D. Cycloheximide (Sigma-Aldrich) was added to extracts to 100 µg/ml, BrdU (Sigma-Aldrich) was added to extracts to 40 µM, and aphidicolin (Sigma-Aldrich) was added to extracts to 50 µg/ml. Xenopus tissue-culture (S3) cells were grown in 70% Leibovitz's L-15 media (Invitrogen) supplemented with 15% heat-inactivated fetal bovine serum, 100 U/ml penicillin, and 0.1 mg/ml streptomycin. 
CENP-A assembly assay in Xenopus extract The TnT Sp6 Coupled Rabbit Reticulocyte System (Promega) was used for in vitro transcription/translation (IVT) of plasmid DNA according to the manufacturer's protocol, except twice the recommended amount of DNA was added. RNA was produced using the either the mMESSAGE mMACHINE SP6 kit (Invitrogen) or the RiboMAX SP6 Large Scale RNA Production System (Straight et al., 2005), and -myc (Milks et al., 2009). The supernatant of 9E10 mouse hybridoma cells was used directly for myc immunofluorescence. The phospho-wee1 antibody was provided by J.E. Ferrell (Stanford University, Stanford, CA). Antibodies used in this study are listed in Table II. Immunodepletions Depletion experiments were performed using affinity-purified antibodies bound to protein A beads (Dynabeads; Invitrogen). For 100 µl extract, 2.5 µg -xM18BP1 antibody or 0.6 µg -CENP-C antibody was bound to 33 µl of beads in 10 mM Tris-HCl, pH 7.4, 150 mM NaCl, and 0.1% Triton X-100 for 1 h at 4°C. An equivalent amount of whole-rabbit IgG was used for control depletions. The beads were then washed and resuspended in extract for 1 h at 4°C. Beads were removed from the extract by two 5-min rounds of exposure to a magnet. Immunoblotting Samples were separated by SDS-PAGE and transferred onto polyvinylidene fluoride membrane (Bio-Rad Laboratories). Samples were transferred in CAPS transfer buffer (10 mM 3-(cyclohexylamino)-1-propanesulfonic acid, pH 11.3, 0.1% SDS, and 20% methanol) for CENP-A, CENP-C, H2A, and H4 or Tris-glycine transfer buffer (20 mM Tris-HCl and 200 mM glycine) for xHJURP, hHJURP, and M18BP1. Alexa Fluor 488-or Alexa Fluor 647-conjugated goat anti-rabbit or anti-mouse antibodies (1:2,500; Invitrogen) were used for detection by fluorescence using a gel imager (Typhoon 9400 Variable Mode Imager; GE Healthcare). For detection of xM18BP1, a tertiary amplification step was performed using Dylight 649conjugated bovine anti-goat (1:2,500; Jackson ImmunoResearch Laboratories, Inc.). Fluorescence of detected bands was background corrected and quantified using ImageJ (National Institutes of Health). Background correction was performed by measuring the integrated intensity of a region encompassing the detected band and then subtracting from that the intensity as measured in an identical region below the band in the same lane. For egg extract experiments, 2 µl extract was loaded into each lane. Isolation of chromatin after CENP-A assembly reactions CENP-A assembly reactions were prepared as described under the CENP-A assembly assay in Xenopus extract subheading, except reactions were prepared to a final volume of 500 µl. 60 min after calcium addition, assembly reactions were diluted into 2 ml dilution buffer (BRB-80 [80 mM K-Pipes, pH 6.8, 1 mM MgCl 2 , and 1 mM EGTA], 30% glycerol, 0.5% Triton X-100, and 300 mM KCl) and incubated for 5 min at room temperature. Reactions were then overlaid onto 5 ml of cushion (40% glycerol/80 mM Pipes, pH 6.8, 1 mM MgCl 2 , and 1 mM EDTA) in a 15-ml conical tube and centrifuged in a rotor (JS4.2; Beckman Coulter) for 20 min at 3,500 rpm. and Ribo m 7 G cap analogue (Promega), and RNA was added to the extract to a final concentration of 2 µg RNA per 100 µl extract. For standard CENP-A assembly assays, 2 µl HJURP IVT protein was added per 20 µl assembly reaction. 
HJURP IVT protein and myc-xCENP-A RNA were added to CSF extract, reactions were incubated for 30 min at 16-20°C to allow for translation of myc-xCENP-A, and then cycloheximide was added to block further translation. Subsequently, demembranated sperm (final concentration of 3 × 10⁵ sperm per 100 µl extract) and CaCl₂ (final concentration of 0.75 mM) were added, and the reaction was incubated for an additional 60-75 min to allow for release into interphase. Reactions were then processed for immunofluorescence as described under the Immunofluorescence subheading. For experiments to test replication dependence of CENP-A assembly, DMSO or aphidicolin was added concurrent with xHJURP IVT protein and myc-CENP-A RNA. For CENP-A assembly assays using M18BP1-depleted extracts (Fig. 3, C and D), 1 µl HJURP IVT protein was added per 20 µl assembly reaction. For CENP-A assembly assays using CENP-C-depleted extracts (Fig. 6, A and B), HJURP RNA was used instead of the HJURP IVT protein. Protocol adjustments were made so that the volume of IVT lysate added was never >10% of the total reaction volume. The localization of M18BP1 and CENP-A on demembranated sperm nuclei was assayed by adhering sperm to poly-L-lysine-coated acid-washed coverslips without fixative, treating sperm with 5 µM recombinant Xenopus Nap1 in sperm dilution buffer (10 mM K⁺-Hepes, pH 7.7, 1 mM MgCl₂, 100 mM KCl, and 150 mM sucrose) for 30 min, and then fixing sperm for 5 min using 2% formaldehyde. Sperm chromatin from egg extract experiments was washed in dilution buffer (BRB-80 [80 mM K-Pipes, pH 6.8, 1 mM MgCl₂, and 1 mM EGTA], 30% glycerol, 0.5% Triton X-100, and 150 mM KCl) for 5 min, fixed for 5 min using 2% formaldehyde, and spun through 40% glycerol-BRB-80 cushions onto coverslips. For experiments to test for stable incorporation of myc-CENP-A into chromatin, reactions were washed for 5 min in dilution buffer with increasing concentrations of KCl (0 mM-1 M) before fixation. Sperm immunostained for BrdU were postfixed with cold methanol, treated with 2 N HCl/0.5% Triton X-100 to denature DNA, and then neutralized with 0.1 M borate, pH 8.5, before immunostaining. After fixation, coverslips were blocked in antibody dilution buffer and then immunostained with the antibodies indicated in Fig. S1. Alexa Fluor-conjugated secondary antibodies were used according to the manufacturer's specifications (Invitrogen). When required, coverslips were blocked using 1 mg/ml whole-rabbit IgG or whole-mouse IgG (Jackson ImmunoResearch Laboratories, Inc.). Online supplemental material Fig. S1 characterizes the properties of xHJURP-mediated CENP-A assembly in the Xenopus egg extract. Fig. S2 shows the localization of xHJURP to interphase centromeres and its ability to stimulate myc-CENP-A assembly. Fig. S3 shows xHJURP's ability to promote the assembly of the endogenous CENP-A present in extract. Fig. S4 characterizes the cell cycle requirements for CENP-A assembly. Fig. S5 provides additional characterization of M18BP1. Online supplemental material is available at http://www.jcb.org/cgi/content/full/jcb.201106079/DC1. The authors would like to thank the Straight Laboratory members for support and helpful comments and James E. Ferrell for reagents. B.
Moree was supported by T32GM007276, C.B. Meyer was supported by grants from the National Science Foundation and the National Defense Science and Engineering Graduate Fellowship Program, and C.J. Fuller was supported by a Stanford Graduate Fellowship. This work was supported by National Institutes of Health R01GM074728 to A.F. Straight. Submitted: 13 June 2011 Accepted: 17 August 2011 Microscopy All coverslips were mounted onto glass slides with mounting media (0.5% p-phenylenediamine, 20 mM Tris, pH 8.8, and 90% glycerol) and sealed with clear nail polish. Immunofluorescence microscopy images were acquired at room temperature using a 60× 1.4 NA Plan Apochromat VC oil immersion lens (Nikon), a Sedat quad filter set (Chroma Technology Corp.), and a charge-coupled device camera (CoolSNAP HQ; Photometrics) mounted on a microscope (Eclipse 80i; Nikon). Wavelength selection was performed with a Lambda 10-3 controller and 25-mm high-speed filter wheels (Sutter Instrument). Z sections were acquired with a z-axis drive (MFC2000; Applied Scientific Instrumentation) at 0.2-µm intervals. Microscope instrumentation was controlled via Metamorph imaging software (Molecular Devices). Immunofluorescence microscopy images were also acquired at room temperature using a 60× 1.4 NA Plan Apochromat oil immersion lens (Olympus), a Sedat quad filter set (Semrock, Inc.), and a charge-coupled device camera (CoolSNAP HQ) mounted on a microscope (IX70; Olympus) outfitted with a Deltavision Core system (Applied Precision). Microscope instrumentation was controlled via softWoRx 4.1.0 software (Applied Precision). Maximum intensity projections were generated from the z sections and then analyzed with custom-written software. Images of Xenopus S3 cells were deconvolved using softWoRx 4.1.0. All images were postprocessed in Photoshop (Adobe) after data analysis for display purposes. Images were cropped and contrast adjusted. No changes to image  were performed. Quantification of myc-CENP-A loading percentage To quantify the proportion of centromeres that had loaded myc-CENP-A, we fit a Gaussian distribution to a histogram of the background subtracted per pixel intensity values at every centromere in the no-HJURP condition (best fit parameters: mean = 21.8, standard deviation = 52.39, and R 2 = 0.99). We then chose a cutoff for myc-CENP-A loading as the background-subtracted intensity value, in which the cumulative distribution function of the fitted Gaussian was equal to 0.992 (corresponding to a false positive rate of 1 in 125). For each condition and each experiment, we totaled the number of centromeres whose intensity was above this cutoff and divided by the total number of centromeres counted in that condition and experiment to determine the proportion of centromeres loading myc-CENP-A. At least 250 centromeres were counted in each condition in each experiment. Image analysis An automated image analysis was performed using custom-written software in C++. Source code and documentation for all programs used in this study are available under the Mozilla Public License as previously described (Fuller and Straight, 2010). The analysis code consists of four steps: image normalization, centromere finding, cell clustering, and image quantification. In brief, images were normalized by median filtering the image and then dividing the original image by the filtered intensity value. 
Centromeres were found using Otsu's thresholding method (Otsu, 1979), modified to recursively divide regions larger than centromeres (Xiong et al., 2006), and then size filtered to remove objects much larger or smaller than centromeres. Centromeres were grouped into cells using Gaussian mixture model clustering (McLachlan and Basford, 1988). The initial guess for clustering the image was made by first heavily Gaussian blurring the image, thresholding the image for nonzero pixel intensities, and using the resulting regions as a first cluster approximation. These initial clusters were then refined by randomly subdividing the image and then iteratively Gaussian mixture model clustering until no new successful subdivisions were made. Centromere intensities were then quantified in the original image, and the mean over the entire image was calculated. The images were manually examined to exclude misidentified centromeres, but the automated analysis was accurate enough that the manual examination did not measurably change the results. The validity of the segmentation method was verified by manual analysis of a subset of images (in which centromeres were identified by drawing circles around them); no significant difference between the two methods could be detected.
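For readers who want to reproduce the cutoff logic, here is a minimal Python sketch of the loading-percentage calculation described above (scipy is an assumed dependency; the intensity data are fabricated, and only the fit parameters and the 0.992 cutoff come from the text):

```python
import numpy as np
from scipy.stats import norm

# Gaussian fit to background-subtracted per-pixel centromere intensities
# in the no-HJURP condition (values quoted in the text).
FIT_MEAN, FIT_STD = 21.8, 52.39

def loading_cutoff(p=0.992, mean=FIT_MEAN, std=FIT_STD):
    """Intensity value where the fitted Gaussian CDF equals p.
    p = 0.992 corresponds to a false-positive rate of 1 in 125."""
    return norm.ppf(p, loc=mean, scale=std)

def proportion_loaded(centromere_intensities, cutoff):
    """Fraction of centromeres whose background-subtracted intensity
    exceeds the cutoff."""
    intensities = np.asarray(centromere_intensities, dtype=float)
    return float((intensities > cutoff).mean())

# Example with synthetic data standing in for measured intensities:
rng = np.random.default_rng(0)
fake_intensities = rng.normal(200.0, 80.0, size=300)  # a "loaded" condition
cutoff = loading_cutoff()
print(f"cutoff = {cutoff:.1f}")
print(f"proportion loaded = {proportion_loaded(fake_intensities, cutoff):.2f}")
```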
Software Fault-Proneness Analysis based on Composite Developer-Module Networks

Existing software fault-proneness analysis and prediction models can be categorized into software-metrics approaches and visualized approaches. However, studies of software metrics rely solely on quantified data, while the latter fail to reflect the human aspect, which has been shown to be a main cause of failures in many domains. In this paper, we propose a new analysis model built on an improved software network called the Composite Developer-Module Network. The network is composed of links both from developers to software modules and between software modules, so as to reflect the characteristics of developers and the interactions between them. After the networks of the research objects are built, several different sub-graphs are derived from them; by analyzing which sub-graph structures are more fault-prone, we can determine whether the software development is in a bad structure and thus predict fault-proneness. Our research shows not only that different sub-structures are a factor in fault-proneness, but also that the complexity of a sub-structure affects the production of bugs.

I. INTRODUCTION
Program failure has always been a major concern in software development [131], [134]. Nevertheless, as software scale and complexity continuously increase, the cost of software failure escalates rapidly. According to a study by the National Institute of Standards and Technology, the annual cost of software bugs in the U.S. reached $59.5 billion in 2002 [85]. To achieve more cost-effective management, software revalidation and testing techniques, such as fault localization, have become an indispensable part of the software development and evolution process [12]. Although fault localization techniques are becoming increasingly mature, operating them with a high degree of precision is still expensive. Therefore, applying fault-proneness prediction beforehand is a prudent step toward ensuring software quality [18], [30], [48], [51], [62], [68], [91], [94], [128]. In this paper, we propose a new approach to building a fault-proneness analysis model that integrates human aspects by using complex network techniques. Recent studies [34], [141] have shown that certain structural patterns in software networks have high correlations with bugs or unexpected system behavior. On top of that, our research aims to integrate the patterns of developers' activity into the software network; by doing so, we gain a more comprehensive view of deeper and indirect developer-module dependencies, which can further help to analyze the relationship between developers' behaviors and the bugs they introduce. The rest of the paper is structured as follows. Section II introduces the basic idea of software fault-proneness prediction using various approaches, how developers' aspects can be included in a software fault-proneness prediction model, and the implementation of complex networks on software systems. The proposed Composite Developer-Module Networks and their attributes are presented in Section III. Our case studies and experiment results are given in Section IV. Section V discusses some threats to the validity of our study, and some related work is introduced in Section VI. Finally, the conclusions of our research and future work are given in Section VII.
II. BACKGROUND
In this section, we provide a brief introduction to software metrics and how they are utilized in building software fault-proneness prediction models. Additionally, we introduce the idea of integrating human aspects into the prediction model and how this idea can be implemented through software network analysis.

A. SOFTWARE METRICS
There are two major categories of software metrics, namely, product metrics and process metrics [140]. Product metrics are used to measure aspects of the software per se. Product metrics can be categorized into generic metrics and special metrics, and generic metrics consist of two categories: static metrics, which can be calculated from attributes of the source code, and dynamic metrics, which can only be collected during runtime. To name a few, static metrics include Lines of Code [88], McCabe Complexity [71], and the C.K. metrics [24], while dynamic metrics include dynamic coupling metrics [3], runtime cohesion metrics [76], and code coverage-based metrics [23], [77], [135]. Both static and dynamic metrics can be further divided into internal metrics, which examine the internal structure of a software module, and external metrics, which focus on the interactions between separate software modules [132], [133]. Special metrics such as [2] measure attributes that are not directly related to the target software. Various commercial and free tools on the market provide features for collecting these metrics, such as SciTools Understand [108]. Process metrics are used to analyze the process of software development and evolution; some examples are code churn [81], [112], social network analysis-based metrics [9], [13]-[15], [20], [122], developer-based metrics [95], [127], [137], and organizational metrics [8], [79], [83]. FIGURE 1 shows the hierarchy of the software metrics. In this paper, although we do not use any single metric for fault-proneness prediction, we still employ the concepts of static product metrics (the module network), process metrics (the developer network), and the external structure of software modules to build our model.

B. METHODS TO BUILD FAULT-PRONENESS PREDICTION MODELS
Once we have software metrics as indicators that reflect the characteristics of software, a prediction model can be built to produce results that can be used to predict fault-proneness. To operate an effective fault-proneness prediction, many factors need to be considered during model construction [59]. Regression analysis is a straightforward method that measures the dependency of variables [17]. A regression model estimates the value of some dependent variable through a regression equation that takes the values of several independent variables as inputs. Regarding software fault-proneness prediction, many forms of regression can be applied as prediction models. Take linear regression models as an example: the inputs can be a set of software metrics, and the output is the tendency toward fault-proneness [6], [38], [41], [64], [121]. Alternatively, logistic regression, another commonly used regression method, takes one of two different values on the dependent variable to divide the software modules into fault-prone and non-fault-prone classes [4], [29], [31], [58], [78], [114], [115], [143], [144].
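To make the logistic-regression approach concrete, here is a minimal sketch of classifying modules as fault-prone from a handful of metrics (the metric choices, data, and scikit-learn dependency are illustrative assumptions, not taken from any cited study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical per-module metrics: [lines of code, McCabe complexity, code churn].
X = np.array([
    [120,  4,  2], [650, 22, 15], [80,  2,  1], [900, 35, 30],
    [300, 10,  5], [450, 18, 12], [50,  1,  0], [700, 28, 20],
])
# 1 = fault-prone, 0 = non-fault-prone (labels would come from the bug history).
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("predicted classes:", model.predict(X_test))
print("fault-proneness probabilities:", model.predict_proba(X_test)[:, 1])
```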
Machine learning is a comparatively advanced approach used in software fault-proneness prediction that is based on various learning algorithms. With a given training set of input values (software metrics), the desired training output (an index of fault-proneness) can be approached by iterative learning procedures, from which fault-proneness prediction models can be built. Below are some commonly used techniques with brief introductions:
• Artificial neural networks (ANNs) [7], [52], [120], [142]: use a pre-defined network topology to run the learning algorithm. Different network designs can considerably affect the training results.
• Decision tree-based modeling [54]-[56], [100], [112]: recursively partitions the data into smaller subsets. The technique is easier to interpret on larger data sets.
• Naïve Bayesian classifier [19], [75], [123] and the Bayesian Belief Network (BBN) [1], [28], [93], [96]: both are based on Bayes' theorem. The Naïve Bayesian classifier assumes that the values of attributes are independent; if dependencies exist between attributes, a BBN can be used instead.
• Support vector machine [32], [39], [53], [138]: maps the training tuples into a higher dimension and searches for the linear optimal separating hyperplane to separate new tuples into different classes.
• Discriminant analysis [35], [50], [92], [119]: a discriminant function is built from the training dataset to assign data points to either the desired (fault-prone) or non-desired (non-fault-prone) group.

C. DEVELOPER ASPECTS
Since software is a pure cognitive product of human developers, the flaws in it are significantly caused by erroneous behaviors of the humans involved [44]-[47], [66], [126], [127]. Therefore, several studies have tried to clarify the precise association between human aspects and software bugs. One approach isolates the consistent characteristics of a human individual and evaluates their behavior, which can be used to predict his/her future performance [11], [43], [60], [70], [95]. Another approach tries to classify the specific working environments and activities during the development process that could affect the performance of developers [9], [33], [83], [113]. Since the latter approach focuses more on human activities and interaction than on characteristics, it can be useful when there is no sufficient historical data on individual developers, and it is closer to our approach.

D. SOFTWARE NETWORK
A modern software system is composed of various elements, such as functions and variables in software modules, and commit records in development. These elements all depend on each other in some way. Therefore, we can construct an abstract model as a complex network if we regard the elements as nodes and the relationships as edges; such a network is called a software network. Studies have shown that the theory of complex networks can be useful in the software reliability domain because of small-world effects and scale-free properties [82], [124]. Also, the principles of high cohesion and low coupling in the software development process encourage modularization, which is reflected in the network structure [117]. As an application in fault-proneness prediction, many techniques turn topological characteristics and properties such as complexity and evolution rules into quantitative indicators. Among them, network analysis-based metrics are derived from network features such as closeness or betweenness centrality [75], [98]. Furthermore, the dependencies between software modules can also serve as a good gauge of fault-proneness in the system [87], [122], [145].
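As a small illustration of such network analysis-based metrics, the sketch below scores a hypothetical module-dependency graph with closeness and betweenness centrality (the module names and the networkx dependency are our assumptions):

```python
import networkx as nx

# A small, hypothetical module-dependency graph: an edge A -> B means
# "module A depends on (calls into) module B".
deps = nx.DiGraph([
    ("ui", "core"), ("ui", "net"), ("net", "core"),
    ("core", "utils"), ("net", "utils"), ("db", "core"),
])

# Centrality features of the kind used as fault-proneness indicators.
closeness = nx.closeness_centrality(deps)
betweenness = nx.betweenness_centrality(deps)

for module in deps.nodes:
    print(f"{module:6s} closeness={closeness[module]:.2f} "
          f"betweenness={betweenness[module]:.2f}")
```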
Although a complex network is a practical and relatively intuitive technique for describing multifaceted relationships, the structure can grow enormous when the target model is built from a large number of real-world objects, which holds researchers back from investigating the network as a whole. To tackle this issue, researchers must identify specific types of structures that carry important local properties of the network. The Network Motif [110] is one such property: motifs are recurrent, statistically significant patterns and have been broadly implemented in domains such as biochemistry, neurobiology, and engineering. Through these practices, various search algorithms for certain types of network motifs and their applications have been applied in the software domain as well [67], [69], [141].

E. NETWORK WITH HUMAN ASPECTS
Although the software network can be helpful for fault-proneness prediction, the influence of human aspects is often overlooked (as stated in Section II-C). To further improve the accuracy of prediction results, several approaches try to incorporate human aspects into the network prediction model. One simple tactic is to express developer contributions with a Contribution Network (also called a Developer-Module Network in some studies) [10], [16], [22], [97]. In the contribution network, a contribution edge always refers to a commit on a module made by a developer. The weight of an edge is defined as the number of commits a developer made to the specific module. FIGURE 2 depicts an example of a contribution network. Other studies suggest that module dependency should be used along with the contribution history of developers in fault-proneness prediction; the resulting network is called a Socio-Technical Network, a hybrid of the developer contribution network and a network of module dependencies. There are two types of edges: the contribution edges are comparable to those in the contribution network but are now bidirectional, and the module edges always have a weight of one. FIGURE 3 depicts an example of a Socio-Technical Network. Yet another idea is to describe developers' collaborations by assigning edges between developers who have worked on common modules. This network is called a Developer Collaboration Network and involves developers only [66], [129]; FIGURE 4 depicts an example with four developers. In the prediction model, metrics can be generated by applying Social Network Analysis, since the edges in the network reflect social relations. A Tri-Relation Network (TRN) is the ultimate form of network involving the human aspect [65]. The network combines three different kinds of relations, i.e., developer contribution, module dependency, and developer collaboration. With those in hand, a more comprehensive view of the activities in software development can be described, and Social Network Analysis can be applied to build a prediction model. Despite the fact that all these approaches established the foundation of human-aspect fault-proneness prediction models, they fail to capture developers' deeper and indirect dependencies. Take the network in FIGURE 6 as an example. The developer relationship q is evident because both developer a and developer b have contributions to Module A.
However, since there is a function call from Module A to Module B, the dependency r between developer a and developer c, who has contributions to Module B, should also be considered, but it is often ignored in previous studies. Our approach aims to cover such indirect dependencies between developers and hence portray a more comprehensive picture of developer relationships and activities.

III. COMPOSITE DEVELOPER-MODULE NETWORKS
In this section, the elements of the Composite Developer-Module Networks used in our research are explained. The motivation for the Composite Developer-Module Network is to include deeper and indirect dependencies between developers and software modules, so that one can better analyze the full picture of development activities. The notations used in the following section are listed in TABLE 1: e(F,G) denotes an edge representing a function call from F to G; g(D,F) denotes an edge representing that developer D has contributions on F; and name(X) denotes the name label of object X.

A. DEFINITION OF COMPOSITE DEVELOPER-MODULE NETWORK
A Composite Developer-Module Network for a specific release version R is defined as N(R) = (V(R), E(R)), where V(R) is the set of all vertices, including all developers active before release version R, denoted Vd(R), and all functions in software modules of release version R, denoted Vf(R). This implies that V(R) = Vd(R) ∪ Vf(R). E(R) represents all edges in the network, including Ec(R), the links from developers to the functions they contributed to in a software module before release version R (FIGURE 7), and Ef(R), the links between pairs of functions in the software at release version R (FIGURE 8). This implies that E(R) = Ec(R) ∪ Ef(R).

Corollary 1. A Composite Developer-Module Network is a connected graph. If there exists a vertex v ∈ V(R), then (1) if v ∈ Vd(R), v denotes a developer who has at least one contribution to a software module; (2) if v ∈ Vf(R), v denotes a function which has either some contributing developer or a link with another function. Thus, all existing v in V(R) are connected.

A vertex vF ∈ Vf(R) denoting a function F that exists in the software modules has attributes defined as vF = (name(F), path(M), B), where name(F) denotes the name label of the function F, path(M) denotes the path of the software module where the function is located, and B ∈ {true, false} indicates whether the function contains a bug. A function containing a bug is denoted in red. For example, for a function "atoi" in module "stdlib.h" which contains a bug, the vertex of the function is noted as v_atoi = ("atoi", "stdlib.h", true).

An edge e(F,G) ∈ Ef(R) is defined as a directed edge from F to G when function F has a function call on function G. The attributes of e(F,G) are defined as e(F,G) = (name(F), name(G)), where name(F) denotes the name label of function F and name(G) denotes the name label of function G. Because the edge means "F has a function call on G," the direction is F→G. FIGURE 10 shows the example of e(main,atoi).

Corollary 2. The link between two vertices in a Module Network is weightless. In case multiple function calls between two specific functions exist, i.e., if a function F calls function G more than once in the source code, we simply define a link without weight to eliminate confusion.

An edge g(D,F) ∈ Ec(R) has attributes defined as g(D,F) = (name(D), name(F), T, I), where name(D) denotes the name label of the developer D, name(F) denotes the name label of function F, T denotes the commit time of the contribution (the last time is selected if D has multiple commits), and I ∈ {true, false} represents whether this commit introduced a bug. Because the edge is defined as "D has a contribution on F," the direction is D→F. As an example of such an edge, for a developer "a" who made a commit on function "atoi": g(a,atoi) = ("a", "atoi", T, I).
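A minimal sketch of this construction (the networkx representation and all identifiers are our own illustrative choices; the attribute tuples follow the definitions above):

```python
import networkx as nx

N = nx.DiGraph()  # Composite Developer-Module Network N(R)

# Function vertices vF = (name(F), path(M), B): B marks a buggy function.
N.add_node("main", kind="function", path="main.c", bug=False)
N.add_node("atoi", kind="function", path="stdlib.h", bug=True)

# Developer vertices.
N.add_node("a", kind="developer")

# Module edge e(F, G): F has a function call on G (unweighted, F -> G).
N.add_edge("main", "atoi", kind="call")

# Contribution edge g(D, F) = (name(D), name(F), T, I):
# T is the (last) commit time, I marks a bug-introducing commit.
N.add_edge("a", "atoi", kind="contribution",
           time="2021-03-01", introduced_bug=True)

# Per Corollary 6, the network is the union of the Developer Network
# (contribution edges) and the Module Network (call edges).
calls = [(u, v) for u, v, k in N.edges(data="kind") if k == "call"]
contribs = [(u, v) for u, v, k in N.edges(data="kind") if k == "contribution"]
print("call edges:", calls)
print("contribution edges:", contribs)
```

Keeping both edge kinds in a single directed graph, distinguished only by a label, makes the union in Corollary 6 trivial to express.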
With the combination of the Developer Network and the Module Network, we can build a comprehensive view of a network that includes both (1) function-function dependencies and (2) developer-function dependencies. FIGURE 14 shows an example of a Composite Developer-Module Network.

Corollary 6. A Composite Developer-Module Network is the union of the Developer Network and the Module Network. The definition of the Developer Network is Nd(R) = (V(R), Ec(R)), and that of the Module Network is Nm(R) = (Vf(R), Ef(R)). Since Vf(R) ⊂ V(R) and E(R) = Ec(R) ∪ Ef(R), Nd(R) ∪ Nm(R) = (V(R), E(R)) = N(R).

B. SUB-STRUCTURE OF COMPOSITE DEVELOPER-MODULE NETWORK
A sub-structure Nsub(R) of the Composite Developer-Module Network is a sub-structure that can be sliced out as an independent network containing specific vertices Vsub(R) ⊂ V(R) and the edges connecting those vertices, Esub(R) ⊂ E(R). Each category of patterns is then examined for its relationship with software bugs. FIGURE 15 shows an example of such sub-structure retrieval. In our research, we analyze specific sub-structures of the network and identify the most fault-prone patterns by calculating the relationship between the sub-structures and the bug-introduction log derived from several real-world projects. There are many possible sub-structures that can be pruned from a Composite Developer-Module Network. To compose an objective that is useful in our study, a sub-structure must obey the following rules:
1) It consists of at least one developer vertex u'D ∈ Vd-sub(R) and one function vertex v'F ∈ Vf-sub(R).
2) Every developer vertex in Vd-sub(R) must be connected to at least one function vertex in Vf-sub(R) by an edge in E(R), and vice versa.
3) All edges in E(R) whose endpoints are both in Vsub(R) are in Esub(R), i.e., Esub(R) ⊂ E(R) and Esub(R) = {e' | e' ∈ Vsub(R) × Vsub(R) and e' ∈ E(R)}.
4) The sub-structure must be a connected graph.
With respect to the above rules, we can break down two simple forms of sub-structure consisting of three vertices.

IV. CASE STUDIES AND EXPERIMENT RESULTS
A. EXPERIMENT METHODOLOGY
In this section, we discuss the research methodology in more detail. The purpose of our study is to answer the question:
• R (are different Sub-Structures in the CDMN a factor of fault-proneness?): Does the structure of developers/modules in a sub-graph of a software network affect the introduction of bugs in a software project?
The answer to this question will determine whether our proposed Composite Developer-Module Network is applicable to building a software fault-proneness analysis model and thus can be used in a prediction model for software fault-proneness in the future. We use four open-source projects based on C/C++, namely gedit [37], Nagios Core [84], NGINX [86], and redis [105], in our study. gedit is the default text editor in the GNOME desktop environment with a graphical interface. Nagios Core is a monitoring system for networks and infrastructure that has alert features. NGINX is a web server that takes an asynchronous, event-driven approach to handling requests. redis is an in-memory database system implementing distributed key-value storage. We derived the networks into several categories of sub-structures in the study. The number of each sub-structure and the total faulty commits within the structure for all the releases of the target software project are collected.
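As a rough sketch of how the simplest D-Fk categories could be tallied on such a network (the category naming scheme and the helper below are our own; only the idea of counting developer-to-function sub-structures and their bug-introducing commits comes from the text):

```python
import networkx as nx
from collections import Counter

def substructure_counts(N):
    """Tally simple D-Fk sub-structures (one developer connected to k
    functions) and the bug-introducing commits inside each category."""
    counts, bugs = Counter(), Counter()
    for node, data in N.nodes(data=True):
        if data.get("kind") != "developer":
            continue
        contribs = [(node, f) for _, f, k in N.out_edges(node, data="kind")
                    if k == "contribution"]
        if not contribs:
            continue
        category = f"D-F{len(contribs)}"
        counts[category] += 1
        bugs[category] += sum(
            1 for u, f in contribs if N.edges[u, f].get("introduced_bug"))
    return counts, bugs

# Reusing the toy network N built in the earlier sketch:
# counts, bugs = substructure_counts(N)
# print(counts, bugs)
```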
In addition, the bugs that exist in each category of sub-structure (i.e., Total Bugs per Structure) are calculated. Bug numbers corresponding to the number of developers, functions, and points (developers and functions) are also calculated to evaluate the impact of these variables. To describe each Sub-Structure in text format, we use a naming terminology (e.g., 4-D-F3). All the categories of Sub-Structures used in this study, and to be used in the future, are listed in the table in the Appendix. We chose the 13 categories among them for which a sufficient amount of data could be collected in this study.

B. RESULTS
This section presents the collected data and analysis from the four projects in our study. For the integrity of our analysis, we leave the labels on the tables, though they are not considered in the data analysis. In the following data sets, six metrics are calculated to emphasize different perspectives on each structure, including:
• Total Bugs per Structure: the total bug-introduction commits per structure over all releases of the whole software project.
TABLE 3 shows the data for each Sub-Structure we found in the gedit project across 314 released versions. Generally, fault-proneness grows as the complexity of the Sub-Structure increases, except for the most complex category, which may be due to an insufficient amount of data. 12-4-D2(4)-F2(1) is the most fault-prone Sub-Structure in this case. The trend can also be seen in FIGURE 24. TABLE 4 shows the data for Nagios Core. Likewise, the more complex the structure, the more fault-prone it is, especially the 4-D-F3 group, with more than one bug per structure. Another interesting observation is that the more functions one developer, or a group of developers, works on, the greater the chance that a bug will exist; however, the number of developers who work on the same set of functions does not have much effect on the existence of bugs. This may relate to the concept of personal workload, which we consider as one of our following study topics.

C. DATA ANALYSIS
To answer our research question:
• R (are different Sub-Structures in the CDMN a factor of fault-proneness?)
we conducted an analysis of variance (ANOVA) on the average bug-introduction commits per structure to determine whether fault-proneness depends on the Sub-Structure variant and to verify that the result is not random, i.e., that differences in Sub-Structure can affect fault-proneness. The null hypothesis H0 and alternative hypothesis H1 are:
• H0: All the statistical results of the average bug introduction are identical.
• H1: Not all the statistical results of the average bug introduction are identical.
The test results show that, for all four projects, Fstat, the F value calculated from the data, is far greater than Fcritical, the F value for the degrees of freedom at significance level α = 0.05. Therefore, we can safely say that the research data is valid and that Sub-Structures in the CDMN are a factor of fault-proneness.

V. THREATS TO VALIDITY
In this section, we discuss some threats that may affect the validity of the study. First, all the programs we analyzed are based on the C language. Therefore, there is a chance that our results will not be as effective for projects developed in other programming languages.

VI. RELATED WORK
Bird et al. [8], [10] mention that low ownership of certain software modules, i.e., modules that have too many minor contributors, can result in a great possibility of software defects.
The study suggests that developers should communicate and work carefully with experienced contributors on the objects of the desired modification. Ell [33] and Simpson [113] use a Failure Index (F.I.) as an indicator of the possibility that a specific developer pair makes a mistake, which stands as an early instance of applying developer activities to software fault-proneness prediction. Ohira et al. [90] apply social network analysis to developers across different projects, and the results identify that expertise in particular subjects can affect software quality in the development process. Valverde et al. [124], [125] introduce the concept of the network motif into system software. They find that the frequent network motifs can be a consequence of network heterogeneity. On top of that, they propose a duplication-divergence model that can explain the motifs that appear in software evolution. Qian et al. [99] discovered that software networks can be split into three clusters that match well-known super-families of network types. However, since the meaning of network motifs in software networks is still unclear, more studies are needed to support the theory as well as its implementation in software fault-proneness prediction models. Nagappan et al. [83] propose eight metrics that aim to measure the complexity of software development from an organizational point of view, e.g., the ratio of engineers who have left the organization among those who modified the code in some software module. These metrics have proven to be helpful in fault-proneness prediction. In [97], a Developer-Module Network, which includes only the contribution relationship between developers and software modules in its definition, is proposed to build a fault-proneness prediction model. In [145], the authors further extend the model by using Social Network Analysis on the dependency graph of developers. These studies show that centrality with respect to developers' contributions has a high correlation with software failure. On top of that, the technique is further extended by operating it on socio-technical networks [9]. Cataldo et al. [21], [22] propose a socio-technical congruence framework that portrays the coordination patterns of developers. The studies suggest that the resolution time of modification requests is drastically reduced for specific coordination patterns; thus, the impact of working congruence on productivity in the software development process can be significant. Their studies also indicate that logical dependency is related to product dependency and can have more influence on development quality than data dependencies.

VII. CONCLUSION AND FUTURE WORK
In this paper, we propose Composite Developer-Module Networks that feature both software module dependencies and developer-module dependencies. To build the network, we first express all functions of modules and all developers involved in the software project as vertices. Second, two different types of dependencies are defined as edges in the network: the software module dependencies, which are decided by the function calls between modules, and the developer-module dependencies, which are determined by the functions in the software modules that each developer worked on.
After the network is built, we evaluate bug introduction in the sub-structures derived from the Composite Developer-Module Network and identify the most fault-prone one appearing in the network, which suggests the most fault-prone pattern during the software development process. We evaluate four open-source projects, gedit, Nagios Core, NGINX, and redis, by constructing Composite Developer-Module Networks for every release version. Our analysis results show that the distinct Sub-Structures in the Composite Developer-Module Networks are a factor in fault-proneness, and that more complex structures cause more faults in general. For our future work, we plan to further evaluate our research by applying the method to more software projects with a variety of characteristics. More evidence of erroneous patterns can be found through a more comprehensive data set, thus enabling the construction of an accurate and effective fault-proneness prediction model. Furthermore, we plan to apply machine learning techniques to our method to discover more potentially vulnerable sub-structures automatically. Finally, as the goal of our work, we plan to integrate the technique into a practical software development environment and hope to eventually benefit the broader software industry.

APPENDIX
A table of the Sub-Structures analyzed in the study is attached at the end of this paper.
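As a worked illustration of the ANOVA used in the data analysis (Section IV-C), the snippet below compares fabricated per-category averages with scipy; real inputs would be the mined average bug-introduction commits per structure:

```python
from scipy.stats import f_oneway

# Hypothetical average bug-introduction commits per structure, grouped by
# Sub-Structure category (real values would come from the mined repositories).
d_f1 = [0.02, 0.03, 0.01, 0.02]
d_f2 = [0.08, 0.10, 0.07, 0.09]
d_f3 = [0.30, 0.41, 0.35, 0.38]

f_stat, p_value = f_oneway(d_f1, d_f2, d_f3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# Reject H0 (identical means) when F exceeds the critical value,
# i.e., when p < 0.05 at significance level alpha = 0.05.
```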
Neural NILM: Deep Neural Networks Applied to Energy Disaggregation

Energy disaggregation estimates appliance-by-appliance electricity consumption from a single meter that measures the whole home's electricity demand. Recently, deep neural networks have driven remarkable improvements in classification performance in neighbouring machine learning fields such as image classification and automatic speech recognition. In this paper, we adapt three deep neural network architectures to energy disaggregation: 1) a form of recurrent neural network called 'long short-term memory' (LSTM); 2) denoising autoencoders; and 3) a network which regresses the start time, end time and average power demand of each appliance activation. We test the performance of these algorithms on real aggregate power data from five appliances against seven metrics. Tests are performed against houses seen during training and houses not seen during training. We find that all three neural nets achieve better F1 scores (averaged over all five appliances) than either combinatorial optimisation or factorial hidden Markov models.

INTRODUCTION
Energy disaggregation (also called non-intrusive load monitoring or NILM) is a computational technique for estimating the power demand of individual appliances from a meter which measures the combined usage of multiple appliances. One use-case is the production of itemised electricity bills from a single, whole-home smart meter. The ultimate aim might be to help users reduce their energy consumption; or to help operators to manage the grid; or to identify faulty appliances. Research on energy disaggregation started with the seminal work of George Hart [1, 2] in the mid-1980s. As with many machine learning tasks, it is necessary to consider which features to extract from the data. Hart described a 'signature taxonomy' of features (Figure 1) and, according to [2], his earliest work in 1984 describes experiments on extracting more detailed features. However, Hart decided to focus on extracting only transitions between steady states. Many NILM algorithms designed for low frequency data (1 Hz or slower) follow Hart's lead and only extract a small number of features. In high frequency NILM (where we might be sampling at kHz or even MHz), there are numerous examples in the literature of manually engineered feature extractors (e.g. [3, 4]). Humans can learn to detect appliances in aggregate data by eye, especially appliances with feature-rich signatures such as the washing machine signature shown in Figure 2. Humans almost certainly make use of a variety of features such as the rapid on-off cycling of the motor (which produces the rapid ∼200 watt oscillations), the ramps towards the end as the washer starts to rapidly spin the clothes, etc. We could consider hand-engineering feature extractors for these rich features. But this would be time consuming and the resulting feature detectors may not be robust to noise and artefacts. Two key research questions emerge: Could an algorithm automatically learn to detect these features? Can we learn anything from neighbouring machine learning fields such as image classification? Before 2012, the dominant approach to extracting features for image classification was to use hand-engineered feature detectors such as the scale-invariant feature transform (SIFT) [5] and the difference of Gaussians (DoG).
Then, in 2012, Krizhevsky et al.'s winning algorithm [6] in the ImageNet Large Scale Visual Recognition Challenge achieved a substantially lower error score (15%) than the second-best approach (26%). Krizhevsky et al.'s approach did not use hand-engineered feature detectors. Instead, Krizhevsky et al. used a deep neural network which automatically learnt to extract a hierarchy of features from the raw image. Deep learning is now a dominant approach not only in image classification but also in fields such as automatic speech recognition [7], machine translation [8], even learning to play computer games from scratch [9]! In this paper, we investigate whether deep neural nets can be applied to energy disaggregation. The use of 'small' neural nets on NILM dates back at least to Roos et al. 1994 [10] (although that paper was just a proposal) and continued with [11, 12, 13, 14], but these small nets do not appear to learn a hierarchy of feature detectors. A big breakthrough in image classification came when the compute power (courtesy of GPUs) became available to train deep neural networks on large amounts of data. In the present research, we want to see if deep neural nets can deliver good performance on energy disaggregation. Our main contribution is to adapt three deep neural network architectures to NILM. We compare two benchmark disaggregation algorithms (combinatorial optimisation and factorial hidden Markov model) to the disaggregation performance of our three deep neural nets using seven metrics. We also examine how well our neural nets generalise to appliances in houses not seen during training because, ultimately, when NILM is used 'in the field' we very rarely have ground truth appliance data for the houses for which we want to disaggregate. So it is essential that NILM algorithms can generalise to unseen houses. The paper is structured as follows: In Section 2 we provide a very brief introduction to artificial neural nets. In Section 3 we describe how we prepare the training data for our nets and how we 'augment' the training data by synthesising some aggregate data. In Section 4 we describe how we adapted three neural net architectures to NILM. In Section 5 we describe how we do disaggregation with our nets. In Section 6 we present the disaggregation results of our three neural nets and two benchmark NILM algorithms. Then, in Section 7 we discuss the results and finally, in Section 8 we offer our conclusions and describe some possible future directions for research.

INTRODUCTION TO NEURAL NETS
Artificial neural networks (ANNs) consist of a network where the nodes are artificial neurons and the edges allow information from one neuron to pass to another neuron (or the same neuron in a future time step). Neurons are typically arranged into layers such that each neuron in layer l connects to every neuron in layer l + 1. Connections are weighted and it is through modification of these weights that ANNs learn. ANNs must always have an input layer and an output layer. Any layers in between are called hidden layers. The forward pass of an ANN is where information flows from the input layer, through any hidden layers, to the output. Learning (updating the weights) happens during the backwards pass.

Forwards pass
Each artificial neuron calculates a weighted sum of its inputs, adds a learnt bias and passes this sum through an activation function. Consider a neuron which receives I inputs. The value of each input is represented by input vector x.
The weight on the connection from input i to neuron h is denoted by w_ih (so w is the 'weights matrix'). The weighted sum (also called the 'network input') of the inputs into neuron h can be written a_h = Σ_{i=1}^{I} x_i w_ih. The network input a_h is then passed through an activation function θ to produce the neuron's final output b_h, where b_h = θ(a_h). In this paper, we use activation functions including linear (θ(x) = x), rectified linear (θ(x) = max(0, x)) and hyperbolic tangent (θ(x) = tanh(x)). Multiple nonlinear hidden layers can be used to re-represent the input data (hopefully by learning a hierarchy of feature detectors), which gives deep nonlinear networks a great deal of expressive power [15, 16].

Backwards pass
The basic idea of the backwards pass is to first do a forwards pass through the entire network to get the network's output for a specific network input; then compute the error of the output relative to the target (in all our experiments we use the mean squared error (MSE) as the objective function); then modify the network weights to try to reduce the error. In practice, the forward pass is often computed over a batch of randomly selected input vectors. In our work, we use a batch size of 64 sequences for all but the largest recurrent neural network (RNN) experiments. In our largest RNNs we use a batch size of 16 (to allow the network to fit into the 3 GB of RAM on our GPU). How do we modify each weight to reduce the error? It would be computationally intractable to enumerate the entire error surface. MSE gives a smooth error surface and the activation functions are differentiable, hence we can use gradient descent. The first step is to compute the gradient of the error surface at the position for the current batch by calculating the derivative of the objective function with respect to each weight. Then we modify each weight by adding the gradient multiplied by a 'learning rate' scalar parameter. To efficiently compute the gradient (in O(W) time) we use the backpropagation algorithm [17, 18, 19]. In all our experiments we use stochastic gradient descent (SGD) with Nesterov momentum of 0.9.

Convolutional neural nets
Consider the task of identifying objects in a photograph. No matter if we hand-engineer feature detectors or learn feature detectors from the data, it turns out that useful 'low level' features concern small patches of the image and include features such as edges of different orientations, corners, blobs etc. To extract these features, we want to build a small number of feature detectors (one for horizontal lines, one for blobs etc.) with small receptive fields (overlapping subregions of the input image) and slide these feature detectors across the entire image. The intuition behind convolutional neural nets (CNNs) [20, 21, 22] is that they build a small number of filters, each with a small receptive field, and these filters are duplicated (with shared weights) across the entire input. Similarly to computer vision tasks, in time series problems we often want to extract a small number of low level features with small receptive fields across the entire input. All of our nets use at least one one-dimensional convolutional layer at the input.

TRAINING DATA
Deep neural nets need a lot of training data because they have a large number of trainable parameters (the network weights and biases). The nets described in this paper have between 1 million and 150 million trainable parameters. Large training datasets are important but are not the end of the story.
It is also common practice in deep learning to increase the effective size of the training set by duplicating the training data many times and applying realistic transformations to each copy. For example, in image classification, we might flip the image horizontally; or apply slight affine transformations; or crop slightly into the source image. A related approach to creating a large training dataset is to generate simulated data. For example, Google DeepMind train their algorithms on computer games because they can generate an effectively infinite amount of training data. Realistic synthetic speech audio data or natural images are harder to produce. In energy disaggregation, we have the advantage that generating effectively infinite amounts of synthetic aggregate data is relatively easy by randomly combining real appliance activations. (We define an 'appliance activation' to be the power drawn by a single appliance over one complete cycle of that appliance. For example, Figure 2 shows a single activation for a washing machine.) We trained our nets on both synthetic aggregate data and real aggregate data in a 50:50 ratio. We found that synthetic data acts as a regulariser. In other words, training on a mix of synthetic and real aggregate data rather than just real data appears to improve the net's ability to generalise to unseen houses. For validation and testing of the nets we use only real data (not synthetic). We used UK-DALE [23] as our source dataset. Each submeter in UK-DALE samples once every 6 seconds. All houses record aggregate apparent mains power once every 6 seconds. Houses 1, 2 and 5 also record active and reactive mains power once a second. In these houses, we downsampled the 1 second active mains power to 6 seconds to align with the submetered data and used this as the real aggregate data from these houses. Any gaps shorter than 3 minutes are assumed to be due to RF issues and so are filled by forward-filling. Any gaps longer than 3 minutes are assumed to be due to the appliance and meter being switched off and so are filled with zeros. We manually checked a random selection of appliance activations from every house. The UK-DALE metadata shows that House 4's microwave and washing machine share a single meter (a fact that we manually verified) and hence these appliances from House 4 are not used in our training data. We train one network per target appliance. The target (i.e. the desired output of the net) is the power demand of the target appliance. The input to every net we describe in this paper is a window of aggregate power demand. The window width is decided on an appliance-by-appliance basis and varies from 128 samples (13 minutes) for the kettle to 1536 samples (2.5 hours) for the dish washer. We found that increasing the size of the window would hurt disaggregation performance for short-duration appliances (for example, using a sequence length of 1024 for the fridge resulted in the autoencoder (AE) failing to learn anything useful and the 'rectangles' net achieving an F1 score of 0.68; reducing the sequence length to 512 allowed the AE to get an F1 score of 0.87 and the 'rectangles' net a score of 0.82). On the other hand, it is important to ensure that the window width is long enough to capture the majority of the appliance activations. For each house, we reserved the last week of data for testing and used the rest of the data for training. The number of appliance training activations is shown in Table 1 and the number of testing activations is shown in Table 2.
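A minimal pandas sketch of the resampling and gap-filling rules described above (function and variable names are ours; the 6-second rate and 3-minute threshold come from the text):

```python
import numpy as np
import pandas as pd

def clean_mains(series_1s: pd.Series) -> pd.Series:
    """Downsample 1 s active mains power to 6 s and fill gaps:
    gaps < 3 min are forward-filled (assumed RF dropouts);
    longer gaps are zero-filled (assumed meter switched off)."""
    power_6s = series_1s.resample("6S").mean()
    # Forward-fill up to 3 minutes = 30 samples at 6 s.
    filled = power_6s.ffill(limit=30)
    # Anything still missing belongs to a gap longer than 3 minutes.
    return filled.fillna(0.0)

# Example with a fabricated one-hour mains signal containing two gaps:
idx = pd.date_range("2014-01-01", periods=3600, freq="S")
mains = pd.Series(np.random.uniform(100, 3000, size=3600), index=idx)
mains.iloc[600:660] = np.nan     # 1-minute gap -> forward-filled
mains.iloc[1800:2400] = np.nan   # 10-minute gap -> zero-filled
print(clean_mains(mains).head())
```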
The specific houses used for training and testing are shown in Table 3.

Extract activations
Appliance activations are extracted using NILMTK's [24] Electric.get_activations() method. For simple appliances such as toasters, we extract activations by finding strictly consecutive samples above some threshold power. We then throw away any activations shorter than some threshold duration (to ignore spurious spikes). For more complex appliances such as washing machines, whose power demand can drop below threshold for short periods during a cycle, NILMTK ignores short periods of sub-threshold power demand.

Select windows of real aggregate data
First we locate all the activations of the target appliance in the home's submeter data. Then, for each training example, the code decides with 50% probability whether this example should include the target appliance or not. If the code decides not to include the target appliance then it finds a random window of aggregate data in which there are no activations of the target appliance. Otherwise, the code randomly selects a target appliance activation and randomly positions this activation within the window of data that will be shown to the net as the target (with the constraint that the activation must be captured completely in the window of data shown to the net, unless the window is too short to contain the entire activation). The corresponding time window of real aggregate data is also loaded and shown to the net as its input. If other activations of the target appliance happen to appear in the aggregate data then these are not included in the target sequence; the net is trained to focus on the first complete target appliance activation in the aggregate data.

Synthetic aggregate data
To create synthetic aggregate data we start by extracting a set of appliance activations for five appliances across all training houses: kettle, washing machine, dish washer, microwave and fridge. To create a single sequence of synthetic data, we start with two vectors of zeros: one vector will become the input to the net; the other will become the target. The length of each vector defines the 'window width' of data that the network sees. We go through the five appliance classes and decide whether or not to add an activation of that class to the training sequence. There is a 50% chance that the target appliance will appear in the sequence and a 25% chance for each other 'distractor' appliance. For each selected appliance class, we randomly select an appliance activation and then randomly pick where to add that activation on the input vector. Distractor appliances can appear anywhere in the sequence (even if this means that only part of the activation will be included in the sequence). The target appliance activation must be completely contained within the sequence (unless it is too large to fit). Of course, this relatively naïve approach to synthesising aggregate data ignores a lot of the structure that appears in real aggregate data. For example, the kettle and toaster might often appear within a few minutes of each other in real data, but our simple 'simulator' is completely unaware of this sort of structure. We expect that a more realistic simulator might increase the performance of deep neural nets on energy disaggregation.

Implementation of data processing
All our code is written in Python and we make use of Pandas, NumPy and NILMTK for data preparation.
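The synthetic-data recipe from the previous subsection reduces to a short routine; here is a simplified, self-contained sketch (the activation libraries are fabricated arrays and edge-clipping is handled crudely; the 50%/25% probabilities and containment rules follow the text):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_synthetic_example(window, target_cls, activations):
    """Build one (input, target) pair of length `window`.
    `activations` maps appliance class -> list of 1-D power arrays.
    The target appliance appears with p=0.5, each distractor with p=0.25."""
    net_input = np.zeros(window)
    target = np.zeros(window)
    for cls, acts in activations.items():
        p = 0.5 if cls == target_cls else 0.25
        if rng.random() >= p:
            continue
        act = acts[rng.integers(len(acts))]
        if cls == target_cls:
            # Target activation must fit completely (if it can).
            start = rng.integers(max(window - len(act), 0) + 1)
            end = min(start + len(act), window)
            target[start:end] = act[:end - start]
        else:
            # Distractors may be partially clipped at either edge.
            start = rng.integers(-len(act) + 1, window)
            lo, hi = max(start, 0), min(start + len(act), window)
            net_input[lo:hi] += act[lo - start:hi - start]
    net_input += target
    return net_input, target

# Fabricated activation libraries (watts):
acts = {"kettle": [np.full(20, 2500.0)], "fridge": [np.full(60, 90.0)]}
x, y = make_synthetic_example(window=128, target_cls="kettle", activations=acts)
```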
Each network receives data in a mini-batch of 64 sequences (except for the large RNN experiments, in which case we use a batch size of 16 sequences). The code is multi-threaded so the CPU can be busy preparing one batch of data on the fly whilst the GPU is busy training on the previous batch.

Standardisation
In general, neural nets learn most efficiently if the input data has zero mean. First, the mean of each sequence is subtracted from the sequence to give each sequence a mean of zero. Every input sequence is divided by the standard deviation of a random sample of the training set. We do not divide each sequence by its own standard deviation because that would change the scaling, and the scaling is likely to be important for NILM. Forcing each sequence to have zero mean throws away information that NILM algorithms such as combinatorial optimisation and factorial hidden Markov models rely on. We have done some preliminary experiments and found that neural nets appear to be able to generalise better if we independently centre each sequence. But there are likely to be ways to have the best of both worlds: i.e. to give the network information about the absolute power whilst also allowing the network to generalise well. One big advantage of training our nets on sequences which have been independently centred is that our nets do not need to consider vampire (always on) loads. Targets are divided by a hand-coded 'maximum power demand' for each appliance to put the target power demand into the range [0, 1].

NEURAL NETWORK ARCHITECTURES
In this section we describe how we adapted three different neural net architectures to do NILM.

Recurrent Neural Networks
In Section 2 we described feed forward neural networks which map from a single input vector to a single output vector. When the network is shown a second input vector, it has no memory of the previous input. Recurrent neural networks (RNNs) allow cycles in the network graph such that the output from neuron i in layer l at time step t is fed via weighted connections to every neuron in layer l (including neuron i) at time step t + 1. This allows RNNs, in principle, to map from the entire history of the inputs to an output vector. This makes RNNs especially well suited to sequential data. In our work, we train RNNs using backpropagation through time (BPTT) [25]. In practice, RNNs can suffer from the 'vanishing gradient' problem where gradient information disappears or explodes as it is propagated back through time. This can limit an RNN's memory. One solution to this problem is the 'long short-term memory' (LSTM) architecture [26] which uses a 'memory cell' with a gated input, gated output and gated feedback loop. The intuition behind LSTM is that it is a differentiable latch (the fundamental unit of a digital computer's RAM). LSTMs have been used with success on a wide variety of sequence tasks including automatic speech recognition [7, 27] and machine translation [8]. An additional enhancement to RNNs is to use bidirectional layers. In a bidirectional RNN, there are effectively two parallel RNNs: one reads the input sequence forwards and the other reads the input sequence backwards. The outputs from the forwards and backwards halves of the network are combined either by concatenating them or by doing an element-wise sum (we experimented with both and settled on concatenation, although element-wise sum appeared to work almost as well and is computationally cheaper).
We experimented with both 'vanilla' RNNs and LSTMs and settled on an architecture for energy disaggregation in which, at each time step, the network sees a single sample of aggregate power data and outputs a single sample of power data for the target appliance. In principle, the convolutional layer shouldn't be necessary (because the LSTMs should be able to remember all the context). But we found the addition of a convolutional layer at the start to slightly increase performance (the conv layer convolves over the time axis). We also experimented with adding a conv layer between the two LSTM layers with a stride > 1 to implement hierarchical subsampling [28]. This showed promise but we did not use it for our final experiments. On the backwards pass, we clip the gradient at [-10, 10] as per Alex Graves in [29]. To speed up computation, we propagate the gradient backwards a maximum of 500 time steps. Figure 3 shows an example output of our LSTM network.

Denoising Autoencoders
In this section, we frame energy disaggregation as a 'denoising' task. Typical denoising tasks include removing grain from an old photograph; or removing reverb from an audio recording; or even in-filling a masked part of an image. Energy disaggregation can be viewed as an attempt to recover the 'clean' power demand signal of the target appliance from the background 'noise' produced by the other appliances. A successful neural network architecture for denoising tasks is the 'denoising autoencoder'. An autoencoder (AE) is simply a network which tries to reconstruct the input. Described like this, AEs might not sound very useful! The key is that AEs first encode the input to a compact vector representation (in the 'code layer') and then decode to reconstruct the input. The simplest way of forcing the network to discover a compact representation of the data is to have a code layer with fewer dimensions than the input. In this case, the AE is doing dimensionality reduction. Indeed, a linear AE with a single hidden layer is almost equivalent to PCA. But AEs can be deep and nonlinear. A denoising autoencoder (dAE) [30] is an autoencoder which attempts to reconstruct a clean target from a noisy input. dAEs are typically trained by artificially corrupting a signal before it goes into the net's input, and using the clean signal as the net's target. In NILM, we consider the corruption to be the power demand from the other appliances. So we do not add noise artificially. Instead we use the aggregate power demand as the (noisy) input to the net and ask the net to reconstruct the clean power demand of the target appliance. The first and last layers of our NILM dAEs are 1D convolutional layers. We use convolutional layers because we want the network to learn low level feature detectors which are applied equally across the entire input window (for example, a step change of 1000 watts might be a useful feature to extract, no matter where it is found in the input). The aim is to provide some invariance to where exactly the activation is positioned within the input window. The last layer does a 'deconvolution'. In the exact architecture, layer 4 is the middle, code layer. The entire dAE is trained end-to-end in one go (we do not do layer-wise pre-training as we found it not to add to the performance). We do not tie the weights as this also appears not to enhance performance. An example output of our NILM dAE is shown in Figure 4.
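For illustration, here is a minimal dAE of this shape written in PyTorch rather than the paper's Lasagne (the layer sizes and kernel widths are our own placeholders, not the paper's exact architecture): a 1D convolution at the input, dense layers down to a small code layer, and a transposed convolution at the output.

```python
import torch
import torch.nn as nn

class NILMDenoisingAE(nn.Module):
    """Illustrative dAE: conv -> dense -> code -> dense -> deconv."""
    def __init__(self, seq_len=512, n_filters=8, code_size=128):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, kernel_size=4, padding="same")
        flat = n_filters * seq_len
        self.encode = nn.Sequential(
            nn.Flatten(), nn.Linear(flat, flat // 8), nn.ReLU(),
            nn.Linear(flat // 8, code_size), nn.ReLU())   # code layer
        self.decode = nn.Sequential(nn.Linear(code_size, flat), nn.ReLU())
        self.deconv = nn.ConvTranspose1d(n_filters, 1, kernel_size=1)
        self.n_filters, self.seq_len = n_filters, seq_len

    def forward(self, x):                      # x: (batch, 1, seq_len)
        h = self.conv(x)
        z = self.encode(h)
        h = self.decode(z).view(-1, self.n_filters, self.seq_len)
        return self.deconv(h)

net = NILMDenoisingAE()
agg = torch.randn(16, 1, 512)                  # standardised aggregate windows
target = torch.zeros(16, 1, 512)               # clean appliance power (toy)
loss = nn.MSELoss()(net(agg), target)          # MSE objective, as in the paper
loss.backward()
```

In this framing the noisy input is simply the standardised aggregate window and the clean target is the appliance's own power, so no artificial corruption step is needed.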
Regress Start Time, End Time & Power
Many applications of energy disaggregation do not require a detailed second-by-second reconstruction of the appliance power demand. Instead, most energy disaggregation use cases require, for each appliance activation, the identification of the start time, end time and energy consumed. In other words, we want to draw a rectangle around each appliance activation in the aggregate data where the left side of the rectangle is the start time, the right side is the end time and the height is the average power demand of the appliance between the start and end times. Deep neural networks have been used with great success on related tasks. For example, Nouri used deep neural networks to estimate the 2D location of 'facial keypoints' in images of faces [31]. Example 'keypoints' are 'left eye centre' or 'mouth centre top lip'. The input to Nouri's neural net is the raw image of a face. The output of the network is a set of x, y coordinates for each keypoint. Our idea was to train a neural network to estimate three scalar, real-valued outputs: the start time, the end time and the mean power demand of the first appliance activation to appear in the aggregate power signal. If there is no target appliance in the aggregate data then all three outputs should be zero. If there is more than one activation in the aggregate signal then the network should ignore all but the first activation. All outputs are in the range [0, 1]. The start and end times are encoded as a proportion of the input's time window. For example, the start of the time window is encoded as 0, the end is encoded as 1 and half way through the time window is encoded as 0.5. For example, consider a scenario where the input window width is 10 minutes and an appliance activation starts 1 minute into the window and ends 1 minute before the end of the window. This activation would be encoded as having a start location of 0.1 and an end location of 0.9. An example output is shown in Figure 5. The three target values for each sequence are calculated during data pre-processing. As for all of our other networks, the network's objective is to minimise the mean squared error. The exact architecture begins with: 1. Input (length determined by appliance duration).

Neural net implementation
We implemented our neural nets in Python using the Lasagne library. Our recurrent neural nets were implemented using the recurrent branch of Colin Raffel's fork of Lasagne. Lasagne is built on top of Theano [32, 33]. We trained our nets on an NVIDIA GTX 780Ti GPU with 3 GB of RAM (but note that Theano also allows code to be run on the CPU without requiring any changes to the user's code). On this GPU, our nets typically took between 1 and 12 hours to train per appliance. We manually defined the number of weight updates to perform during training for each experiment. For the RNNs we performed 10,000 updates, for the denoising autoencoders we performed 100,000 and for the regression networks we performed 300,000 updates. Neither the RNNs nor the AEs appeared to continue learning past this number of updates. The regression networks appear to keep learning no matter how many updates we perform! The nets have a wide variation in the number of trainable parameters. The largest dAE nets range from 1M to 150M parameters (depending on the input size); the RNNs all had 1M parameters and the regression nets varied from 28M to 120M parameters (depending on the input size).
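The rectangle-target encoding described earlier in this section reduces to a little arithmetic; a sketch (the function name, threshold and first-gap heuristic are our own simplifications):

```python
import numpy as np

def rectangle_targets(target_seq, threshold=10.0):
    """Encode the first activation in `target_seq` (watts, one sample per
    time step) as (start, end, mean_power): start/end are proportions of
    the window, and power would later be scaled to [0, 1] by the
    appliance's max power. Returns (0, 0, 0) if the appliance is absent."""
    on = np.flatnonzero(target_seq > threshold)
    if on.size == 0:
        return 0.0, 0.0, 0.0
    start = on[0]
    # End of the first activation = first gap in the on-samples (or the last).
    gaps = np.flatnonzero(np.diff(on) > 1)
    end = on[gaps[0]] if gaps.size else on[-1]
    n = len(target_seq)
    mean_power = float(target_seq[start:end + 1].mean())
    return start / n, (end + 1) / n, mean_power

# 10-minute window at 6 s/sample = 100 samples; activation from minute 1 to 9:
seq = np.zeros(100)
seq[10:90] = 2000.0
print(rectangle_targets(seq))  # ~(0.1, 0.9, 2000.0)
```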
All our network weights were initialised randomly using Lasagne's default initialisation. All of the experiments presented in this paper trained end-to-end from random initialisation (no layer-wise pretraining).

DISAGGREGATION

How do we disaggregate arbitrarily long sequences of aggregate data given that each net has an input window duration of, at most, a few hours? We first pad the beginning and end of the input with zeros. Then we slide the net along the input sequence. As such, the first sequence we show to the network will be all zeros. We then shift the input window STRIDE samples to the right, where STRIDE is a manually defined positive, non-zero integer. If STRIDE is less than the length of the net's input window then the net will see overlapping input sequences. This allows the network to have multiple attempts at processing each appliance activation in the aggregate signal, and on each attempt each activation will be shifted to the left by STRIDE samples. Over the course of disaggregation, the network produces multiple estimated values for each time step because we give the network overlapping segments of the input. For our first two network architectures, we combine the multiple values per time step simply by taking the mean. Combining the output from our third network is a little more complex. We layer every predicted 'appliance rectangle' on top of each other, measure the overlap and normalise the overlap to [0, 1]. This gives a probabilistic output for each appliance's power demand. To convert this to a single vector per appliance, we threshold the probability.

RESULTS

The disaggregation results on an unseen house are shown in Figure 6. The results on houses seen during training are shown in Figure 7. We used the benchmark implementations from NILMTK [24] of the combinatorial optimisation (CO) and factorial hidden Markov model (FHMM) algorithms. On the unseen house (Figure 6), both the denoising autoencoder and the net which regresses the start time, end time and power demand (the 'rectangles' architecture) outperform CO and FHMM on every appliance on F1 score, precision score, proportion of total energy correctly assigned and mean absolute error. The LSTM outperforms CO and FHMM on two-state appliances (kettle, fridge and microwave) but falls behind CO and FHMM on multi-state appliances (dish washer and washing machine). On the houses seen during training (Figure 7), the dAE outperforms CO and FHMM on every appliance on every metric except relative error in total energy. The 'rectangles' architecture outperforms CO and FHMM on every appliance (except the microwave) on F1, precision, accuracy, proportion of total energy correctly assigned and mean absolute error. The metrics are defined as:

relative error in total energy = |Ê − E| / max(E, Ê)   (15)

proportion of total energy correctly assigned = 1 − (Σt Σi |ŷt(i) − yt(i)|) / (2 Σt ȳt)

where E and Ê are the true and estimated total energy, ŷt(i) and yt(i) are the estimated and ground-truth power of appliance i at time t, and ȳt is the aggregate power. The proportion of total energy correctly assigned is taken from [34].

DISCUSSION

It is worth noting that UK-DALE only has a total of five houses, of which one (House 3) only has one appliance that was used in this study (the kettle). For many of these appliances, we trained the nets on only two houses and tested on a third house. Any machine learning algorithm is only able to generalise if given enough variety in the training set. For example, Figure 8 shows the autoencoder disaggregating a dish washer from House 5. House 5's dish washer sometimes has four activations of its heater (the four high peaks in the target trace) but the dish washers in the two training houses (1 and 2) only ever have two peaks.
Hence the autoencoder completely ignores the first two peaks of House 5's dish washer! It would be very interesting to try training the nets across a lot more data. It is also worth noting that the comparison between the architectures is not entirely fair because they have a wide range of trainable parameters. For example, every LSTM we used had 1M parameters whilst the larger dAE and rectangles nets had over 100M parameters (we did try training an LSTM with more parameters but it did not improve performance).

Figure 7: Disaggregation performance on houses seen during training (although the time window used for testing is different to that used for training).

We must also note that the FHMM implementation is not 'state of the art' and neither is it especially tuned. Other FHMM implementations are likely to perform better. We encourage other researchers to download our disaggregation estimates and ground truth data (available from www.doc.ic.ac.uk/~dk3810/neuralnilm) and directly compare against our algorithms!

CONCLUSIONS & FUTURE WORK

We have adapted three neural network architectures to NILM. The denoising autoencoder and the 'rectangles' architectures perform well, especially on unseen houses. However, this work represents just a first step towards adapting the vast number of techniques from the deep learning community to NILM. There is plenty of work still to do, for example:

• Train on more data!
• Combine all three approaches: pre-train a 'rectangles' net on unlabelled data as an autoencoder, then attach an RNN to the output to capture detailed temporal patterns.
• Experiment with more permutations of the nets.
• Experiment with dropout and batch normalisation.
• Try training one large net to do multiple appliances.
• Improve the 'rectangles' method to output multiple states per appliance.
• Try other input features: time of day, day of week, season, temperature etc.
• Build a more sophisticated synthesiser of aggregate data.
• Experiment with ways to give the network information about the absolute power (instead of independently centring each input sequence) whilst also allowing the network to generalise well.
• Try variational autoencoders.
• Generate a probabilistic output (either using the existing 'layering' approach, or mixture density networks, or variational approaches).
• Do fully integrated, multi-appliance disaggregation: use discrete optimisation to find the most likely set of appliances, or an RNN which sees the aggregate data as well as the output of an upstream appliance classifier.

ACKNOWLEDGMENTS

Jack Kelly's PhD is funded by the EPSRC and by Intel via their EU Doctoral Student Fellowship Programme.
A control method to save machine energy in production

Nowadays, energy saving is one of the most talked-about issues in our lives, and it is increasingly important in the manufacturing industry. This research considers the dynamic flexible flow shop scheduling (DFFS) problem, an extended version of the classical flow-shop scheduling problem. A flexible flow shop has multiple stages with multiple machines at each stage for processing multiple products. Previous research on DFFS aimed to achieve just-in-time (JIT) production, i.e., to reduce the difference between the actual completion time and the due date of each job. However, little research has been done on saving the energy consumed by machines in production. To address this need, this paper proposes a method that dynamically turns machines on and off so as to reduce energy consumption while achieving JIT production. The proposed method has been tested in different environments, and the results show that it performs well for both JIT production and energy saving.

Introduction

Energy and environmental issues are among the most talked-about issues of the 21st century. As a result of climate change, regulation of carbon dioxide emissions has been imposed globally, which has put much pressure on the manufacturing industry to reduce energy use (particularly electric energy) [1]. Solar, wind and other new energy sources are being widely used in daily life and in production, and sustainable development is becoming a trend that promotes energy saving. Given the need for energy efficiency and low-carbon production, reducing energy use is a big challenge for energy-intensive enterprises [2]. Therefore, effective energy and utility management is a key factor in enhancing the competitive advantage of organizations and in promoting green and sustainable practices [3]. Since factories are major energy consumers, energy conservation on the shop floor is particularly important. Current studies highlight the importance of energy efficiency for the environmental impact of manufacturing processes. They primarily focus on the evaluation of manufacturing processes based on the operating states and components of machine tools [4], thermodynamics approaches [5], and empirical modelling [6]. Moreover, optimizing cutting parameters for material removal processes is another important energy-saving strategy used in much of this research. Anderberg et al. analyzed the relationship between cutting parameters, machining costs, and energy consumption in a CNC machining environment [7]. Diaz et al. analyzed the effect of material removal rate and workpiece material on cutting power and energy consumption [8]. Mativenga and Rajemi proposed a method for selecting optimum cutting parameters for minimum energy consumption in turning a cylindrical steel billet [9]. These methods enable energy consumption to be taken into account when selecting optimal cutting parameters. At the machine level, this paper presents a method, called the dynamic machine on/off method, which dynamically turns machines on and off when needed in order to save machine energy. Similarly, at the shop floor level, most research related to energy-efficient shop floor scheduling has shown improvements in reducing energy consumption in manufacturing.
Multiobjective scheduling models have been proposed to optimize both makespan and energy consumption [10], and a general multi-objective mixed-integer programming formulation has been presented for the flow shop scheduling problem that considers makespan, peak power demand, and carbon footprint [11]. That formulation treats the operation speed as an independent variable, which can be changed to affect the peak load and energy consumption. Bruzzone et al. [12] presented an energy-efficient scheduling method that modifies the original timetable of the jobs in an FFS in order to reduce the shop floor's peak power demand while accepting some level of tardiness and makespan increase. Just-in-time (JIT) production and the flexible flow shop (FFS) type of manufacturing system are selected as the research object for the following reasons. With the increase in inventory costs, more and more factories aim for JIT production rather than for reducing the maximum completion time of all jobs. The objective of JIT production is to reduce both the inventory costs of jobs completed before their due dates and the delay costs of jobs completed after their due dates. The due date is the date on which the factory expects to deliver the product to the customer. The FFS is a complex manufacturing environment that is a further development of classical flow-shop scheduling: it has multiple stages with multiple machines at each stage for processing multiple products. All jobs must go through all the stages to become finished products, which means that there are many different flow lines (routes) available for each job to go through the system. A previous study [13] concentrated on JIT production for the FFS problem and detailed three distributed-intelligence approaches to achieve the JIT objective. We add the proposed dynamic machine on/off method to the best-performing approach from that study, the stage-to-stage (S2S) feedback approach. This research intends to minimize both the total inventory and delay costs of all jobs and the total energy consumption of all machines. The remainder of this paper is organized as follows. We give the problem description in chapter 2. After that, we present the dynamic machine on/off method proposed in this paper in chapter 3, and give the computational simulation results and the conclusions in chapters 4 and 5, respectively.

Problem description

This paper considers the dynamic FFS problem in which the manufacturing environment consists of multiple stages with multiple machines at each stage. All jobs must go through all stages in sequence in order to become a product. If job j is finished before its due date d_j, it has an earliness E_j. On the other hand, if job j is finished after its due date d_j, it has a tardiness T_j. Let C_j denote the completion time of job j; then the earliness of a job completed before its due date is given as E_j = max(0, d_j − C_j), and the tardiness of a job completed after its due date is given as T_j = max(0, C_j − d_j). Both early jobs and tardy jobs incur penalties, but earliness penalties are always smaller than tardiness penalties. Each stage has a waiting queue before it, with jobs inside the queue processed in an Earliest-Due-Date (EDD) sequence (a minimal sketch of this queue discipline follows below).
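The following is a minimal Python sketch of the EDD queue discipline and the earliness/tardiness definitions above; the job representation is an assumption for illustration.

import heapq

def earliness_tardiness(completion, due):
    """E_j = max(0, d_j - C_j), T_j = max(0, C_j - d_j)."""
    return max(0, due - completion), max(0, completion - due)

class EDDQueue:
    """Waiting queue of a stage: jobs leave in Earliest-Due-Date order."""
    def __init__(self):
        self._heap, self._n = [], 0

    def push(self, job_id, due_date):
        heapq.heappush(self._heap, (due_date, self._n, job_id))  # _n breaks ties
        self._n += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]      # job with the earliest due date

    def __len__(self):
        return len(self._heap)

q = EDDQueue()
q.push("J1", due_date=50); q.push("J2", due_date=30); q.push("J3", due_date=40)
print([q.pop() for _ in range(len(q))])          # ['J2', 'J3', 'J1']
print(earliness_tardiness(completion=45, due=50))  # (5, 0): an early job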
The objective function is equation (1):

minimize Σ_{j=1..J} (α_{p(j)} E_j + β_{p(j)} T_j) + Σ_{m=1..M} (c_proc t_m^proc + c_idle t_m^idle + c_s n_m)   (1)

where p(j) is the product type of job j and J is the total number of jobs, E_j is the earliness of job j and α_{p(j)} is the unit penalty of earliness, T_j is the tardiness of job j and β_{p(j)} is the unit penalty of tardiness, t_m^proc is the sum of the processing times of the jobs operated on machine m (out of M machines), c_proc is the energy cost per unit time for machine m in processing mode, t_m^idle is the sum of the idling time for machine m, c_idle is the energy cost per unit time for machine m in idle mode, c_s is the energy consumed each time a machine is turned on or off, and n_m is the number of times machine m is switched.

Besides, the following conditions are assumed in this paper:
1. One machine can only process one job at a time.
2. The processing of each job is non-preemptive.
3. Machines do not break down.
4. Machines have the same energy cost per unit time when they are in the same mode.
5. The processing time for each product on each machine is different, but fast machines process all types of products faster than slow machines.
6. The waiting queue of each stage is unlimited.

Proposed method

We propose a method that dynamically turns off some machines, without causing jobs to be tardy, in order to save energy, and turns them on again when they are needed to reduce job tardiness. We assume that each machine has three modes: processing mode, idle mode and off mode [14]. Machines are in the processing mode when they are processing jobs and switch to the idle mode automatically after they finish a job. The energy cost per unit time for the processing mode is larger than that for the idle mode. There is also an energy cost for either turning on or turning off a machine. The proposed method aims to reduce the total energy consumption by dynamically turning machines on and off. In the following, we introduce how to dynamically turn off and turn on machines, respectively.

Design of turning off machines

As mentioned before, machines are in the processing mode when they are processing jobs and switch to the idle mode automatically after they finish a job. Since jobs arrive dynamically, the system does not always need all the machines to work when it is not busy. We focus on these idle machines when trying to turn off a machine. The procedure of turning off the machines is shown in figure 1. At each stage, when a machine in the processing mode finishes a job, we use the estimated completion time of each job provided in the S2S feedback approach to check whether any job in the waiting queue would be tardy if we turned off the machine. If no job is found to be tardy, we compare the idle energy consumption with the machine on/off energy consumption; if the idle energy consumption is greater, we compare the average processing time of the last job in the stage's waiting queue with the waiting time of the last job in the queue of the current stage. The average processing time (APT) of a job j at a stage is calculated by equation (2):

APT_j = (1 / M_s) Σ_{m=1..M_s} p_{j,m}   (2)

where M_s is the number of machines of the stage and p_{j,m} is the processing time of job j operated on machine m. The waiting time of a job means the time duration from when this job enters the waiting queue of the current stage until it is processed by a machine. If the former is larger than the latter, the idle machine will be turned off. Each stage repeats this procedure until it finds that at least one condition in figure 1 is not satisfied.

Design of turning on machines

The procedure of turning on the machines is shown in figure 2.
When a new job enters the waiting queue of a stage, if some machines are turned off at the stage and any of the waiting jobs is found to be tardy based on the estimated completion times provided by S2S at the stage, we compare the average processing time of the last job in the waiting queue of the current stage with the waiting time of the last job in the queue of the current stage. After this comparison, we turn on the fastest off machine using the procedure shown in figure 2. Since jobs arrive dynamically, machines are turned off and on dynamically at each stage (both decision rules are condensed in the sketch at the end of this section). Table 1 describes the parameter design of the simulation environment. The number of product types is designed to be 1 or 5, the number of stages 3, 5 or 10, and the total number of machines in the system 5, 10 or 20. Since each factor has more than one design, the number of combinations is equal to 18 (2 × 3 × 3 = 18). We then generate 3 problem instances for each feasible combination. When there are 10 stages, the number of machines must be at least 10, so the 6 instances with only 5 machines in the system are infeasible; the total number of problem instances is therefore 48. In order to evaluate the results in a fair way, each instance has 2200 jobs, of which the first 200 jobs are used to warm up the system to a stable state, so we focus only on the remaining 2000 jobs when collecting the results. These designs are intended to test whether the proposed dynamic machine on/off method works well under all kinds of situations. In addition, we set the energy costs as follows, after referring to a previous paper [9]:

• Energy cost per unit time for a machine in processing mode: 1;
• Energy cost per unit time for a machine in idle mode: 0.4;
• Energy consumption for turning on or turning off a machine: 0.6.

Simulation results

In order to show the effect of the dynamic machine on/off method, Tables 2-4 give the average results of all instances for the different designs. In the tables, Total Cost is the sum of the Power Consumption Cost and the Earliness & Tardiness Cost, i.e., the total penalty including the energy consumption, earliness and tardiness of all jobs. The data in the tables are calculated as R = (V_on/off / V_base) × 100%, where V_on/off is the result calculated using the machine on/off method and V_base is the result without using the method.

Result analysis

As revealed by Tables 2-4, both the energy consumption and the total earliness and tardiness of all jobs improve after we use the dynamic machine on/off method, regardless of the factors or designs, and so does the Total Cost. Table 2 shows that the results become better when P increases from 1 to 5. This is probably because the dynamic machine on/off method can effectively reduce both job earliness and tardiness, so the total cost improves as the number of product types increases. Table 3 shows that when S increases, the results become worse. The reason is that a job needs more time to be processed into a product when the number of stages increases; when new orders arrive constantly, the number of waiting jobs increases rapidly, which leads to an increasing number of tardy jobs. Table 4 shows that the results become better when M increases. This is because there are more machines that can process jobs and, since machines are turned off from slow to fast, jobs have more chances to be assigned to faster machines.
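A condensed sketch of the two decision rules from the "Proposed method" section is given below; the parameter names, the estimated idle duration, and the direction of the APT comparison in the turn-on rule are assumptions made for illustration, and the tardiness flags stand in for the S2S estimated completion times.

COST_IDLE, COST_SWITCH = 0.4, 0.6        # per-unit-time idle cost; on/off energy

def should_turn_off(est_idle_time, any_waiting_job_tardy, apt_last, wait_last):
    """Called when a machine finishes a job and becomes idle."""
    if any_waiting_job_tardy:                      # never cause tardiness
        return False
    if COST_IDLE * est_idle_time <= COST_SWITCH:   # idling must cost more than switching
        return False
    return apt_last > wait_last                    # APT vs waiting time of last queued job

def machine_to_turn_on(off_machines, any_waiting_job_tardy, apt_last, wait_last):
    """Called when a new job enters a stage's waiting queue."""
    if not off_machines or not any_waiting_job_tardy:
        return None
    if apt_last <= wait_last:                      # assumed reading of figure 2
        return None
    return max(off_machines, key=lambda m: m["speed"])  # fastest off machine first

print(should_turn_off(est_idle_time=3.0, any_waiting_job_tardy=False,
                      apt_last=5.0, wait_last=2.0))     # True: safe and cheaper to switch off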
Conclusions

This research aims at saving machine energy in JIT production by using a dynamic machine on/off method built on the previously proposed S2S feedback approach. The results show that the objective function values of all problem instances become better, thereby demonstrating the effectiveness of the proposed method.
Photocatalytic Reduction of CO2 with N-Doped TiO2-Based Photocatalysts Obtained in One-Pot Supercritical Synthesis

The objective of this work was to analyze the effect of carbon support on the activity and selectivity of N-doped TiO2 nanoparticles. Thus, N-doped TiO2 and two types of composites, N-doped TiO2/CNT and N-doped TiO2/rGO, were prepared by a new environmentally friendly one-pot method. CNT and rGO were used as supports, triethylamine and urea as N doping agents, and titanium(IV) tetraisopropoxide and ethanol as Ti precursor and hydrolysis agent, respectively. The as-prepared photocatalysts exhibited enhanced photocatalytic performance compared to the commercial TiO2 P25 catalyst during the photoreduction of CO2 with water vapor. This was attributed to the synergistic effect of N doping (reduction of the semiconductor band gap energy) and carbon support (enlarging the e−–h+ recombination time). The activity and selectivity of the catalysts varied depending on the investigated material: whereas N-doped TiO2 nanoparticles led to a gaseous mixture in which CH4 formed the majority compared to CO, the N-doped TiO2/CNT and N-doped TiO2/rGO composites almost exclusively generated CO. Regarding the activity of the catalysts, the highest production rates of CO (8 µmol/gTiO2/h) and CH4 (4 µmol/gTiO2/h) were achieved with the composite N1/TiO2/rGO and with N1/TiO2 nanoparticles, respectively, where the superscript represents the ratio in mg N/g TiO2. These rates are four times and almost forty times higher than the CO and CH4 production rates observed with commercial TiO2 P25.

Introduction

The use of photocatalytic technology to chemically reduce carbon dioxide (CO2) into hydrocarbons not only transforms this greenhouse gas into reusable fuel, but also helps alleviate global warming [1]. However, CO2 is an extremely stable compound, and photocatalytic CO2 reduction with solar light still remains a challenge, mainly because of low solar energy conversion efficiency, the backward reaction phenomenon, uncontrolled product selectivity and the rapid electron-hole recombination rate of the photocatalyst [2]. As in many other environmental and energy applications, titanium dioxide has been the photocatalyst most widely used for the conversion of CO2 to fuel, mainly due to its photoactivity, high stability, low cost, and safety [3,4]. However, its application is limited because of its relatively wide band gap (3-3.2 eV) and the rapid recombination rate of photo-induced electron-hole pairs [5]. To overcome these drawbacks, different strategies have been proposed, such as doping with transition metal cations [6], using enhanced geometries [7] or supporting the photocatalyst on carbon materials [8,9]. Non-metal doping is another approach suggested to improve TiO2 performance. Compared with metal doping, non-metal dopants lead to catalysts with higher photostability, less environmental contamination, and lower cost [10]. Particularly, doping with N narrows the TiO2 band gap and extends its light absorption into the visible region.

Synthesis of Catalysts

The synthesis of rGO was performed by the Hummers' method, as described in a previous work [9]. In the case of the N-doped TiO2 nanoparticles, the one-pot reaction was performed by adding the titanium precursor (TTIP, 1.39 g), the hydrolysis agent (ethanol, 8 mL) and 1-4 mL of TEA (0.8-2.9 g) or urea (0.3 g) as nitrogen precursor [20]. The reactions took place in a stainless-steel reactor (volume 100 mL) using scCO2 as solvent at a pressure of 200 bar and a temperature of 300 °C. This procedure was described in more detail in previous works [24].
Later, the solid obtained was dried at 105 °C for 24 h and calcined at 400 °C for 3 h. The catalysts were named NX/TiO2, X being the N content (mg N/g TiO2) in the synthesized N-doped TiO2 nanoparticles. All catalysts were obtained in triplicate. A similar procedure was used for synthesizing composites of N-doped TiO2 over CNT or rGO (NX/TiO2/CNT and NX/TiO2/rGO, respectively). The quantities of TTIP and ethanol added were the same as those used for the N-doped TiO2 nanoparticles, but 390 mg of either CNT or GO were also added, thus keeping the TiO2:carbon support mass ratio equal to 1 [8,9]. In this case, 1-4 mL (0.8-2.9 g) of TEA or 0.3 g of urea were employed. The conditions of the drying process of the composites were also the same. However, whereas the N-doped TiO2/CNT composites were calcined at 400 °C for 3 h [8], the conditions for the N-doped TiO2/rGO composites were 500 °C for 3 h in a nitrogen atmosphere [9].

Characterization of Catalysts

The synthesized photocatalysts were characterized with different analytical techniques. The N content was determined in an elemental analyzer (CHNS-932, LECO, Geleen, The Netherlands). A transmission electron microscope (TEM, 2100, Jeol, Croissy-sur-Seine, France) was used to obtain information about the morphology of the catalysts. An X-ray powder diffractometer (XRD, X'Pert MDP, Phillips, Amsterdam, The Netherlands) was used to determine the crystallinity, crystallite size and crystalline phases of the catalysts. The specific surface area of the powders was measured using a BET area analyzer (Nova Touch LX2, Quantachrome, Graz, Austria). The presence of certain functional groups was determined by Fourier Transform Infrared spectroscopy (FTIR) analysis with a Spectrum 100 FTIR spectroscope (Perkin-Elmer, Madrid, Spain). X-ray photoelectron spectroscopy (XPS) measurements were made in an XPS-AES spectrometer (AXIS UltraDLD, Kratos, Manchester, UK). A diffuse reflectance UV-vis spectrophotometer (DRS, V650, Jasco, Croissy-sur-Seine, France) was employed to obtain absorbance thresholds and band gap energies. Electrochemical impedance spectroscopy (EIS) experiments were performed using a PGSTAT302N potentiostat (AUTOLAB, Utrecht, The Netherlands); a 0.1 M KHCO3 solution was used as electrolyte, a calomel electrode as reference electrode and a Pt electrode as counter electrode, with a frequency range of 0.005-10,000 Hz.

Photocatalytic Reaction Tests

For all synthesized catalysts, the photocatalytic reduction of CO2 in the gas phase with water vapor was performed as described in previous works [8,9]. In short, the catalyst (50 mg) was immobilized in a filter and placed inside a stainless-steel reactor with a quartz window. Then, the reactor was filled with the mixture of water vapor and CO2 until the operating conditions were reached. Next, it was illuminated using a Xe arc lamp (450 W, Oriel, Irvine, CA, USA) with an Air Mass 1.5 Global filter to simulate sunlight. Once the experiment was finished, after 3 h of reaction, the reduction products were determined with a GC (490 Micro GC, Agilent, Santa Clara, CA, USA) connected to the reactor. Control experiments were performed for the evaluation of the photocatalysts. Results from these tests showed no appreciable amounts of reduced products in the absence of catalyst or light irradiation, illustrating that the process occurring in the reactor was photocatalytic in nature.
Additionally, no reduction products were detected when introducing He into the reactor instead of CO2, or when adding CO2 in the absence of water under light irradiation, indicating that the CO and CH4 originated from CO2 in the presence of water under light irradiation.

N Content of the N-Doped Catalysts

In the first place, when both supports (CNT and rGO) were doped with N using 1-4 mL of TEA, an N precursor first employed in supercritical doping by Lucky and Charpentier [20], the N loads reached a maximum of 2 mg N/g CNT and 10 mg N/g rGO, respectively (Table 1). When they were doped with 0.3 g of urea (the maximum amount allowed before it precipitates from the ethanol solution in the presence of TTIP), the resulting N concentrations were 2 mg N/g CNT and 42 mg N/g rGO, respectively. (Table 1 note: NX/CNT or NX/rGO are material names, where X is the N content in mg N/g carbon support; (a), (b) and (c) are employed to differentiate materials with the same N content.) These results illustrate that larger N loads were achieved on the rGO support, regardless of the N precursor used for doping. Regarding the N-doped photocatalysts (Table 2), when they were synthesized with TEA, it can be seen that the N content in the N-doped TiO2 nanoparticles, N-doped TiO2/CNT composites and N-doped TiO2/rGO composites was about 1 mg N/g TiO2. The N content expressed in mg N/g TiO2 in the composites was estimated according to the results in Table 1 for the N-doped supports and considering the results in Table 2 for the N-doped TiO2 nanoparticles. (Table 2 note: NX/TiO2, NX/TiO2/CNT and NX/TiO2/rGO are material names, where X is the N content in mg N/g TiO2; (a), (b) and (c) are employed to differentiate materials with the same N content.) On the other hand, it can be appreciated that the N content of the catalysts obtained with 0.3 g of urea was always larger than that of those produced from TEA, this effect being especially noticeable in the presence of rGO. These results demonstrate the well-known fact that the N content of N-doped TiO2-based catalysts largely depends on the molecular structure of the nitrogen source and the accessibility of the nitrogen atoms to react with the titania precursor and support [25]. In this sense, primary and secondary amines are likely to provide an N-richer catalyst material. For this reason, most works have used urea, since its primary amine structure is likely to introduce the highest amount of N into the catalyst. Triethylamine, as a tertiary amine, was expected to provide fewer nitrogen atoms to titania than urea. However, it should be noted that this does not necessarily imply that the photoactivity of TEA-doped catalysts will be lower [26,27], as will be shown below. In the following sections, the nature of the nitrogen present in the synthesized catalysts and the catalyst properties it affects will be discerned.

Photocatalytic Activities

Figure 1 shows the CO2 conversion rates obtained with the different synthesized catalysts during the photoreduction of CO2 with water vapor in the presence of simulated sunlight. The average value and standard deviation of 3 replications for each material are presented. They are expressed in terms of µmol of product per hour and gram of TiO2, which is the photoactive species. As was the case with the undoped and metal-doped TiO2 particles [6], TiO2/CNT [8] and TiO2/rGO composites [9] synthesized in supercritical medium, the only reduction products detected were CO and methane.
Moreover, all the prepared catalysts showed higher photocatalytic activity than the previous bare TiO2 nanoparticles synthesized in supercritical medium, indicating that the presented one-pot method for doping + synthesizing + supporting the photocatalyst is an efficient way to produce photocatalysts with higher activity than that exhibited by TiO2. Previously, similar experiments with all the synthesized N-doped supports were performed, and the CO and methane concentrations obtained were smaller than the detection limits in all cases. Specifically, the results obtained in this work can be differentiated into 3 groups. On the one hand, the N-doped TiO2 nanoparticles (N1/TiO2) doubled the total CO2 conversion rate of non-doped TiO2 nanoparticles obtained at supercritical conditions (3.2 µmol products/h/g TiO2) (Figure 1) and tripled that of P25 (2.1 µmol products/h/g TiO2) [8]. On the other hand, it can be appreciated that the results (CO2 conversion and selectivity towards CH4) were slightly lower for the catalysts with higher N content synthesized using urea as N source. Particularly, only the methane selectivity of the composite N2/TiO2/rGO (synthesized from urea) (22%) was slightly higher than that of the composite N1/TiO2/rGO, a composite obtained from TEA. In conclusion, it was shown that, whereas supporting on CNT did not improve the results of the TiO2 nanoparticles, supporting on rGO improved the CO2 conversion but not the CH4 selectivity, when similar N contents (in terms of mg N/g TiO2) were employed. Obviously, the results obtained with the CNT composites were unexpected and may be attributed to the lower crystallinity and crystal size of the N-doped TiO2/CNT composites when compared with the TiO2/CNT composites, as will be shown in the corresponding sections. Moreover, XPS analysis suggested that N incorporation took place mainly in the bulk, but not on the surface, in the N-doped TiO2/CNT composites, so that the N-containing active sites were less accessible. Both phenomena lead to lower charge transfer and, consequently, lower photocatalytic activity. In the following sections we will further explain these results considering the characteristics of the different catalysts. However, before this, they will be compared to those obtained in similar studies with N-doped catalysts synthesized with traditional methods. In this sense, it can be seen in Table 3 that the results from this work are far higher than those reported in the bibliography for N-doped TiO2 nanoparticles and N-doped TiO2/carbon support composites. (Table 3 notes: gas-closed circulation system, 780 mL, 0.1 g catalyst, 300 W Xe arc lamp [31]; Tr: traces; the gray background represents the same type of catalysts.)

The photocatalytic mechanism of N-doped TiO2 catalysts may be described as follows [14] (Figure 2a). The N 2p energy level, situated above the VB of TiO2, forms a narrower band gap than that of bare TiO2, which extends the absorption of N-doped TiO2 into the visible region. Under solar light irradiation, electron-hole pairs are generated by two different routes. Specifically, electrons from the N 2p level are excited to the TiO2 conduction band (CB) by visible light, while those from the TiO2 valence band (VB) may be excited to the CB of the semiconductor by UV irradiation. According to Wu et al. (2021) [10], the photocatalytic reduction of CO2 in the gas phase with N-doped TiO2 nanoparticles begins with the adsorption of CO2 molecules on the surface of the catalyst to form carbonate species. Then, electrons produced by the photocatalytic mechanism described above may reduce these adsorbed CO2 molecules to CO through the protonation of the ·COOH intermediate. Density functional theory (DFT) calculations found that enhanced surface polarization, due to N doping and oxygen vacancies, gives rise to significant charge accumulation on CO2 molecules, leading to the activation of CO2, which reduces the energy barrier to generate intermediate products and facilitates electron transfer at the interface [8]. In the case of CH4, the proposed mechanism implies the reaction of adsorbed HCO3− with an electron to form C· radicals, which can convert into CH4 after successive reactions with H· radicals, via a CH3· radical intermediate [2]. In the case of the N-doped TiO2/carbon support nanocomposites (Figure 2b), as some studies [13,31] have hypothesized, under simulated sunlight radiation electrons and holes may be generated and transferred across the interface between the support and the N-doped TiO2, so that charge recombination is possibly retarded effectively in the N/TiO2/CNT and N/TiO2/rGO composites. The holes formed in the VB of N-doped TiO2 may oxidize H2O molecules adsorbed on the surface of the particles to generate O2 and protons. The photogenerated electrons could be transferred from the CB of the N-doped TiO2 to the carbon support via a percolation mechanism [14], where they could reduce CO2 molecules to CO and methane [31]. The absence of CH4 for some supported catalysts (as in our N-doped TiO2/CNT and TiO2/rGO composites) suggests that the protons from water may fail to capture the photogenerated electrons to form H· radicals, because they fall into the electron-rich aromatic cycles of the support, where they can be stabilized, making it hard for them to participate in the production of CH4. For this reason, in the case of unsupported N-doped TiO2, due to the absence of the conjugated aromatic system, the H+ or H· radicals generated in the photocatalytic reaction may quickly be consumed by CO2 in the photocatalytic process, and CH4 and CO are simultaneously detected [31]. To sum up, N-doping reduces the energy necessary to reduce CO2 into CO and CH4 (i.e., it enhances the light absorption of the photocatalysts in the visible region), whereas the carbon support enlarges the time required for charge recombination. Both measures lead to higher photocatalytic activity of the synthesized catalysts. Regarding the selectivity towards CH4, it seems to be influenced by the availability of H· radicals coming from H2O oxidation.

Surface Morphology Analysis (TEM)

TEM was carried out to analyze the structure and morphology of the samples and the results are shown in Figure 3.
In the case of the N-doped TiO2 nanoparticles (Figure 3a), aggregates of polyhedral particles with crystallite sizes in the range of 11-14 nm and well-defined lattice fringes, suggesting a highly crystallized anatase structure, are observed [10]. The morphology is similar to that of undoped TiO2 [24], except that the crystallite sizes are larger [2]. Lucky and Charpentier (2010) observed that alkylamines used as N dopants in supercritical synthesis can form amine complexes with metal alkoxides, thus favoring the aggregation of the metal oxide particles [20]. All these findings will be corroborated by the results obtained in the next sections. Regarding the supports (Figure 3b,c), neither N-doped CNT nor N-doped rGO shows any difference from the undoped supports [8,9]. When these supports are mixed with the titania precursor, hydrolysis agent and N precursor in supercritical media, it is evident that N-doped TiO2 nanoparticles are successfully deposited on both CNT and rGO (Figure 3d,e). Figure 3d shows that the TiO2 nanoparticles are uniformly dispersed over the CNT, as was found in previous works on N-doped TiO2/CNT composites obtained with traditional synthesis [32] and on undoped TiO2/CNT composites synthesized with supercritical fluids [8]. The crystallite size was about 10 nm, showing a narrower distribution than the unsupported N-doped TiO2 nanoparticles (14 nm). In this sense, some works [33] explain that nitrogen-containing groups in the carbon support may serve as favorable nucleation and anchor sites for TiO2 nanocrystals. The smaller size of the TiO2 nanoparticles in the composites might be due to stronger coupling between TiO2 and the N-doped sites on the support [33]. When the TiO2/rGO composites are analyzed (Figure 3e), there is no sign of agglomeration of the TiO2 nanoparticles, which are well distributed over the rGO support. The crystallite size of these particles is about 13-14 nm, like those of the undoped TiO2/rGO composites synthesized with supercritical fluids (13 nm) [9]. Daraee et al. (2020) reached similar results when performing traditional synthesis [34]. If the influence of TiO2 crystallite size on photocatalytic activity is analyzed, it can be observed that photocatalytic activity may be directly related to crystallite size, since the N-doped TiO2 nanoparticles and N-doped TiO2/rGO composites lead to higher CO2 reduction rates than the N-doped TiO2/CNT composites. This phenomenon can be derived from a higher charge transfer, as will be shown in the corresponding section [9]. Before concluding this section, we should note that the photocatalysts N1/TiO2, N1/TiO2/CNT and N1/TiO2/rGO presented in this and the following sections were synthesized using 1 mL of TEA. This was done because, according to the results presented in Table 2, the N load in the N-doped TiO2 nanoparticles and in the N-doped TiO2 supported on CNT and rGO was always the same, regardless of the amount of TEA (N precursor) used in the synthesis.

Crystalline Structure Analysis (XRD)

The crystal structure and phase identification of the TiO2 in the synthesized catalysts were investigated using the XRD technique. The XRD diffractograms are displayed in Figure 4. In all samples, no matter whether they were bare TiO2 [24] or supported on CNT [8] or rGO [9], the patterns were well matched with anatase-phase TiO2, indicating that the crystalline structure of the synthesized TiO2 was not affected by doping [2] or supporting during the one-pot synthesis process [34,35].
Changes were observed in the peak shape and intensity in the XRD patterns of the N-doped TiO2 particles with respect to those of undoped TiO2 [36]. The increase in the peak intensity of the N-doped TiO2 catalysts, compared to that of undoped TiO2, indicates that N doping could enhance the crystallinity of the TiO2 particles [2]. The crystallite sizes of the modified TiO2 catalysts were estimated from the XRD patterns using the Scherrer equation and are listed in Table 4 (a worked sketch of this estimate follows below). All synthesized catalysts possess smaller crystallite sizes than the reference P25 (20 nm) [37]. In our case, the crystallite sizes of the N-doped TiO2 nanoparticles increased from 11 to 14 nm with increasing amounts of nitrogen from 0 to 2 mg N/g TiO2. This trend was observed in N-doped TiO2 catalysts obtained with traditional methods [37]. The values agree with those reported in a former study dealing with the synthesis of N-doped TiO2 catalysts in supercritical fluids [20]. It can be concluded that the presence of nitrogen doping influences the crystallite size of the TiO2 grown during the doping process [30], as was observed in the TEM images. No drastic shift or presence of new peaks was observed, indicating that N doping did not lead to the formation of any secondary or impurity phases in the host TiO2, but rather to occupancy of oxygen sites or inclusion in the TiO2 lattice [5], which will be discerned later.
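For reference, the Scherrer estimate mentioned above is D = Kλ / (β cos θ), with shape factor K ≈ 0.9, X-ray wavelength λ, peak full width at half maximum β (in radians) and Bragg angle θ. A minimal worked sketch in Python follows; the FWHM value in the example is an illustrative assumption, not a measured one.

import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), Cu K-alpha by default."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle is half of 2-theta
    beta = math.radians(fwhm_deg)               # FWHM converted to radians
    return K * wavelength_nm / (beta * math.cos(theta))

# Anatase (101) reflection near 2-theta = 25.3 deg with an assumed 0.6 deg FWHM:
print(f"{scherrer_size(25.3, 0.6):.1f} nm")     # ~13.6 nm, in the 11-14 nm range above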
Regarding the composites with CNT, the peaks of the N-doped composites were wider than those of the undoped CNT-supported composite [8], which shows that the degree of crystallization of TiO2 is slightly weakened by ion implantation [38], contrary to what happened with the unsupported catalysts. In this sense, the crystallite size decreased from 16 to 10 nm when the undoped TiO2/CNT composites were doped with 2 mg N/g TiO2. This finding could be related to the stronger coupling between TiO2 and the N-doped sites on the support, as was explained in the previous section [33]. No characteristic peaks of CNTs were found in the composites, which may be the result of the overlap between the intense peaks of CNTs and anatase at 25.9° and 25.2°, respectively [35]. This could also be attributed to the homogeneous coverage of TiO2 on the CNTs [8]. In the case of the composites with rGO, it can be observed that the peaks of the N-doped TiO2/rGO composites were slightly narrower than those of the undoped TiO2/rGO composites [9]. This is probably due to an alteration of the size of the base TiO2 crystallites [34]. As a result, a small increase in crystallite size, up to 14 nm, was observed upon doping with 1 mg N/g TiO2. When the N content increased (as in N2/TiO2/rGO), the crystallite size decreased. Just as happened with the CNTs, the main characteristic peak of graphene at about 25° is shadowed by the main peak of anatase TiO2, surely due to the homogeneous dispersion of TiO2 on the rGO [39]. To sum up, the measured crystallite sizes agreed with the results observed by TEM and supported the trends in photocatalytic activity presented in Section 3.2.

Surface Area Analysis (BET)

Specific surface area is another critical parameter in determining the photocatalytic activity of TiO2. If a catalyst exhibits a large surface area, many molecules can adsorb on its surface and reactions are promoted [37]. However, a large surface area is generally related to more crystalline defects, and an excess of defects can assist the recombination of charge carriers and induce poor photocatalytic activity. Thus, an adequate surface area is a prerequisite, but not a deciding factor, for higher activity [40]. We observed that all isotherms of the bare and N-doped TiO2 nanoparticles (depicted in Figure S1 in the Supplementary Materials) displayed the typical structure of type IV isotherms with well-defined H1 hysteresis loops, indicating capillary condensation within uniform mesoporous structures and confirming that the mesoporous structures were well retained in the TiO2 nanoparticles during the simultaneous processes of synthesis and nitrogen doping [2]. Table 4 presents the BET areas of the synthesized materials. The values for the N-doped TiO2 nanoparticles agree with those reported by Lucky and Charpentier (2010) when this type of catalyst was obtained in supercritical medium [20]. As shown, in the presence of N a decrease in the specific surface area was observed [2]. This could be attributed to the larger crystallite sizes of the N-doped TiO2 catalysts described in the previous section, caused by N present in the form of interstitial N (Ti-O-N or Ti-N-O) or substitutional N (Ti-N), since the N3− ion has a larger ionic radius (0.171 nm) than the O2− ion (0.140 nm) [2]. The presence and abundance of these N species will be treated more deeply in Section 3.7. The supported catalysts exhibit similar or larger specific surface areas than the TiO2 nanoparticles due to the presence of the carbon supports. The values coincide with those of composites obtained by both traditional [34,41] and high-pressure methods [8,9].
Moreover, there was a reduction in the support surface area in the TiO2/support composites, which suggests a partial blockage of the inner surface of the CNTs [8] and partial coverage of the rGO surface [9]. Finally, it can be verified that the opposite trends of crystallite size and BET area held throughout our experimental results, both for the unsupported and the supported TiO2 catalysts (Table 4).

Surface Functional Groups Analysis (FTIR)

The FTIR spectra of the synthesized materials are shown in Figure 5. In the case of the N-doped TiO2 nanoparticles, the broad band in the region 3600-3200 cm−1 can be ascribed to the stretching vibration of the surface-bonded Ti-OH groups, which may act as a proton source to decrease the CO2 activation energy during the reduction process [10]. Moreover, this band broadens and shifts to a lower wavenumber in the N-doped TiO2 nanoparticles in contrast to undoped TiO2, due to the incorporation of N atoms and N-containing groups into the TiO2 [42]. The weak bands at about 2900 cm−1 can be related to the stretching-vibration mode of C-H bonds that could derive from the residues produced during the calcination of the precursors involved [5]. The small peak at around 2340 cm−1 can be associated with the bending vibration modes of the H-H bond, and the peak around 1630 cm−1 with the bending vibration of the O-H of physisorbed water molecules [5]. The band around 750 cm−1 is assigned to the characteristic stretching-vibration mode of the Ti-O-Ti bonds of anatase TiO2. This peak is sharper and shifts to a higher wavenumber in N-doped TiO2 because of the O-Ti-N and N-Ti-N linkages [8,40]. The tiny peak around 1375 cm−1 could correspond to trace N atoms (N-H linkage) substituted into the lattice of TiO2, or be due to the presence of molecular residues from triethylamine [5,42]. Regarding N-doped CNT and TiO2/CNT, the presence of OH groups and water on the surface of the catalysts was confirmed by the appearance of a broad band at about 3400 cm−1 [13]. As explained before, the presence of hydroxyl groups on the composite surfaces plays an important role in photocatalytic activity. The band due to the stretching and bending modes of Ti-O and O-Ti-O appears as a broad band at about 600 cm−1 in the spectra of the composites [13]. Some characteristic peaks of CNT are observed in the composites due to the large percentage of CNT in the nanocomposites, such as the peaks in the region 2980-2880 and at 1000 cm−1 (C-C bonds), and at about 1600 cm−1 (carbonyl C=O bonds). The weakening of the intensity of these peaks in the nanocomposites is due to the breaking down of the CNT walls into graphitic fragments and the attachment of these graphitic fragments onto, and into, the TiO2 nanocrystals [42]. This confirms the incorporation of CNT into the nanocomposites. The incorporation of N into the carbon material was also demonstrated by the small C-N peak at 1325 cm−1 [42].
In addition to the striking peak at 3400 cm−1 related to OH groups and mentioned in the two previous cases, the N-doped rGO and TiO2/rGO composites exhibited signals at 2800, 1625 and 1500 cm−1 associated with the stretching of C-OH, the presence of C=O and the deformation of C-O groups of rGO [43]. The 2800 cm−1 signal could overlap with the C-C signals at 2852 and 2919 cm−1 [44]. These surface oxygen-containing functional groups offer the possibility of covalent linkage of TiO2 onto the rGO surface [45]. The main difference between the N-doped TiO2/rGO composites and rGO is the band at 650 cm−1 related to the Ti-O-Ti bonds. The broadening of this peak may suggest the presence of a peak due to a Ti-O-C bond, which confirms that the TiO2 nanoparticles could be strongly bonded to the graphene sheets [34]. In addition, the peaks at about 1360 and 1550 cm−1 showed the possible presence of a C-N bond from pyrrolic nitrogen (interstitial) and of C=N from pyridinic nitrogen (substitutional), respectively [46], although they can also be related to C-O and C-C bond bands, respectively [47]. The N2/TiO2/rGO catalyst does not show any difference from the composite with lower N content.

Surface Chemical Analysis (XPS)

The chemical state and surface composition of the N-doped catalysts were investigated using the XPS technique. The full scan spectra are displayed in Figure 6 and show the existence of N, O, Ti and C in the samples. In the case of the N-doped TiO2 nanoparticles, the C 1s peak can be ascribed to remnant organic precursors not completely removed during the calcination [37]. The narrow scan Ti 2p spectra of N-doped TiO2, TiO2/CNT and TiO2/rGO (Figure 7) identified two characteristic Ti peaks located at 458.5 and 464.2 eV. They correspond to the typical binding energies of Ti4+ (Ti 2p3/2 and Ti 2p1/2 of TiO2) [48]. However, these peaks were 0.65 eV lower than those of bare TiO2, which may be an indication of successful N doping. The N element is less electronegative than the O element: when N atoms are present in the TiO2 lattice, a part of the Ti4+ is reduced to Ti3+, which may lead to a decrease in the binding energy of Ti 2p [38]. The absence of other non-Ti4+ species or deconvoluted Ti peaks could also be due to the resolution of the XPS, which was unable to detect minor changes in the TiO2, or to the Ti3+ species existing in the subsurface or bulk, which is inaccessible by XPS [37]. In any case, according to our results it seems that the replacement of oxygen atoms with nitrogen atoms in the TiO2 structure of both the TiO2 nanoparticles and the nanocomposites may not have occurred [5]. To sum up, Figure 7 shows that TiO2 is present in all catalysts, and this TiO2 may be interstitially doped with N.
The deconvoluted O 1s spectra of the N-doped TiO2 nanoparticles, TiO2/CNT and TiO2/rGO composites (the latter as an example, Figure 8a) showed the peak at 529 eV representing the stoichiometric oxygen network in TiO2 with respect to Ti (Ti-O) as well as the doped N (Ti-O-N) [5]. The peaks at 531 eV and 535 eV corresponded to surface-adsorbed oxygen and water, respectively [49,50]. When the O 1s spectra of the supports and composites were compared (N10/rGO as an example, Figure 8b), new peaks at 531 and 533 eV appeared, related to C=O (carbonyl, carboxyl) and C-O (epoxy, hydroxyl) groups, respectively [50]. All these results imply that the TiO2 may be doped with N, and that the functional groups present in the supports are likely to have bound the TiO2 particles. In the case of the N-doped CNT and TiO2/CNT composites (N1/TiO2/CNT shown as an example in Figure 9a), the C 1s spectra are deconvoluted into two peaks at 283.6-284.5 and 290.1-290.9 eV [17]. The first one is larger and due to the graphitic carbon in the CNT, whereas the second one is related to C=O/C-N bonds [50]. The C 1s spectra of N-doped rGO (Figure 9b) and the TiO2/rGO composites (Figure 9c) can be deconvoluted into 2 and 4 peaks, respectively.
In the case of N-doped CNT and TiO2/CNT composites (N1/TiO2/CNT shown as example in Figure 9a), the C 1s spectra are deconvoluted into two peaks at 283.6-284.5 and 290.1-290.9 eV [17]. The first one is larger and due to graphitic carbon in CNT, whereas the second one is related to C=O/C-N bonds [50]. The C 1s spectra of N-doped rGO (Figure 9b) and TiO2/rGO composites (Figure 9c) can be deconvoluted into 2 and 4 peaks, respectively. The peaks at 283.8-284.5 eV and 290.9 eV indicate the presence of graphene C-(C,H) and O-C-O bonds in rGO, respectively [43]. The relatively weak signal of the C-O groups indicates that most of the GO oxygen is reduced during the synthesis of the catalysts in supercritical medium [51]. The first peak also appears in the N-doped TiO2/rGO composite, proving that the structure of graphene remains after the synthesis of the composite catalyst [39]. Moreover, the two small peaks at 286.3 and 288.7 eV in the TiO2/rGO composite can be associated with C-O and C=O bonds in the support, respectively.

In N-doped TiO2 nanoparticles, CNT, rGO, TiO2/CNT and TiO2/rGO composites (N1/TiO2/CNT as example, Figure 10a), the only N 1s peak, at 399.4-399.8 eV, confirms the existence of interstitial N in TiO2 (Ti-O-N) and the absence of substitutional N (N-Ti-N) [5]. According to Wang et al. (2009), at relatively low calcination temperatures (<600 °C), N atoms tend to sit in the interstitial sites, above all if the N atomic percentage is below 1.2 [52]. At a relatively high calcination temperature (600 °C), some of the N atoms are incorporated into the TiO2 lattice substitutionally, in addition to the presence of interstitial N atoms. In the case of N-doped rGO (Figure 10b), two additional peaks, at 398.2 and 404.2 eV, are present. The first one can be related to pyridinic N, whereas the second one to C-N-O, indicating the successful doping of N atoms into the graphene framework [51].

From XPS analysis, the atomic percentage of N in the TiO2 crystal lattice is about 0.22 in the N-doped TiO2 nanoparticles and the N-doped TiO2/rGO composite, but 0.13 in the N-doped TiO2/CNT composite. There seems to be some correlation with the support used, since N-doped CNT and rGO exhibit 0.06 and 1.06 atomic percent of N, respectively. If these figures are compared with those obtained in Section 3.1 with elemental analysis, it seems that N incorporation takes place mainly on the surface in the case of the TiO2 nanoparticles and TiO2/rGO composites, but in the bulk in the case of bare CNT and TiO2/CNT nanocomposites [27]. This could also have contributed to the lower photocatalytic yield of the N-doped TiO2/CNT composites, where the N-containing active sites are less accessible.
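For readers unfamiliar with how such atomic percentages are derived, the sketch below applies the standard XPS quantification formula (integrated peak area divided by a relative sensitivity factor, normalized over all detected elements); the peak areas and sensitivity factors are hypothetical placeholders, not our measured values.

```python
# Minimal sketch of XPS surface quantification:
#   at% of element i = (A_i / S_i) / sum_j (A_j / S_j) * 100
# where A = integrated peak area and S = relative sensitivity factor.
# The areas and RSFs below are hypothetical placeholders.
peaks = {
    # line: (peak area in counts*eV, relative sensitivity factor)
    "Ti 2p": (52000.0, 2.00),
    "O 1s":  (98000.0, 0.78),
    "C 1s":  (12000.0, 0.30),
    "N 1s":  (350.0,   0.48),
}

normalized = {line: area / rsf for line, (area, rsf) in peaks.items()}
total = sum(normalized.values())
for line, n in normalized.items():
    print(f"{line}: {100 * n / total:.2f} at%")
```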
As a summary, the XPS analyses allow us to state that the one-pot supercritical process achieved interstitial N-doping within both the TiO2 lattice structure and the carbon support framework, almost complete reduction of GO into rGO, and the preservation of the graphitic structure of the supports.

Optical Properties Analysis (DRS)

The optical UV-vis light absorption characteristics of the synthesized catalysts were investigated using diffuse reflectance UV-visible absorption spectroscopy. Some of the obtained spectra are displayed in Figure 11. The band gap energy and absorption threshold of the synthesized samples and supports were estimated as in previous works [6] and are given in Table 4.

On the one hand, it can be seen in the absorption spectra that the absorption threshold of undoped TiO2 nanoparticles at 400 nm was shifted to 405 nm in the case of TiO2 nanoparticles with 1 mg N/g TiO2. This indicates that N-doping slightly expanded the optical absorption of the TiO2 nanoparticles towards the visible light region. Accordingly, the band gap energies of undoped and N-doped TiO2 nanoparticles were 3.10 and 3.06 eV, respectively. This enhancement in optical properties could result from the formation of energy levels near and above the valence band (VB) of TiO2 due to the doped N atoms [5]. This slight decrease in band gap energy agrees with works on the traditional synthesis of N-doped TiO2 [2]. As explained before, in the doping procedure N can create space for itself in the bulk or on the surface.
If the crystallization of titania occurs while the dopant source is added, the N incorporates into the crystal lattice [53]. The dopant species can be incorporated into the crystal lattice occupying either a substitutional (Ti-N) or an interstitial site (Ti-O-N), which leads to the formation of a new band between the CB and VB of titania and results in a reduction of the band gap energy [53]. Substitutional doping involves oxygen replacement, whereas interstitial doping involves the addition of nitrogen into the TiO2 lattice. Substitutional N introduces localized nitrogen states up to 0.14 eV above the VB, and interstitial N forms π-character states up to 0.74 eV above the VB. The excitation from the occupied high-energy levels to the CB is more favorable with interstitial N-doped TiO2, which exhibits higher visible light activity [53]. However, the absorbance of a photocatalyst cannot be directly correlated to its photoactivity, so an improvement in photocatalytic activity may not necessarily be observed as a result of band gap reduction [40]. Regarding the influence of calcination temperature on light absorption, Sathish et al. (2005) found that the light absorption of N-TiO2 particles in the visible region decreased very significantly as the calcination temperature increased above 400 °C, due to a decrease in the amount of N doped into TiO2 with calcination temperature [54].

On the other hand, it is not surprising that the light absorption spectra of composites and supports (CNT or rGO) are similar, since the composite surface is not fully covered with TiO2. The particular shape of the absorbance curves for carbon supports and composites has also been observed in other works dealing with the traditional synthesis of N-doped TiO2/carbon support composites [39]. Precisely, this very special form prevents us from calculating the band gap energies of the composites with the same graphical method described in reference [6], as happened in previous works [8,9]. For this reason, it is necessary to apply Tauc's graphical procedure (Figure S2) [15]. With it, band gap energies of about 2.10 eV and 2.40 eV were obtained for N/TiO2/CNT and N/TiO2/rGO, respectively, proving that the composites have higher visible light absorbance after N-doping and loading of N-TiO2 [55]. In the case of the TiO2/rGO catalyst with higher N content (N2/TiO2/rGO), the absorbance was slightly higher than that of N1/TiO2/rGO, with a band gap energy 0.05 eV smaller. These findings may be due to the fact that doping N into the TiO2 lattice narrows its band gap, whilst the carbon support decoration could also improve photo-absorption in the visible light region and reduce the reflection of light [55]. This shift in the absorption threshold towards the visible light range is consistent with the color change observed in the powders, from white (undoped TiO2) to light grey (N-doped TiO2) and dark grey/black (N-doped TiO2/CNT and N-doped TiO2/rGO) [56].
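As an illustration of Tauc's procedure, the sketch below estimates a band gap by linearly extrapolating the rising edge of a Tauc plot, (F(R)hν)^(1/2) versus hν for an indirect allowed transition; the reflectance data and the window chosen for the linear fit are hypothetical and would in practice be selected from the measured spectrum.

```python
# Minimal sketch of Tauc's graphical band gap estimation for an
# indirect allowed transition: plot (F(R)*h*nu)^(1/2) vs h*nu and
# extrapolate the linear rise to the photon-energy axis.
# The "reflectance" spectrum below is a synthetic placeholder.
import numpy as np

wavelength_nm = np.linspace(300, 800, 501)
hv = 1239.84 / wavelength_nm                      # photon energy, eV
R = 0.05 + 0.9 / (1 + np.exp(-(wavelength_nm - 420) / 15))  # fake spectrum

F = (1 - R) ** 2 / (2 * R)                        # Kubelka-Munk function
tauc = np.sqrt(F * hv)                            # (F(R)*hv)^(1/2)

# Fit a straight line over the steep (assumed linear) edge region.
mask = (hv > 3.0) & (hv < 3.4)
slope, intercept = np.polyfit(hv[mask], tauc[mask], 1)
Eg = -intercept / slope                           # x-axis intercept
print(f"estimated band gap: {Eg:.2f} eV")
```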
Electrical Properties Analysis

Finally, electrochemical impedance spectroscopy (EIS) was employed to evaluate the photo-excited charge-transfer properties of the photocatalysts. Nyquist plots (Z″ vs. Z′) of the different photocatalysts are depicted in Figure 12.

Undoped TiO2 and N-doped TiO2 nanoparticles (N1/TiO2 as example) show a similar semicircular shape (Figure 12a). As the arc radii are alike, this implies a similar resistance to charge transfer and a similar charge separation efficiency for both photocatalysts. The results are coherent with those corresponding to TiO2 nanoparticles synthesized with traditional methods [57], and it is expected that the arc radius would be reduced if far more N could be introduced into the photocatalyst [10]. In the case of the TiO2/CNT and TiO2/rGO composites, all four catalysts show the typical characteristics of one semicircle in the middle-high frequency range and a sloping straight line at low frequency (Figure 12b). The arc radii of the EIS Nyquist plots of the composites are far smaller than those of the TiO2 nanoparticles [33,51], indicating that the interface layer resistance and the charge transfer resistance on the surface are diminished. This reveals that charge migration is facilitated by the interfacial interaction between TiO2 and the carbon material (CNT or rGO) occurring in the TiO2-C heterojunction [58]. Regarding N-doping, the EIS Nyquist plots show that the arc radius for N1/TiO2/rGO is noticeably smaller than that of the undoped TiO2/rGO composite (TiO2/rGO weight ratio equal to unity). This is due to the presence of N in both the TiO2 nanoparticles [10] and the carbon support [59]. On the contrary, the N1/TiO2/CNT nanocomposite exhibits a larger arc radius (higher resistance to charge transfer and lower charge separation efficiency) than the undoped TiO2/CNT composite. This behavior has already been explained in previous sections in terms of a smaller crystallite size, and may be due to the presence of N in the bulk but not on the surface of the N-doped TiO2/CNT composite.
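To illustrate how an arc radius translates into a charge-transfer resistance, the following sketch fits the simplest Randles-type model (a solution resistance in series with a parallel R_ct-capacitor element) to synthetic impedance data; the parameter values and frequency range are hypothetical and are not drawn from our measurements.

```python
# Minimal sketch: extracting charge-transfer resistance (R_ct) from a
# Nyquist semicircle using a simple Randles-type model,
#   Z(w) = R_s + R_ct / (1 + j*w*R_ct*C).
# All parameter values below are hypothetical placeholders.
import numpy as np
from scipy.optimize import least_squares

def z_model(params, w):
    r_s, r_ct, c = params
    return r_s + r_ct / (1 + 1j * w * r_ct * c)

# Synthetic "measured" data standing in for Figure 12-type spectra.
w = 2 * np.pi * np.logspace(-1, 5, 60)            # angular frequency
z_meas = z_model((20.0, 450.0, 2e-6), w)

def residuals(params):
    dz = z_model(params, w) - z_meas
    return np.concatenate([dz.real, dz.imag])

fit = least_squares(residuals, x0=[10.0, 100.0, 1e-6])
r_s, r_ct, c = fit.x
print(f"R_s = {r_s:.1f} ohm, R_ct = {r_ct:.1f} ohm (semicircle diameter)")
```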
Summary of Properties

In this section, the different properties exhibited by the N-doped catalysts synthesized in supercritical medium are summarized and compared with those of catalysts synthesized by traditional methods, as well as with those obtained with supercritical fluids but not doped with N. Generally speaking, the N-doped TiO2 nanoparticles obtained with supercritical fluids in this work exhibited improved photocatalytic activity, in terms of both total conversion and methane selectivity, compared with those obtained with traditional methods [29]. This enhanced behavior seems to derive from a lower degree of aggregation [2], a larger crystallite size [2] and a slightly higher visible light absorption [2]. Regarding the N-doped TiO2/carbon support composites synthesized in supercritical medium, only the N-doped TiO2/rGO composites have shown higher photocatalytic activity (but not methane selectivity) than similar composites synthesized with traditional methods [30]. In this case, the main reason is undoubtedly the extraordinarily good ability of the composites obtained in this work to absorb visible light compared with conventional N-doped TiO2/rGO materials [39]. The poorer photocatalytic activity of the N-doped TiO2/CNT composites seems to derive from the smaller crystallite size [32] and BET area [34] of the materials obtained in this work, in contrast to those synthesized with traditional methods.

Compared with the photocatalysts obtained in supercritical medium by our group in previous studies, the main advantage of the materials described in this work is that N-doping allowed the photocatalytic activity shown by metal-doped catalysts to be maintained, with the consequent saving of more expensive raw materials (Cu, Pd, Pt). Even in the case of the N-doped TiO2 nanoparticles, methane selectivity was doubled in contrast to undoped [36] and metal-doped nanoparticles [60], probably due to the larger crystallite size of the former. Something similar was observed for the N-doped TiO2/rGO composites, although the methane selectivity of metal-doped TiO2/rGO composites was not improved [9]. The N-doped TiO2/rGO composites exhibited a higher BET area and lower band gap energy, but a smaller crystallite size, than the metal-doped TiO2/rGO composites [9]. Finally, the N-doped TiO2/CNT composites showed lower photocatalytic activity than undoped and metal-doped TiO2/CNT composites [8]. In this case, the small crystallite size seems to hinder their excellent properties related to visible light absorption [8]. Moreover, XPS analysis suggested that N incorporation took place mainly in the bulk, but not on the surface, in the N-doped TiO2/CNT composites, making the N-containing active sites less accessible [27].

Conclusions

N-doped TiO2 nanoparticles and N-doped TiO2/CNT and TiO2/rGO nanocomposites were synthesized by a facile one-pot method in supercritical medium. The presence of N in both TiO2 and the carbon supports endowed them with good visible light sensitization and high separation efficiency of the charges photogenerated after irradiation with solar light. The photocatalysts exhibited good photocatalytic performance in the photoreduction of CO2 in the presence of water vapor, and the highest conversion rate, 8 µmol/g TiO2/h, was achieved with the N1/TiO2/rGO composite. The photocatalytic products depended on the catalyst type.
CO and CH4 were formed on the N-doped TiO2 nanoparticles (CH4/CO ratio 2.5), while almost only CO was produced on both composites (N-doped TiO2 on CNT or rGO), as a result of a lack of H· radicals coming from H2O oxidation. The specific N content of the catalysts could be regulated by varying the N precursor, with urea leading to higher N levels in the catalysts than TEA. Nevertheless, similar properties, and even lower photocatalytic activity, were exhibited by the composites with the higher N percentage (N2/TiO2/rGO and N2/TiO2/CNT). To sum up, in the present work the effect of the carbon support (CNT and rGO) on the activity and selectivity of the N-doped TiO2 nanoparticles in the CO2 photocatalytic reduction reaction was evaluated. Specifically, it was found that the maximum CO2 conversion was achieved with the rGO support (N1/TiO2/rGO (b)); it almost doubled that obtained when using CNT (N1/TiO2/CNT (b)). However, no differences in selectivity were observed between the two carbon supports. The results regarding the N-doped nanoparticles and N-TiO2/rGO nanocomposites are of special interest, especially in terms of methane selectivity and total conversion, respectively. Nevertheless, as a promising avenue for future research, we may suggest the modification of the N-doped TiO2-based photocatalysts investigated in this work with an additional metallic dopant. The interesting results reported in studies of charge generation and transfer conducted with metal-doped TiO2 support this hypothesis.
Geography in Denmark

Sofus Christiansen

Development of geographical thought and knowledge

Geography as a body of knowledge of places, peoples and nature has a long tradition in Denmark. More than 1000 years ago, Danish Vikings knew places along the Atlantic sea border of Europe, most of the Mediterranean, places as distant as Constantinople, and they even knew how to get there by alternative routes via the Russian rivers. But it was only from 1636 that geography became an academic discipline at the University of Copenhagen. King C[…] them with a vocabulary for their descriptions, a set of institutions in which to store and analyse their maps and findings, and also a means for gaining academic merit.

Soon the demand for explanations of geographical phenomena increased. Answers were sought in French geography, especially in the works of Vidal de la Blache and his followers. The central focus on how the challenges of the environment were modelling human society inspired the outstanding Danish geographer H.P. Steensby in his work on the origins of the Eskimo culture (1905). Published in the last days of the great explorations, his work spurred a massive amount of activity, but it took about one hundred years of Arctic research to prove that his hypotheses were basically right. The excitement of heroic expeditions meant that geography acquired a place in common awareness as never before. Geography played a great part in the exploration of Greenland, supporting and sometimes implementing expeditions. Central Asia and Northern Africa were also explored by Danish expeditions.

[Photo caption: Martin Vahl, professor 1921-39, contributed to geography with a plant-ecology based climatic system and to plant ecology.]

WWI obviously meant a brutal interruption of Danish overseas research, but during the following decades Steensby's two successors, Martin Vahl and Gudmund Hatt, continued and expanded the research. Vahl developed the ideas of climatic control of the vegetation into a world-wide system, whereas Hatt elaborated the Eskimo theme further, at the same time generating a systematic overview of the stages of evolution of material culture. Furthermore, the two professors produced a valuable handbook, "Jorden og Menneskelivet" I-IV (1922), in which a quintessence of contemporary geographical knowledge was concentrated. Until WWII, Danish geography followed the trend in "continental geography" and maintained a strong position, reinforced by the impact of Niels Nielsen's work in the Danish Wadden Sea, which set new standards for research in physical geography, inspired a whole generation of geographers and demonstrated that geography could be put to practical uses, especially regarding land reclamation. Progress in university geography resulted in a general improvement in the teaching of the discipline at all levels, mainly based on a series of excellent textbooks.

[Photo captions: Niels Nielsen, professor 1939-64, initiated field surveys in the Danish Wadden Sea and founded the Skalling laboratory among a wealth of other activities. Axel Schou, professor 1953-72, an inspiring teacher and author of "The Marine Foreland".]

Post-WWII, Danish geography followed the line of Niels Nielsen, successfully emphasizing physical geography, but with an added, strong influence from abroad. Whereas Danish human geography had been closely linked to ethnology, human geography in Sweden, England and the USA had focused strongly on the problems of contemporary, western society.
First Hägerstrand from Lund, later the British geographers Haggett and Chorley, greatly influenced Danish human geography: urban themes and planning featured strongly on the agenda and new, quantitative methods were applied. New employment opportunities developed from this, and public attention to geography grew. From the late 1960s, Denmark involved itself in international aid to developing countries, and geographers proved themselves useful as advisers and administrators of the many types of activities that resulted from the new initiatives. At the same time, European youth became politically activated, often trying to obtain a role in solving practical problems in society. Political engagement was usually on the leftist front, resulting in a "Students' Revolt" in the universities, with geography students demanding that quantitative geography and Marxist theories be added to the curricula. The effects were evident: the next decade in geography was characterised by the publication of several critical articles, many of them on developing countries and on theories of the causes of their poverty. Works of a descriptive-analytical type also appeared, often applying new techniques, especially the use of computers, which facilitated much more than just the numerical side of the work. The positive effect of this was that the applicability of geography increased, but on the other hand a slight opposition to geographers also developed. This was only partly balanced by geographers' successful and steadily increasing use of satellite technology, which became an important tool for planning and monitoring, especially in developing countries and in relation to environmental problems. The new ways of acquiring data, together with systems thinking, also provided possibilities for establishing not only models for describing and understanding functions, but also reliable prognoses.

Presently, Danish geography, having experienced a period of substantial expansion and specialisation, may have to reconsolidate somewhat. A strong urge to specialize has increased the distance between specialists, the use of synthesizing models being one of the few means of arriving at general overviews or the "man-environment" connection that geography previously strived to establish. Development has urged research to concentrate on special topics such as coastal and arctic morphology, pedology and their interaction with climate, hydrology and general environmental conditions for plant productivity and resource management, as well as urban and regional development and its consequences for both developing and developed countries. Projects along the lines indicated have given geography a public image connected to landscape morphology and dynamics, tasks in developing countries and urban-regional planning. The main connection is still to the natural sciences at the University of Copenhagen. At the University of Aarhus, physical geography is now a part of the Institute of Geology, and at the Roskilde University Centre (RUC) it is included in the Institute for Geography and International Development Studies. At the Aalborg University Centre (AUC) the teaching of geography has recently been initiated.

Previously, nearly all geographers were employed in teaching; this is no longer the case.
Many jobs have been created within the sector of international aid or in environmental management, aside from the fact that many geographers have acquired jobs requiring less specific professional qualifications, for example in administration. In spite of this, many Danes still see geographers as people "who know rivers and lakes etc. from maps". Others see certain geographers as natural or social scientists.

Geographical education in Denmark

General education became compulsory in Denmark from 1814 on. It aimed first of all to eradicate illiteracy from the population, which was successfully achieved at a remarkable speed. Geography was included in teaching, mainly to reinforce a "love of the fatherland" and to implant in the minds of students a general idea of what the world looked like. Even at the higher education level ("Latinskolen") that prepared for university, geography students had to concern themselves with learning by heart simple facts about nations and places. In 1850, pressurized by the growing interest in "realia", the Latin School was reformed to become the "Laerde Skole", but geography, which included information on Denmark, retained its character of a mass of facts to be learnt by heart.

From 1871, geography in a modernised form entered the schools of higher education, though only in one of the two options that could be chosen. Competent teachers were in short supply because a university degree in the subject could only be obtained after the establishment of the "Skoleembedseksamen" in 1883. […] in his "Précis de la Géographie Universelle". Geography won a respectable place in the curricula at all levels of teaching, including the Gymnasium and university. Secondary schools originally only had geography in one of their three options (mathematical-physical), but this was expanded to two hours per week during one of the three years. The geographical textbooks for the Gymnasium were written by university professors and were of outstanding quality, serving the newly-expressed aim of the school: to expand knowledge and increase understanding. The idea of the early stages was, no doubt, that geography was a necessary assembly of knowledge, an introduction to the natural sciences and, at the same time, an important element in general education. With minor modifications the system remained in use up to WWII and for some time afterwards. Modernization of the educational system in the middle of the XXth century was mainly aimed at teaching methods. Focus was moved from the individual to also encompass group work, and conventional texts were modified with selected themes for project work, unconnected to any prescribed text. Emphasis was placed on motivating the pupils, who often took an active part in selecting the issues for analysis. Good working habits were seen as more important than training in specific curricula. Geography teachers played an important part in developing these new ideas and implementing them in practice.

In most of the educational system, modifications were introduced at the end of the last century: for all disciplines there was a definition of what was compulsory and what was optional. In geography, most project work had been applied to human geographical themes; with the latest reform, human geography has been sharply reduced in the Gymnasium, with only physical geography retained.

Geography now occupies a modest place in the Danish educational system and the traditional "länderkunde" has been left out.
The volume of geography-related teaching has also been reduced. In Danish primary schools, a special topic of "Nature and Technology" has been introduced from the sixth grade, with two lessons per week in one year. Much of this encompasses geography in the classical sense: the productive use of natural resources.

At the upper secondary level (which has two options: "HF" and "Gymnasium") geography used to be taught for three hours per week during one year (compulsory) and four hours per week during one year (optional), along the classical lines with both physical and human geography. However, when the Gymnasium was recently reformed (May 2003), it was initially proposed to remove geography from the curriculum entirely. In the end, physical geography was retained, though it has to be chosen as one of three disciplines (chemistry, biology or geography). Geography is thus no longer a compulsory subject for all pupils at the Gymnasium level, having been so for about 150 years.

The background to the declining weight of geography in the curricula is no doubt a loss of trust in its practical usefulness: relatively few jobs are specifically for geographers. There also seems to be a certain disbelief in the necessity of geography as an element in the "normal mental tools of the educated". In modern times, most information can easily be extracted from the internet or from handbooks, and for mental training more demanding topics are available. As to creating a better understanding of the conditions for human life, geography is rivalled by many competing subjects, especially economics and sociology. However, the points of view cited are heavily debated. The recent reduction of geography in the Gymnasium raised sharp criticism, but the government stood fast: human geography was cut out, and physical geography only remained because it is a part of the natural sciences, and because of strong public resistance to its removal. For the time being, the new geography is at the planning stage. It is to be composed of physical geography and geology, aiming at providing pupils with knowledge and understanding of the Earth and the physical environment. It remains to be seen whether the social sciences, new in the Gymnasium, will absorb and develop human geography as part of their curricula. With regard to the primary school level, a very recent survey (September 2003) revealed that pupils' geographical knowledge was incredibly modest. Public demand for more training has been met with political promises of improvement in primary schools.

At the university level, geography is administered differently at the three universities with geography departments (Copenhagen (KU), Roskilde (RUC) and Aalborg (AUC)). The University of Aarhus (AU) has no geography department, but physical geography is taught at the Geological Institute. They all encompass both physical and human geography, but in different combinations. At KU and AU, physical geography is strongly represented, whereas it is weaker at RUC and AUC. In Copenhagen, geography and geology together form the "Geocenter Copenhagen", which thus incorporates human geography. Currently, both geographical teaching and research are carried out as usual, but some adaptations to the new situation at the Gymnasium level must be foreseen.
It is an open question whether the weaker position of geography in the Gymnasium would, in a longer perspective, also induce a splitting of the subject into two separate parts at the universities, one becoming part of the geosciences, the other of the social sciences. Amidst a time of overwhelming environmental problems that need an overview such as that supplied by geography, a splitting of the discipline into two separate units seems ill-advised.

The reasons why geography is part of the educational system have changed over time. In more recent times, the role of geography has steadily been expressed as one of improving international understanding and providing an understanding of "the Earth as a living place for people". In essence, this has been the content of various declarations on the purpose of geographical training as expressed by successive ministries.

Geography and the media

Danish geography as such does not attract much interest from the media. When geography curricula were recently reduced in primary schools, substituted by "Technology and Society", and in upper secondary schools, reduced to "Physical Geography", the media took little notice, aside from the fact that a flow of letters from readers was printed in various newspapers.

However, this disinterest does not relate to geographical topics in general. Geography has often been cited in public discussions, especially in connection with infrastructure development, such as the Öresund connection, both in the press and on television. The discipline gave responses on the effects of this connection on the economy, on traffic and on water quality in the Sound. In fact there was almost continuous reporting on the possible effects on the European urban system and the future ranking of the Öresund region in Europe. Similarly, problems of urban development have been dealt with by geographers in the press, as well as environmental problems: coastal protection, regeneration of wildlife sites in the Wadden Sea, climatic development, the carbon dioxide issue and global warming, etc.

On Danish television, geographical themes are often touched upon, mainly in the form of "reports from foreign lands". They take a multitude of forms, from news on current problems abroad, to ethnographical surveys, to simple advice on travelling possibilities in remote regions. Most of these reports are foreign-made and prepared without the cooperation of professional geographers. Some would undeniably have benefited from such support. Regretfully, no Danish parallel to the products of the "National Geographical Society" exists. If Danish geographers are consulted, it is usually on an individual basis.

A very similar situation can be seen with radio, which in Denmark is largely operated by the State in cooperation with television. Geography is also relatively weakly represented here, although perhaps a little more than on television.

The written media, mainly newspapers, often deal with geographical topics. Here, in-house newspaper staff are responsible, supplemented by their occasional counsellors. Articles by geographers are seldom presented, and it is far from being the rule that they are used as advisers. In fact, only two periodicals report on geography: "Geografisk Tidsskrift" (Danish Journal of Geography), published since 1877 on scientific progress, and "Geografisk Orientering", on topics of more general interest.
Besides the journals, publishing on geographical topics finds an outlet via the publications of the Royal Danish Geographical Society, notably in two series: "Atlas of Denmark" and "Folia Geographica".

Geography, economy and politics

In Denmark, teaching and research in geography have mainly been financed by the State; private funding is, however, important when it comes to publishing and supporting new initiatives such as expeditions. Teaching at all levels is publicly paid: primary schools via the Communes, upper secondary schools administered by the Counties, and all universities largely self-governing and state-funded. Some private schools exist at both primary and secondary levels, but not universities; they are, however, supported by public means, usually covering about 85% of expenses.

The state funding of research takes place through two channels: one is basic support to institutions, some of which are solely responsible for research, while the other (at universities) covers both teaching and research. In such institutions, teachers have an obligation to use about half of their paid time for research, and institutions can, via their funding, provide them with basic implements and support for research, usually to be reinforced by external sources. For more expensive projects, external funding must be applied for. Most often projects are supported by funding from the State Research Councils. These provide more than half of the means for geographical research, assisted by funds from the Council for Research in Developing Countries, other state institutions and ministries, and sometimes private funds (e.g. the Carlsberg Foundation).

Geography has over the years had a fair share of funding. It deserves mention that research in the Danish Wadden Sea, at the Skalling Laboratory, has received substantial means over the decades, as has glacial research in Greenland, at the Sermelik Station. The Skalling Laboratory was initially financed by the Carlsberg Foundation and has only relatively recently been state-funded. In recent years, private […]

Danish geographers are often used as consultants by ministries and other official bodies, especially those concerned with relations with developing countries and with environmental and planning problems relating to towns and infrastructure; similarly, geographers have seats on many steering committees.

Danish geographers have found employment in many branches of public administration and in research institutions, mainly those dealing with area analyses or planning. In recent years, expertise in the use of satellite imagery has opened many job opportunities for geographers. The private sector, e.g. technical consultancy firms, especially those working in developing countries, has employed both human and physical geographers.

Geographical organisations in Denmark

Geographers in Denmark are served by several organisations. Most of those teaching are members of the "Geografforbundet" ("Association of Geographers"). The journal of the association, "Geografisk Orientering", publishes articles on geographical themes, foremost those of current interest, such as topics for teaching and reports on current professional discussions; the association also organises meetings and training courses and offers various services to members, such as travel arrangements. The association has taken the initiative to publish an impressive collection of books, valuable for ensuring the steady improvement of geography teaching in schools.
A similar organisation, "Geografilaererforeningen for gymnasiet og HF", whose members are mainly teachers from upper secondary schools, looks after their professional interests. The organisation has had a special role in the discussion of curricula and in the organisation of additional training courses.

Det Kongelige Danske Geografiske Selskab (Royal Danish Geographical Society, RDGS) organises meetings, publishes a journal and a set of books on geographical topics, and supports expeditions. It has a very large library, about 4 km of shelves filled with books and several hundred geographical periodicals, and a map collection. Great importance is attached to the fact that the RDGS is an active forum for the discussion of geographical topics, with participants representing wide circles of Danish society.

It should be added that geographical connections and co-operation also have an international aspect. An example is the relation to Swedish geography, notably geography at the University of Lund. The inspiration provided by Lund has meant much for Danish geography, even to the point of drawing on Swedish resources for teaching and research in Denmark.

The Royal Danish Geographical Society was established in 1876 at the initiative of professor E. Erslev. On the merits of its purposes, and with the backing of vice-admiral Steen Bille and rear admiral O. Irminger, the society immediately received royal recognition. HM King Christian IX became protector and HRH Crown Prince Frederik became the first president of the society, creating a tradition of association with the Royal House that has been followed ever since. The purpose of the society, unchanged during more than 125 years of its existence, has remained "the promotion of knowledge of the Earth and its inhabitants" and "the spread of information on the geography of Denmark and of the work of Danish geographers". These aims are pursued through the publication of the Danish Geographical Journal (Geografisk Tidsskrift) and three series of occasional publications, "Kulturgeografiske Skrifter", "Folia Geographica Danica" and "Atlas of Denmark", and further through an annual series of meetings with lectures. The publications, and the ensuing exchange with foreign societies, have been the basis for assembling a relatively large library, which is open to the general public. Initially the attention of the society was concentrated on exploration, mainly in the Arctic regions, specifically Greenland. All expeditions were reported to the society, and an impressive array of renowned explorers were received at the society and honoured with its Gold Medal (Nordenskiöld, Scott, Amundsen, Shackleton, Wegener etc.). The society acquired an interesting position in Danish research on Greenland as the place where plans were presented and discussed and the results of projects reported. One of the effects was that the exploration came to be relatively well coordinated and effective, which in turn served to sustain Danish sovereignty in Greenland. The highlands of Central Asia were also the site of Danish expeditions, from Olufsen's early Pamir expeditions in the last decade of the 19th century to Haslund-Christensen's three expeditions to Central Asia in the 1930s-40s. Post WWII, the society has focused its attention largely on research in the geography of Denmark (including modern Greenland and the Faeroes).
A major development has been the Danish Wadden Sea research, which not only contributed much to understanding the genesis of the marshlands, but also assisted in the modernisation of physical geography as a discipline. Similarly, the meetings and publications of the society have helped introduce new ideas into human geography, which has had many new tasks to perform in relation to Danish assistance to developing countries. The society's major role in opening links to international geography and in maintaining connections with other geographical societies, to promote the flow of fertile new ideas into Danish geography and to report on Danish achievements, remains one of its most important duties.
A comparison of multiple shRNA expression methods for combinatorial RNAi

RNAi gene therapies for HIV-1 will likely need to employ multiple shRNAs to counter resistant strains. We evaluated 3 shRNA co-expression methods to determine their suitability for present use: multiple expression vectors, multiple expression cassettes and single transcripts comprised of several dsRNA units (aka domains), with each unit designed to a different target. Though the multiple vector strategy was effective with 2 shRNAs, the increasing number of vectors required is a major shortcoming. With single transcript configurations we only saw adequate activity from 1 of 10 variants tested, the variants being comprised of 2-3 different target domains. Whilst single transcript configurations have the most advantages on paper, these configurations cannot yet be rapidly and reliably re-configured for new targets. However, our multiple cassette combinations of 2, 3 and 4 (29 bp) shRNAs were all successful, with suitable activity maintained in all positions and net activities comparable to those of the corresponding single shRNAs. We conclude that the multiple cassette strategy is the most suitably developed for present use, as it is easy to design and assemble, is directly compatible with pre-existing shRNA and can be easily expanded.

Introduction

The recently discovered RNA interference (RNAi) pathway is a post-transcriptional gene silencing and regulation mechanism with potential application in the field of gene therapy. In mammalian cells RNAi begins with a double-stranded RNA inducer that is progressively processed from its termini by RNase III type endonucleases, firstly Drosha in the nucleus followed by Dicer in the cytoplasm, to yield a short interfering RNA (siRNA) duplex of ~22 bp [1,2]. The duplex is unwound and loaded into the RNA-induced silencing complex (RISC) in a process that favors one of the two strands (the guide strand) based on a difference in thermodynamic stability at the ends of the duplex [3]. The most common natural substrates for mammalian RNAi are microRNAs, short hairpin-like RNA transcripts implicated in regulating gene expression [1,2]. The RNAi pathway can be artificially engaged at any point in the process, typically either by delivering synthetic siRNAs to the RISC [4,5] or by expressing short hairpin RNAs (shRNA or hairpins) to be processed by Dicer and possibly Drosha [6,7]. shRNAs are well suited for use in current gene therapy plans. An shRNA consists of a short single-stranded RNA transcript that folds into a 'hairpin' configuration by virtue of self-complementary regions separated by a short 'loop' sequence. Whilst hairpins can be expressed from either polymerase (pol) III or, more recently, pol II promoters, it is the U6 and H1 pol III promoters that have been most extensively employed, owing in part to their relatively well-defined transcription start and end points [6,7]. Importantly, pol III based hairpin expression cassettes have been incorporated into viral vectors which have been stably integrated both in culture and in whole animals, with effective silencing maintained over time [8][9][10]. The potency of individual shRNAs directed to HIV has been extensively demonstrated [11][12][13]; however, several studies have also shown that single shRNAs can be rapidly overcome by the emergence of escape mutants [14][15][16][17]. Modeling shows that perhaps as few as 4 shRNAs used in combination may be sufficient to prevent the emergence of escape mutants [18][19][20][21][22].
This idea is supported by several wet-lab studies showing that, under laboratory conditions, HIV-1 escape can be delayed by using more than one shRNA [11,[23][24][25][26]. In a clever variation on this idea, some have designed anticipatory shRNAs specifically to block known escape routes, though when tested it was found that the virus still evolved around these [27,28]. There is now clearly a need for an evaluation of multiple shRNA expression strategies to identify those that can be readily integrated into current anti-HIV gene therapy research programs. One method for expressing multiple shRNAs is to use separate expression vectors encoding individual shRNAs. Multiple shRNAs have been used successfully against cellular, viral and exogenous gene targets by the use of either multiple plasmid [29], retrovirus [30] or lentivirus vectors [9]. Multiple shRNAs can also be combined into a single expression vector via several self-contained expression cassettes (e.g. 1 cassette = promoter, shRNA and terminator), of which there are now many examples [11,[31][32][33][34]. Alternatively, multiple shRNA domains can be combined in a single transcript, of which there are two base configurations: distinct hairpin domains joined 3' to 5' in what we call a 'cluster' (CL) configuration, and a 'head-to-tail' (HT) configuration in which all the sense stem regions are joined first, followed by a single loop and then all the anti-sense regions [17,[35][36][37][38][39]. This second configuration appears as a long hairpin which, depending on the design, may be punctuated by unpaired spacer regions between the hairpin domains. Single transcript strategies are the most compact, and in this respect the most desirable means of co-expressing multiple shRNAs for gene therapy, but with few examples and no design guidelines yet reported, their general ease-of-use is unclear. The aim of this study was to evaluate the 3 different shRNA co-expression methods to determine their suitability for present use in gene therapy schemes, with a key focus on ease of construction, applicability to new sequences, and the retention of suppressive activity in the component shRNAs. We assembled combinations of 2 to 4 shRNAs using 3 different strategies: one using multiple expression vectors, one using multiple hairpin expression cassettes and the other based on a single transcript comprised of different hairpin domains. While we were able to achieve successful suppression with each strategy, we concluded that the multiple cassette strategy is currently the most useful due to its ease of design and assembly, and its immediate compatibility with pre-existing shRNAs already selected for high activity.

Co-expression using multiple vectors

The effects of co-expressing hairpins of different sequence were first examined in the simplest form: using two hairpins, each expressed from a separate plasmid vector (pS) derived from pSilencer 3.0-H1 (Ambion). In this arrangement the suppressive activities of each hairpin were measured when separately expressed, and then again when co-expressed. All hairpins in this study were expressed from the human H1 polymerase III promoter. This experiment employed two 29 bp shRNAs against the HIV-1 genes encoding Tat and Vif: Tat 56-29 (T) and Vif 88-29 (V) (variant NL4-3, accession #AF324493) (Table 1). Suppressive activity was measured by flow cytometry as a reduction in fluorescence of an appropriate reporter after transfection and transient expression of both hairpin(s) and reporter(s) in HEK293a cells.
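As an indication of how such flow cytometry readouts translate into a suppression value, the sketch below computes percent suppression from mean reporter fluorescence relative to the matched empty-cassette control; the MFI numbers are invented for illustration and do not correspond to the measured data in this study.

```python
# Minimal sketch: percent suppression from flow cytometry, computed as
# the reduction in mean reporter fluorescence (MFI) of the shRNA sample
# relative to the matched empty-cassette control. MFI values below are
# invented placeholders, not measured data.
def percent_suppression(mfi_shrna: float, mfi_control: float) -> float:
    return 100.0 * (1.0 - mfi_shrna / mfi_control)

samples = {
    # sample: (reporter MFI with shRNA vector, MFI with empty control)
    "T vs GFPsTat":    (1200.0, 9800.0),
    "V vs AsRed1sVif": (1500.0, 8700.0),
}

for name, (mfi_s, mfi_c) in samples.items():
    print(f"{name}: {percent_suppression(mfi_s, mfi_c):.1f}% suppression")
```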
In this and all subsequent experiments the activity of each shRNA vector was measured relative to the activity of the appropriate control vector containing an equivalent number of 'empty' (e) expression cassettes (i.e. consisting of a promoter(s) plus terminator but expressing no hairpin(s)). Either the two hairpin vectors, or each hairpin vector and the equivalent control vector, were co-transfected at different ratios (Figure 1). The total amount of DNA delivered for each transfection was kept constant, and the shRNA expression vector was always present at a level at which the RNAi process would presumably be saturated, so that the effects of competition, if any, would be evident [40]. As each hairpin was directed to a different target, the suppressive activity was measured using two unique reporters (GFPsTat and AsRed1sVif), which could be detected both simultaneously and independently. The specific activity of each hairpin vector was unaffected in the presence of empty expression vector at all ratios. However, the specific activity of each hairpin when co-expressed was progressively reduced at ratios that increasingly favored the competing hairpin. We surmise that the reduction in specific activity was not due to a second vector (or promoter), but rather to inter-hairpin competition for access to the RNAi machinery. In summary, co-expressed hairpins delivered via separate vectors can function simultaneously, but do so at reduced levels due to competition for access to the RNAi machinery.

Co-expression using multiple-cassette vectors: establishing positional effects

An alternative strategy to overcome the obvious limitations of using multiple vectors to co-express hairpins (e.g. issues of vector multiplicity) was to incorporate multiple hairpin expression cassettes within a single vector. However, to address concerns of potential promoter or transcriptional interference [41], it was important to first determine whether each individual cassette position in a multiple cassette vector was capable of expressing a hairpin with equivalent suppressive activity. At this point we switched to using a pLenti6-derived vector backbone (pL) (Invitrogen) so that we could later test our constructs in stably transduced scenarios. We assembled a series of 1, 2, 3 and 4 cassette vectors containing only a single hairpin (T) expression cassette per vector, which was placed at each different position. The surrounding positions consisted of 'empty' expression cassettes; e.g. the 3 cassette vectors included 3.1: T+e+e, 3.2: e+T+e, and 3.3: e+e+T. There was ~130 bp of spacer sequence between each cassette, as measured from the terminator of one cassette (n) to the promoter of the next downstream cassette (n+1). Control vectors were also created that were of corresponding sizes and were composed of an equivalent number of all-empty expression cassettes (2 to 4). Suppressive activity was first measured by flow cytometry as a reduction in fluorescence after transfection and transient expression from both hairpin and reporter vectors (Figure 2a). There was no apparent reduction in activity at any cassette position, in either the 2, 3 or 4 cassette vectors, with each cassette position retaining full hairpin activity equivalent to that of the single-position, single-hairpin cassette vector.
Co-expression using multiple-cassette vectors: multiplication of a single hairpin

pLenti6-based vectors with 2, 3 and 4 cassettes were constructed with an identical hairpin (T) expression cassette placed in all positions to investigate whether increasing the cassette number could increase the suppressive activity (e.g. 2×: T+T, 3×: T+T+T, and 4×: T+T+T+T). Suppressive activity was measured across a range of vector amounts (from 400-0 ng) so that the effects of increasing cassette number could be investigated under both RNAi-saturating and sub-saturating conditions (Figure 2b). The total amount of DNA delivered for each transfection was kept constant by supplementing each reaction with the appropriate amount of the corresponding control vector, whilst keeping the amount of reporter vector constant. There were no differences in the suppressive activities from 400 to ~100 ng of each vector delivered, which supports the hypothesis that the RNAi process was saturated across this range. However, at 50-10 ng of vector(s) there were statistically significant improvements in suppressive activity (*) with increasing cassette numbers (P < 0.05, comparing the single cassette vector to the 2, 3 and 4 cassette vectors). The trend did not extend below this concentration range, as effective suppressive activity was lost, and with it any meaningful difference between the different numbers of cassettes. We further examined whether multiplying an identical expression cassette would be beneficial in stably transduced cell lines. Infectious virus was generated from each of the 1, 2, 3 and 4 (T) cassette vectors, along with the […]

[Figure 2 caption fragments: (*) There was a statistically significant reduction in the individual suppressive activities of each hairpin relative to their single counterparts (P < 0.01) (n = 1). (f) Net suppressive activities of the same vectors were measured using a HIV-1 production assay by transfecting HEK293a cells with 110-130 ng of shRNA vector with 800 ng of pNL4-3 reporter and measuring the impact on p24 production (n = 2).]

[…] activity from the vector backbone, promoters or irrelevant hairpins. However, the suppression levels from the Tat-specific shRNA cell lines were reduced by ~30-40% relative to the maximum levels observed during transient expression (cf. Figure 2b), which was not unexpected and most likely due to low-copy-number integration with a corresponding reduction in shRNA expression [8,18,42]. In this stable expression system there were no statistically significant differences in suppressive activity between the 2, 3 and 4 cassette cell lines.

Co-expression using multiple-cassette vectors: diversifying hairpin targets

The multiple cassette strategy was further investigated by combining four different shRNAs in a single vector using the same pL base vectors and cassette configurations as before. In addition to Tat 56-29 and Vif 88-29, two further (HIV-1) 29 bp shRNAs were used: Vpr 72-29 (R) and Vpu 158-29 (U). Each shRNA was placed in both single cassette vectors and combination vectors of 2 (T+V), 3 (T+V+R) and 4 (T+V+R+U) cassettes. All vectors were assayed in turn with each of the relevant reporter vectors, enabling the specific activity of each shRNA to be measured independently (Figure 2e). Each individual hairpin exhibited potent and specific activity against its matched assay vector(s).
There was, however, a progressive and statistically significant reduction in the individual suppressive activity of each hairpin when expressed in combinations of increasing number, relative to the corresponding individual shRNA (P < 0.01), the exception being Vpu 158-29, which appeared unaffected. The activity of each vector was further examined using HIV-1 as the target and p24 production (a capsid protein) as the readout (Figure 2f). Whilst each hairpin was of a different sequence, their activities were now measured via a common endpoint (p24 production) and thus each could be considered as being directed towards a "common or shared target". The net suppressive activity of each multiple hairpin vector was similar to that of the corresponding individual hairpin vectors (all within 5% of each other). We concluded that whilst hairpins compete with each other for access to RNAi machinery, a net suppressive activity can be maintained from multiple simultaneously acting shRNAs.

Co-expression using single transcript arrays of hairpin domains

The last co-expression strategy tested was the single transcript strategy, comprising several shRNA domains (or technically just dsRNA domains, depending on design). There are two basic configurations, the 'cluster' (CL) configuration and the 'head-to-tail' (HT) configuration. Using pSilencer-based vectors, we constructed two cluster arrays of three 29 bp hairpin domains (R_V_T1 and R_V_T8) with either 1 or 8 nt spacers separating each domain (Table 2), and tested them with our fluorescent reporters (Figure 3a). Despite reported success from others [43,44], we only saw good activity in the first domain, with poor activity in the remaining two. Following this we created a head-to-tail configuration comprised of two 29 bp hairpin domains (V-T) separated by an 8 nt spacer. Suppressive activity was measured for both domains simultaneously by using dual reporters (GFPsTat and AsRed1sVif), which showed that each domain was simultaneously active (Figure 3b). We also show that this result was target-specific, since a second control molecule similarly constructed from two 29 bp off-target hairpin domains (O'-O") showed no suppressive activity (Figure 3c). We further transferred the V-T head-to-tail configuration into our pLenti6-based plasmid and assayed it using HIV-1 (i.e. a shared target) (Figure 3d). The net suppressive activity was comparable to the activities of the corresponding individual shRNAs and the combined activities of both individual shRNAs when delivered via the equivalent multiple cassette strategy. Encouraged by this finding, we assembled and tested several more head-to-tail configurations, comprising the same domains but in a different order (T-V), different domains (U-R and R-U), more domains (R-V-T and T-V-R), no spacers (V-T0), and shorter 19 bp domains (V19-T19). However, in no case did we achieve a similarly successful outcome to our original V-T molecule (Figure 3e-h). While some domains were active, others were not, and no configuration retained comparable activity to the corresponding component shRNAs for all domains. We conclude that while two hairpins combined in a head-to-tail configuration can be successful, reliably obtaining an active molecule requires a more detailed design (than simply connecting pre-existing hairpins) that will come from a better understanding of how these configurations are processed.
Discussion

In this study we tested 3 different strategies for the simultaneous expression of multiple hairpins and showed that all could be effective. However, the multiple vector strategy is likely to be of limited use in gene therapy since it requires a unique vector per shRNA, with potential issues in ensuring that each cell receives all vectors (a failure that may facilitate the emergence of resistant strains). The single transcript strategy was effective in one instance, but since similar success was not reproduced with different domains (or configurations), it is also of limited use in its present form. However, the multiple expression cassette strategy was used successfully with up to 4 shRNAs, and was easy to assemble and expand with pre-selected shRNAs. When hairpins were co-expressed at levels that saturated the RNAi process, we found that the individual suppressive activity of each hairpin was progressively reduced with increasing numbers of hairpins co-expressed. This was equally applicable to all three hairpin co-expression strategies: multiple vectors, multiple cassettes, and multiple domains. Hairpin competition was evident in all cases except one, where the activity of the Vpu 158-29 hairpin was little affected when transiently expressed with 3 other hairpins from a 4 cassette vector. Although the reasons for this are unknown to us, we note that this hairpin was very active, and so it may be that even as it competed with 3 other shRNAs its suppressive impact was unaffected. Overall, we surmise that hairpins interact competitively for access to the RNAi machinery. Whilst this conclusion is supported by some [45][46][47], it is worth noting that there are also conflicting conclusions, where others report no evidence of shRNA/siRNA competition [29,30,48,49]. The reason for this disparity may relate to differences in experimental design, such as expression levels and observations under sub-saturating conditions. It should also be noted that issues of shRNA competition in mammalian cells encompass endogenous RNAi substrates as well, e.g. microRNA. Expression levels in a clinical setting may need to be finely tuned to attain sufficient activity with minimal impact [2,[50][51][52][53]. Another idea for removing competition may be to employ multiple agents of different modalities (e.g. RNAi, aptamers and ribozymes) so that no single pathway is overwhelmed [54,55]. When each hairpin of different sequence was directed to a common target (i.e. the complete HIV-1 sequence rather than individual gene-fusions), we saw that the net suppressive activity was approximately equivalent to the average activity of the component hairpins. This suggests that hairpin diversity may be increased whilst maintaining overall suppressive activity. This could potentially be exploited for countering the emergence of viral escape mutants, in line with other studies [27], though it requires further work for verification. Moreover, we did not test the effect on net activity of using one or more hairpins that were poor, or completely inactive (as all our hairpins here were classed as highly active). Such a situation could conceivably arise in a clinical setting due to a virus developing a mutation in one of the target sites. We speculate that the net suppressive activity would be reduced, though our mathematical modeling of various infection scenarios indicates that some loss of shRNA efficacy can be tolerated without impacting on treatment success [21,22].
Our data show that up to 4 repeats of the same shRNA can increase the net suppressive activity when transiently expressed at levels below that which results in maximal suppressive activity. Interestingly, the same effect was not seen in the corresponding stably transduced cell lines. The reasons for this are unclear, but could result from promoter interference [41] or transcriptional silencing [56]. Other studies have shown that promoter interference is not necessarily a barrier to multiple shRNA cassette strategies [31,32,54]. One study has shown that up to 6 identical expression cassettes could increase total expression and suppressive activity during both transient and stable expression, though in this case more than six cassettes proved deleterious in stable expression, decreasing the net suppressive activity [32]. Repeat-mediated cassette deletion is also a concern, as we and others have since shown that it commonly occurs during infection [21,51], possibly via reverse transcriptase slipping on its template [57], though again, our modeling suggests that the practical impact of this in a gene therapy setting may be low [21]. Finally, the reduction in titre that we observed, whilst inconsequential here, has also been noted by others [42,[58][59][60] and is an issue that may have to be addressed prior to scaled-up manufacturing. Our attempts to generate a cluster of 3 shRNA domains in a single transcript with activity retained equally in each domain were unsuccessful. However, due to the highly structured templates, we were unable to confirm the sequence of these templates with automated sequencing procedures. Thus we cannot rule out the possibility of single nucleotide errors in these configurations. This is another issue that needs to be overcome, though Liu et al. have reported success by incorporating several G:U wobbles [61]. Head-to-tail configurations can be sequenced, though (using a modified protocol [62]), and in this respect may be a more attractive choice. However, since these are akin to 'long' hairpins, they may in turn induce non-specific dsRNA-response pathways such as protein kinase R (PKR) or interferon (IFN) [63,64], though recent work suggests otherwise [61]. Even though our designs incorporated spacers (of 1 or 8 nt) to keep all regions of paired double-stranded RNA less than 30 bp (the minimal length traditionally thought to activate non-specific pathways), there are reports that some structures outside of this traditional view (e.g. < 30 bp) may also stimulate these responses [65][66][67][68]. At this point, though, the principal limitation of these configurations is in not knowing how they are processed, and consequently how they should be designed to reliably retain activity in all domains. Based on our understanding of single shRNA processing, where a single-nt shift in the start of the shRNA stem can significantly alter activity, we speculate that the processed products from our multiple domain constructs are simply different from those liberated from the 'corresponding' single shRNAs. Liu et al. and others are making inroads in this area [61,69], which will be well complemented by future deep-sequencing type studies. It is interesting to note that others who have looked at similar structures have been similarly unable to produce effective silencing from more than 3 domains [61].
As a workaround, one group has stacked several 2-domain structures, which could then be further used in combination with a multiple cassette type arrangement to increase targeting capacity [39]. In summary, we found that while all 3 co-expression strategies tested were effective, the multiple cassette strategy is the most useful method for immediate use in gene therapy. This is because it was easy to design and assemble, and is directly compatible with pre-existing shRNAs already selected for high activity. It is worth noting that a similar study was published during the preparation of this manuscript, with similar conclusions, thus strengthening the validity of our findings [70]. Furthermore, we have since applied the multiple cassette strategy in several additional studies, including the development of a repeating modular cloning method (tested with up to 11 shRNAs), the assembly of combinations of up to 7 shRNAs to target entire subtypes of HIV-1, and a large-scale study of repeat-mediated deletion of 1 or more cassettes [21,71].

shRNA design and vector construction

Each shRNA was designed so that the sense or upper strand of the shRNA stem was homologous to the target (designed to give rise to the siRNA passenger strand) and the anti-sense or lower strand of the shRNA stem was complementary to the target (designed to give rise to the siRNA guide strand) (Table 1). Sense and antisense strands were connected by an 8 or 9 nt loop, and all hairpins were expressed from a human H1 polymerase III (pol III) promoter with transcription presumably terminating at a run of 4 or more 'T' residues in the included termination signal (TTTTTTGGA). Each shRNA insert was constructed using either annealed complementary oligonucleotides (oligos) or primer extension [62] to create a synthetic DNA insert that was cloned into a pSilencer 3.0-H1 derived vector (Ambion). The pSilencer derivative was generated by replacing the bla gene (ampicillin resistance) with the neo gene (kanamycin/G418 resistance). Single cassette pLenti6-based vectors were created by sub-cloning entire shRNA expression cassettes from the pSilencer-based vectors into a derivative of pLenti6/V5-D-TOPO (Invitrogen). The pLenti6 derivative was generated by exchanging the CMV promoter, V5 epitope and SV40 terminator region for a multiple cloning site (MCS) to facilitate the unique insertion of 1 to 4 self-contained (i.e. consisting of promoter, hairpin and terminator) shRNA expression cassettes. Multiple cassette pLenti6-based vectors were created by PCR amplification of the desired cassette(s) from the corresponding pSilencer-based vector(s), which were then progressively inserted into the appropriate pLenti6-based vector downstream of any previous cassettes, with a gap of ~130 bp separating each cassette. pSilencer-based vectors were propagated in GT116 E. coli cells (a cell line specifically developed for the replication of hairpin-containing vectors, Invivogen) and pLenti6-based vectors were propagated in Stbl3 E. coli cells (manufacturer-recommended cell line, Invitrogen). DNA was extracted (Hi-speed Maxi-prep Kit, Qiagen), quantitated in triplicate and sequence-confirmed either by standard protocols or by a modified protocol where required to enable automated sequencing of hairpin expression vectors possessing reaction-inhibiting secondary structure [62] (excluding the multiple hairpin 'cluster' configurations, as indicated in the text).
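Before the assay descriptions, it may help to see the insert design rule above (sense strand homologous to the target, an 8-9 nt loop, the antisense strand, then the pol III termination signal) spelled out programmatically. The short Python sketch below is our own illustration of that rule, not the authors' cloning code; the example target sequence and the loop sequence are placeholders, not sequences from the study.

# Minimal sketch of the shRNA insert layout described above.
# The example target and loop sequences are placeholders, not study sequences.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def shrna_insert(target: str, loop: str = "TTCAAGAGA",
                 terminator: str = "TTTTTTGGA") -> str:
    """Sense strand (homologous to target, future passenger strand) + loop
    + antisense strand (complementary to target, future guide strand)
    + pol III termination signal."""
    sense = target
    antisense = reverse_complement(target)
    return sense + loop + antisense + terminator

# Hypothetical 29 nt target region (placeholder only):
target_29 = "GATCCTAGCAAGGCAACTGTTACCAGTCA"
print(shrna_insert(target_29))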
Fluorescence-based shRNA activity assay

Human Embryonic Kidney cells (HEK293a, sourced from the American Type Culture Collection) were seeded at a density of 4-5 × 10⁵ cells/well in 6-well plates in 2 ml of Dulbecco's modified Eagle medium plus 10% fetal bovine serum (DMEM-10). Cells were transfected 1 day later using 1 μg of total DNA (comprised of different amounts of shRNA and/or 1 or more reporter vectors, as indicated in each figure) with 4 μl of Lipofectamine 2000 (Invitrogen) in OptiMEM (Invitrogen) to a total volume of 100 μl/well. Cells were analyzed by flow cytometry 2 days later (using either a FACSort or FACSCalibur instrument, BD Bioscience). The suppressive activity of each shRNA was measured as a change in fluorescence of the reporter(s) (FL1 for 'green' proteins and FL2 for 'red' proteins). The Fluorescence Index (FI) of cells in each channel was calculated by multiplying the geometric mean of fluorescence by the percentage of cells that were fluorescent (only those cells gated above background). The FI was expressed as a percentage of the FI of cells transfected only with the corresponding empty expression control vector that expresses no hairpin. Target-specific shRNA activities were normalized to account for non-specific effects measured using an additional 'green' or 'red' off-target reporter to which the shRNA bore no homology, except for cases where the simultaneous activities of 2 shRNAs were measured using a 'green' reporter for one shRNA and a 'red' reporter for the other. Most assays included a 29 bp off-target control shRNA (O'), which displayed no meaningful suppressive activity against any reporter and thus was omitted from the graphs for clarity.

Lentivirus production and infection

293FT cells (Invitrogen) were seeded at a density of 5 × 10⁶ cells/plate (100 mm plates; 10 ml DMEM-10) and were transfected 1 day later using 12 μg of total DNA (comprised of 3 μg pLenti6-based hairpin vector and 9 μg packaging vectors, Invitrogen) with 36 μl of Lipofectamine 2000 in OptiMEM to a total volume of 8 ml/plate. Virus-containing medium (VCM) was harvested at 2-3 days post-transfection, cold-spun at 3000 rpm for 15 min and stored at -80°C. Viral titres were calculated using HEK293a cells seeded at 1 × 10⁵ cells/well (6-well plates; 2 ml DMEM-10), which were infected with serial dilutions of VCMs ranging from 10⁻¹ to 10⁻⁶, supplemented with 6 μg/ml of polybrene (hexadimethrine bromide, Sigma). Selective medium (DMEM-10 plus 10 μg/ml Blasticidin, Invitrogen) was applied to infected cells 2 days later and maintained for 10-14 days prior to Giemsa staining (Merck) and quantification of colony numbers, with titres calculated as infectious viral particles (IVF)/ml of VCM. Stably transduced cell lines were generated using HEK293a cells seeded at a density of 4 × 10⁵ cells/well (6-well plates; 2 ml DMEM-10), which were infected 1 day later with 2 ml of VCM at an average MOI of ~0.4. Selective medium (DMEM-10 plus 10 μg/ml Blasticidin) was applied to infected cells 4 days later and maintained for at least 14 days.

HIV-1 production assay

HEK293a cells were seeded at a density of 2 × 10⁵ cells/well (12-well plates; 1 ml of DMEM-10).
Cells were transfected 1 day later using 110-130 ng of hairpin expression vector (at equimolar amounts across transfections) and 800 ng (3× the molar amount of expression vector) of pNL4-3 reporter vector (expressing the NL4-3 strain of HIV-1) with Lipofectamine 2000 at a ratio of 1:4 (μg DNA : μl Lipofectamine) in OptiMEM to a total volume of 200 μl/well. Medium was replaced with an equal volume 1 day post-transfection and the cells were harvested a further 1 day later by centrifugation at 400 × g for 10 min at room temperature. Samples were stored at -20°C until assayed for p24 levels (a capsid protein required for HIV-1 virion production) via Enzyme-Linked Immunosorbent Assay (ELISA) using the INNOTEST HIV antigen mAb kit (Innogenetics). The suppressive activity of each shRNA was measured as a reduction in, and expressed as a percentage of, p24 production (measured as pg/ml) relative to p24 production from cells transfected with the corresponding empty expression control vector.

Statistical analysis

Each sample was analyzed in triplicate with 95% confidence intervals (CI) calculated using Microsoft Excel X. P values were determined by analysis of variance (ANOVA, with Bonferroni's multiple comparison test) using Prism 4.0a.
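The Fluorescence Index bookkeeping described in the assay section reduces to a few lines of arithmetic. The Python sketch below is a minimal illustration of it, assuming per-sample flow-cytometry summaries (geometric mean and percent of gated fluorescent cells) are already available; the numbers and function names are ours, invented purely for illustration.

def fluorescence_index(geo_mean: float, pct_fluorescent: float) -> float:
    """FI = geometric mean of fluorescence x fraction of gated fluorescent cells."""
    return geo_mean * (pct_fluorescent / 100.0)

def relative_fi(sample_fi: float, control_fi: float) -> float:
    """Express FI as a percentage of the empty-expression-vector control."""
    return 100.0 * sample_fi / control_fi

# Invented example values, for illustration only.
control = fluorescence_index(geo_mean=850.0, pct_fluorescent=62.0)
on_target = fluorescence_index(geo_mean=120.0, pct_fluorescent=38.0)
off_target = fluorescence_index(geo_mean=790.0, pct_fluorescent=60.0)

raw_activity = relative_fi(on_target, control)    # reporter knockdown
nonspecific = relative_fi(off_target, control)    # non-specific effect
normalized = 100.0 * raw_activity / nonspecific   # target-specific activity
print(f"reporter expression: {normalized:.1f}% of control")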
2018-04-03T00:17:10.172Z
2011-04-17T00:00:00.000
{ "year": 2011, "sha1": "faec1bb4b2d06b1ffdd680f2a4d9a0874dd4d949", "oa_license": "CCBY", "oa_url": "https://gvt-journal.biomedcentral.com/counter/pdf/10.1186/1479-0556-9-9", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "67bbf55eb8a4d967626a8a41b7818aa1a3a13a5b", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [] }
57367913
pes2o/s2orc
v3-fos-license
Bilateral Presentation of Pleural Desmoplastic Small Round Cell Tumors: A Case Report

Desmoplastic small round cell tumor (DSRCT) is a highly aggressive malignant small cell neoplasm occurring mainly in the abdominal cavity, but it is extremely rare in the pleura. In this case, a 15-year-old male presented with a 1-month history of left chest pain. Chest radiographs revealed pleural thickening in the left hemithorax and chest computed tomography showed multifocal pleural thickening with enhancement in both hemithoraces. A needle biopsy of the left pleural lesion was performed and the final diagnosis was DSRCT of the pleura. We report this unusual case arising from the pleura bilaterally. The pleural involvement of this tumor supports the hypothesis that it typically occurs in mesothelial-lined surfaces.

INTRODUCTION

Desmoplastic small round cell tumor (DSRCT) is a highly aggressive malignant small cell neoplasm that tends to affect adolescents and young adults and occurs predominantly in the abdominal cavity, including the pelvis and omentum (1, 2). Other primary sites are rare and have included the paratesticular region, pleura, posterior cranial fossa, soft tissues, bone, ovary, and kidney (2-4). DSRCT in the lung is extremely rare. This distinct clinicopathological entity was first described by Gerald and Rosai (1) in 1989. A DSRCT is composed of small round tumor cells of uncertain histogenesis, associated with prominent stromal desmoplasia and polyphenotypic differentiation. This article reports bilateral tumors in the pleura of a 15-year-old male.

CASE REPORT

A 15-year-old male presented with a 1-month history of sharp pain in the left lower chest, which occasionally woke him. He was a nonsmoker, as were his parents. He had no serious medical or surgical history and he had grown up in an ordinary residential and social environment. He had no shortness of breath or cough. There was no history of weight loss, fever, or night sweats. Chest radiographs showed a soft-tissue mass in the left mid hemithorax (Fig. 1A). The mass was at an obtuse angle to the chest wall. The long diameter of the mass was approximately 9 cm. There was no other abnormality in the lung parenchyma or bony thorax. In the left down decubitus view, the mass did not shift position or change contour (Fig. 1B). Chest computed tomography (CT) revealed multifocal nodular pleural thickening in the lateral, posterior aspect of the left hemithorax and the posterior aspect of the right lower hemithorax (Fig. 1C-E). In the pre-contrast image, the pleural lesion showed homogeneous soft tissue attenuation and there was no calcification or pleural effusion. After contrast enhancement, the thickened pleura generally showed homogeneous enhancement, with attenuation similar to that of the back muscles, and a subtle low density was seen in the lesion in the lateral left hemithorax. The mediastinal and hilar lymph nodes were not enlarged. In the scanned portion of the abdomen, there was no mass, lymphadenopathy, or ascites. From these imaging findings, we suspected fibromatosis of the pleura and a localized fibrous tumor because the lesion appeared as a well-defined, relatively homogeneous soft tissue mass. A needle biopsy of the left pleural lesion was performed (Fig. 1F). Immunohistochemically, the tumor cells were positive for CD56, desmin, synaptophysin, and vimentin, and negative for cytokeratin, epithelial membrane antigen, neuron specific enolase, and thyroid transcription factor-1 (Fig. 1G, H).

DISCUSSION

Malignant non-Hodgkin's lymphoma is also similar to DSRCT. On imaging, pleural nodules with focal or diffuse pleural thickening with homogeneous enhancement accompanied by pleural effusion can be seen (8). However, it often has associated mediastinal and hilar lymphadenopathy, unlike DSRCT (8). In our case, there was no lymphadenopathy and the imaging findings resembled DSRCT more than malignant non-Hodgkin's lymphoma. Most neuroblastomas occur before 5 years of age, and characteristically present with invasion through the neural foramina, giving a dumbbell appearance due to their origin from sympathetic nervous tissue (7). In our case, the patient was older than most neuroblastoma patients and the tumor did not have a dumbbell shape. The prognosis of DSRCT remains poor and the 5-year survival rate is less than 15% (5). There is no recommended treatment for DSRCT. Aggressive anticancer treatment is thought to contribute to relatively long-term survival (4). Aggressive surgery can be used to reduce the tumor size before or after chemotherapy, and complete surgical excision seems to improve survival (2). Radiation therapy can be used to treat DSRCT (2). Despite the poor overall prognosis, there are some cases of long-term survival, and they commonly involve early aggressive anticancer treatment (4). There are some new approaches to treating DSRCT, such as molecular targeted therapy and immunotherapy (9, 10). In summary, DSRCT is a rare, highly aggressive malignancy that typically involves the abdominal cavity, but is rare in the pleura. The imaging findings of DSRCT in the pleura are nonspecific and it is difficult to suspect DSRCT. Various treatments have been attempted for DSRCT, but the prognosis is quite poor despite aggressive combination therapy. Nevertheless, there are some reports of long-term survival and they are common with
2019-01-23T16:04:46.770Z
2015-04-01T00:00:00.000
{ "year": 2015, "sha1": "86769a026576ad129b2ade99095695c8941a930c", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3348/jksr.2015.72.4.295", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "825eb9835389cf048f70b729cf1be08bf8fcdc8f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
263229144
pes2o/s2orc
v3-fos-license
Does gamma-glutamyltransferase correlate with liver tumor burden in neuroendocrine tumors?

Purpose: In patients with neuroendocrine tumors (NETs) and liver metastases, increased gamma-glutamyltransferase (GGT) is commonly assumed to be an indicator of progressive disease. To date, however, empirical data are lacking. This study aimed to investigate associations between GGT and liver tumor burden. In longitudinal analyses, associations of GGT with radiographic responses of liver metastases under therapy were investigated.

Methods: The cross-sectional sample consisted of 104 patients who were treated at the University Medical Center Hamburg-Eppendorf from 2008 to 2021 (mean age 62.3 ± 12.6 years, 58.7% male). GGT and liver imaging were identified within a time range of 3 months. Radiologic reassessments were performed to estimate liver tumor burden. In a separate longitudinal sample (n = 15), the course of GGT levels under chemotherapy was analyzed. Data were retrospectively analyzed with a univariate ANOVA, linear regression analyses, and Wilcoxon tests.

Results: Of 104 cross-sectionally analyzed patients, 54 (51.9%) showed a GGT elevation. GGT levels and liver tumor burden were positively correlated (p < 0.001), independently of age, gender, primary tumor location, grading, and cholestasis. Notably, GGT increase was associated with a liver tumor burden of >50%. In the longitudinal sample, 10 of 11 patients with progressive disease showed increasing GGT, whereas 4 of 4 patients with regressive disease showed declining GGT.

Conclusion: Our findings indicate that GGT is associated with liver tumor burden. Over the course of therapy, GGT appears to change in line with radiographic responses. Further longitudinal studies with larger sample sizes are required to define GGT as a reliable marker for tumor response.

Introduction

In metastatic neuroendocrine tumors (NETs), liver metastases are present in 82% of patients [1]. In addition to surgical therapy, systemic treatment is often required to maintain quality of life and prevent unhindered progression of the tumor disease [2]. Over the course of systemic treatment, response is usually monitored by radiographic imaging. Since regular scans are necessary, this approach strains healthcare resources and is often responsible for high radiation exposure to the patient. Laboratory markers could provide an additional tool for therapy monitoring, reducing the frequency of imaging.
Gamma-glutamyltransferase (GGT) is a membrane-bound glycoprotein and a key enzyme of the gamma-glutamyl cycle. It is required for the transport of amino acids across the membrane and in particular for the provision of glutathione, one of the most important antioxidants of the human body [3]. It is found mainly in epithelia with high secretory or absorptive functions, such as renal tubules, bile ducts, liver, pancreas, and intestine [3]. It has been used as a laboratory marker for more than 50 years and is considered to be one of the most sensitive biomarkers for liver conditions in general [4]. Serum GGT is associated with increased oxidative stress [5]. Typical clinical conditions in which GGT is elevated are alcohol consumption, cholestasis, or drug intake. However, in patients with malignancies, elevated GGT can also be a sign of advanced disease [6,7]. It is associated with poor prognosis in patients with hepatocellular carcinoma [8], renal cell carcinoma [9], ovarian [6], and endometrial cancer [10]. In clinical follow-up of NETs, increased GGT levels are especially seen in patients with liver metastases. To the best of our knowledge, there are no published data on the prevalence of increased GGT levels in patients with NETs and possible associations with liver tumor burden.

To exemplify the clinical association between GGT and liver tumor burden, we report the case of a patient with pancreatic NET G2 (male, 40 years old). Upon the initial diagnosis in 2004, he underwent a partial pancreatectomy and splenectomy. After a long and stable treatment course with Lanreotide from 2005 to 2018, laboratory testing showed significantly increased GGT. On abdominal magnetic resonance imaging (MRI), a distinct hepatic progression was found (Fig. 1A). Another biopsy of the tumor was performed, revealing a grade progression from G2 to G3. Consequently, the therapy regimen was changed to oral chemotherapy with Capecitabine/Temozolomide (CAPTEM). Hereunder, partial remission with a strong response of the liver metastases was achieved, while GGT decreased from more than ten times the upper normal limit back to normal values (Fig. 1B). In early 2020, routine imaging showed stable disease but growth of two liver metastases. Transarterial chemoembolization was performed. At the next laboratory follow-up, GGT levels had increased noticeably. Abdominal MRI showed another hepatic progression (Fig. 1C). Third-line therapy was initiated within a clinical study. In this case report, GGT changes were documented twice before routine imaging, so serial GGT testing helped to detect progression and led to adaptation of therapy. However, analyses of a larger sample are necessary to extrapolate these individual findings to clinical practice.

Fig. 1 Exemplary course of gamma-glutamyltransferase and liver tumor burden. Notes: Images A, B and C were captured in the same layer of transverse post-contrast T1-weighted magnetic resonance imaging. GGT, gamma-glutamyltransferase; CAPTEM, Capecitabine/Temozolomide; TACE, transarterial chemoembolization.

This study primarily aimed to investigate associations between GGT and liver tumor burden in a cross-sectional sample of NET patients with liver metastases. We hypothesized that high GGT values are associated with high liver tumor burden. To evaluate the impact of GGT in predicting the clinical course under therapy, we secondarily analyzed a separate small sample of patients undergoing Streptozotocin/5-Fluorouracil (STZ/5FU) treatment for pancreatic NET. This sample was chosen since the clinical course was well documented and GGT levels were available for each cycle of therapy. In addition, this therapy holds the potential for partial remission [11], thus giving the opportunity to demonstrate GGT dynamics in case of decreasing tumor burden. We hypothesized that progression or regression of liver metastases would be associated with GGT increase or decrease, respectively.

Sample 1

A retrospective cross-sectional analysis of all patients with well-differentiated NETs and liver metastases undergoing treatment at the University Medical Center Hamburg-Eppendorf (UKE) between 12/2008 and 04/2021 (n = 268) was conducted. We included patients who underwent radiologic evaluation of the liver and testing for GGT with 3 months or less in between. Recent surgical therapy of the tumor and interventional therapies targeting the liver had to be at least 6 months prior to the date of evaluation to rule out their potential confounding influence. Due to the considerable differences in tumor biology, course of disease and therapy, neuroendocrine carcinomas were not included in the analyses. Accordingly, all tumors were classified according to the WHO 2022 classification [12]. Thus, all G3 neoplasms were well-differentiated NETs G3. Patients with neuroendocrine carcinoma (n = 41), extrahepatic cholestasis (n = 6) or missing data for either radiologic or laboratory findings (n = 117) were excluded. The final sample consisted of 104 patients.

Sample 2

To investigate longitudinal associations of GGT levels with radiographic response, a small sample of patients with well-differentiated NETs of the pancreas and liver metastases undergoing treatment with STZ/5FU in our center between 05/2005 and 12/2012 was analyzed. Radiographic response of liver metastases (regression or progression) based on RECIST 1.1 criteria and GGT testing within 3 months were available for n = 15 patients.

Measures

Radiologic studies included contrast-enhanced computed tomography (CT) (n = 38), contrast-enhanced MRI (n = 30), and positron emission tomography (PET)-CT (tracer: 68Ga-DOTA-TATE) (n = 36). Imaging analyses were conducted in consensus by two radiologists with three and thirteen years of experience in abdominal radiology, who were blinded regarding clinical data and laboratory markers. The liver was evaluated in axial, sagittal, and coronal planes in all available sequences and contrast phases. Since volumetry of individual metastases is not feasible, particularly in livers with high tumor burden, metastatic load was categorized visually as a percent estimate of total liver volume (very low, <10%; low, 10-25%; moderate, 25-50%; high, 50-80%; and very high, >80%), as recommended by the ENETS Consensus Guidelines [13]. Good inter- and intraobserver agreement has been shown for this visual semi-quantitative method [14].
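The five visual categories used for metastatic load map onto a simple binning of the percent estimate. The small Python helper below is our own illustration of that mapping; the handling of exact cut-points (e.g. exactly 25%) is our assumption, as the guideline defines only the ranges.

def tumor_burden_category(percent: float) -> str:
    """Map a visual percent estimate of liver involvement to the five
    ENETS-style categories used in this study. Cut-point handling is
    an assumption made for illustration."""
    if not 0.0 <= percent <= 100.0:
        raise ValueError("percent must lie between 0 and 100")
    if percent < 10:
        return "very low"
    if percent < 25:
        return "low"
    if percent < 50:
        return "moderate"
    if percent <= 80:
        return "high"
    return "very high"

print(tumor_burden_category(35.0))  # -> moderate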
Statistical analysis

Statistical analysis was performed using IBM SPSS Statistics for Mac (Version 25) and Excel for Mac (Version 16.65). For cross-sectional analyses, three linear regression analyses were conducted with GGT as the dependent variable and age, gender, and clinical parameters as independent variables (Model 1: age, gender, primary tumor location, liver tumor burden; Model 2: age, gender, grading, liver tumor burden; Model 3: age, gender, cholestasis, liver tumor burden). Besides, a one-way ANOVA was performed to test for differences in GGT levels between five groups separated by liver tumor burden (1: <10%, 2: 10-25%, 3: 25-50%, 4: 50-80%, 5: >80%). Post-hoc analyses were conducted with Bonferroni correction. Longitudinal analysis of our STZ/5FU cohort was conducted using Wilcoxon tests for paired data. The significance level was set at p < 0.050.

Results

Sociodemographic data, therapy-relevant variables, and radiographic and laboratory parameters of the analyzed cross-sectional sample are shown in Table 1. Multivariate linear regression analyses showed a significant association between GGT levels and liver tumor burden in the general study population (p < 0.001), controlling for age, gender, primary tumor location, and cholestasis, as shown in Table 2. The prespecified groups based on liver tumor burden differed significantly in GGT levels (p < 0.001, Fig. 2). ANOVA analyses revealed that patients with high or very high tumor burden had increased GGT levels compared to patients with very low (both p < 0.001), low (both p < 0.001), and moderate tumor burden (p = 0.002 and p < 0.001, respectively). Between patients with very low, low, and moderate tumor burden, however, no differences in GGT levels were found (each p = 1.0). For predicting a liver tumor burden of >50%, GGT showed a sensitivity of 100% and a specificity of 70.4%. Positive and negative predictive values were 61.1% and 100%, respectively. In our cohort of patients with pancreatic NET treated with STZ/5FU, regressive or progressive disease in the liver was observed in 4 and 11 cases, respectively (Table 3). In patients with regression, GGT levels decreased from a mean value of 271 U/l to 46 U/l. Due to the small sample size, the Wilcoxon test showed no statistically significant difference (p = 0.067), yet four out of four patients showed a decline in GGT levels. In patients with progressive disease, mean GGT levels increased from 122 U/l to 337 U/l. This difference was statistically significant (p = 0.004), with 10 out of 11 patients showing an increase in GGT.
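The reported sensitivity, specificity and predictive values follow from a standard 2×2 table of elevated GGT against a liver tumor burden of >50%. The Python sketch below back-calculates counts that are consistent with the published figures (n = 104, 54 patients with elevated GGT); it is our own reconstruction for illustration, not the study's actual cross-tabulation.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts back-calculated from the reported values (illustrative only).
# "Test positive" = elevated GGT; "condition" = liver tumor burden > 50%.
metrics = diagnostic_metrics(tp=33, fp=21, fn=0, tn=50)
for name, value in metrics.items():
    print(f"{name}: {100 * value:.1f}%")  # 100.0 / 70.4 / 61.1 / 100.0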
Discussion

In our study, we demonstrated for the first time an association between GGT and liver tumor burden in patients with NETs. Our data show that GGT elevation is common in these patients and is associated with a high liver tumor burden (>50%). Normal values imply low or moderate liver tumor burden, being of high negative predictive value. Longitudinal analysis of our STZ/5FU cohort showed that GGT values change in accordance with the clinical course.

Why do we find elevated GGT in patients with liver metastases?

The pathophysiology behind the rise in GGT levels in patients with liver metastasis has not been described yet. Interestingly, we did not observe a significant effect of radiologically detectable cholestasis on GGT levels. However, detectability of cholestasis on imaging depends on the imaging modality and is much lower on CT and PET-CT than on MRI. Furthermore, the rise of GGT could rather be driven by cholestasis in small bile ducts, undetectable in radiographic imaging. The tumor microenvironment of NETs has already been intensively researched. It could be shown that infiltrating immune cells mediate an immunosuppressive microenvironment [15,16]. Therefore, peritumoral inflammation seems to be a less likely explanation for GGT elevations in NET liver metastases. A connection between GGT activity in the tumor's membrane and hepatic tumor growth has been reported for melanoma cells [7]. As GGT expression in NET cells has not been investigated yet, increased cell turnover in large GGT-positive metastases would pose an explanation for the fact that GGT elevations are so common in our NET cohort. We therefore conducted a PCR analysis of GGT expression in four different NET cell lines compared to HEPG2 cells as a control sample. NET cells showed very low expression levels of GGT compared to HEPG2 cells (data not shown). Thus, NET cells do not seem to be the source of GGT elevations; it is rather the surrounding liver tissue. Hence, NET metastases to other organs should not lead to elevated GGT.

Association between GGT and liver tumor burden

Considering GGT as a biomarker for liver metastasis, no data for NETs were found. For renal cell carcinoma [9], ovarian cancer [6], and endometrial cancer [10], elevated serum GGT was shown to be of negative prognostic value in general. In patients with colorectal cancer, an initial decrease in GGT under therapy was associated with improved overall response and progression-free survival [17]. Yet, in the mentioned studies, there were no analyses on the presence or size of liver metastases. A correlation between GGT and liver tumor diameter and volume has so far only been demonstrated for hepatocellular carcinoma [8]. Liver metastases are common in a variety of solid neoplasms, particularly in GI cancer, and monitoring them is usually of great clinical importance. Our study demonstrates a significant association between GGT and liver tumor burden caused by metastases. This should also prompt further research in other oncologic entities. The clinical utility of GGT as a marker for liver tumor burden is favored by the fact that it is an established, easy-to-access and commonly performed test for various indications. It should be noted, however, that only half of our patients, all of whom had liver metastases, showed an increase of GGT at the time of evaluation. Hence, GGT is not suited for ruling out liver metastases. If liver metastases are known, though, normal GGT values might rule out a tumor burden >50%, according to our data. Greater clinical utility of GGT testing might, however, be achieved in serial testing over the course of treatment.
Laboratory biomarkers for NETs

Due to the slow growth dynamics of NETs and the consequently long treatment and follow-up periods, there is a clinical need for therapy monitoring with laboratory markers. A whole range of biomarkers is known for NETs, although some are only applicable to specific entities [18]. Measurement of 5-hydroxyindoleacetic acid in either urine or plasma may be of use as a biomarker in functionally active NETs of the small intestine, yet it has not been shown to be a reliable prognostic marker [19]. Chromogranin A (CgA) is a protein found in cells of neuroendocrine origin and has been the most widely used biomarker for neuroendocrine neoplasms in general to date [20]. It has been shown to be predictive of disease-free survival after surgery as well as therapeutic response, and is associated with a high liver tumor burden [21][22][23]. A recent study found that CgA is associated with disease progression in pancreatic NETs and predictive of negative outcome in patients with small intestine or cecum NETs; however, the results were limited to these subgroups [24]. Regarding its role as a follow-up parameter, a review and meta-analysis showed that it has sufficient accuracy, especially when baseline values are impaired [25]. Other authors conclude that the sensitivity and specificity of CgA are insufficient for its use as a clinical biomarker [26]. For example, it has been shown that CgA is also elevated in chronic liver diseases such as cirrhosis, hepatitis and hepatocellular carcinoma [27]. Furthermore, not all NETs reliably express and secrete CgA, limiting its use in routine clinical practice [28]. NETest is a novel diagnostic tool based on mRNA detection in the patient's blood [29]. Recent studies have shown it to be of high diagnostic accuracy and predictive of disease progression or stable disease, respectively [30]. In comparison to CgA, it was found to be far more accurate in predicting therapy response or progression-free survival [31]. However, NETest is still not in routine clinical use and costs are estimated to be very high ($3000-4000/year). Like GGT, alkaline phosphatase (AP) is a widely used laboratory marker for cholestasis. Studies have shown it to be a negative predictor of survival in patients treated for NETs [32,33]. Another recent study retrospectively analyzing 49 NET patients confirmed the negative prognostic value of AP, but found no correlation between AP levels and the quantity or size of metastases [34]. Since AP elevation was detected in only one in three patients and no correlation with the disease extent was found, it may be less suitable as a laboratory follow-up marker for NET patients.
To date, no study has focused on the potential role of GGT for therapy monitoring in patients with neuroendocrine liver metastases. However, since GGT testing is often performed, clinicians have tended to attribute a rise in GGT to hepatic progression. Hence, our study provides an evaluation of what has until now been common clinical practice. The results support this assumption, showing that there is indeed an association between GGT and liver tumor burden. It should be noted, though, that low or moderate liver tumor burden may not be detected by GGT testing. In patients with a low liver tumor burden, serial testing of GGT therefore seems to be helpful only insofar as an increase in GGT can indicate progression of the disease. Normal values do not exclude progression, as in our analyses GGT values only rise reliably above a tumor burden of 50%. Accordingly, serial GGT tests may be of less utility in these patients. However, from a clinical point of view, close surveillance is of greater importance in patients with high tumor burden, as these patients have a worse prognosis [35]. In case of a sudden increase in GGT, liver tissue damage due to other conditions should be considered. Change dynamics might help to distinguish between liver tissue damage and malignant progression, as GGT due to NET progression appeared to increase slowly and steadily, matching the clinical course of the patients. When directly comparing GGT and the current standard CgA as biomarkers for neuroendocrine tumor disease, there are some important points to consider. Whereas CgA is of prognostic value for intra- and extrahepatic disease, GGT has only been evaluated for liver tumor burden. Using CgA as a biomarker is only possible in tumors expressing and secreting CgA. In contrast, GGT is not dependent on tumor-specific features and might be of value in all different kinds of NETs with liver involvement. However, a larger study addressing this point is warranted. As GGT determination is part of routine laboratory diagnostics, results are often immediately available, whereas CgA determination is restricted to specialized laboratories and is often performed only once a week or even less frequently, thus causing a delay in response to changing values.
Limitations and strengths

There were several limitations to this study. This was a single-center study with relatively small sample sizes, especially in the longitudinal cohort. Due to the retrospective design, no causal conclusions can be drawn from the data. In addition, retrospective studies carry the risk of selection bias. Patients usually had multiple GGT testing with matching radiographic evaluation. The time of evaluation was therefore chosen individually for each patient, avoiding confounding factors such as liver or biliary duct interventions, operations, or other causes of GGT increase. However, we did not collect data on patients' concomitant diseases (e.g., diabetes, metabolic syndrome) or the use of medication, which might have confounded the results. In addition, as GGT is not a specific lab test, there may have been unknown confounders facilitating GGT elevations. Imaging evaluation was done on a visual scale, a method which has been established in similar studies but to which a certain degree of subjectivity is inherent. This approach was chosen because a very large number of liver metastases were present in our patient collective, so volumetry of each one would not have been feasible. However, since readers were blinded to clinical information and laboratory findings, the subjectivity of the method does not create any systematic error. Regarding the longitudinal analysis, which was conducted exploratively, no direct conclusions can be drawn for clinical practice due to the small sample size. However, to our knowledge, this study is the first to investigate GGT as a biomarker for clinical follow-up of liver metastases.

Implications

The association between GGT elevations and liver metastases demonstrated in this study should raise physicians' awareness of possible disease progression when detected in a routine examination. Further research in larger longitudinal series is required to assess the utility of GGT as a follow-up parameter. If confirmed in future studies, GGT can be implemented in clinical practice as a very cost-effective tool for therapy monitoring of liver metastases under systemic treatment for NETs to trigger radiographic evaluation, thereby allowing timely detection of disease progression and adaptation of therapy.

Table 1 Characteristics of the cross-sectional sample (n = 104). M, mean; SD, standard deviation; G, grading; GGT, gamma-glutamyltransferase.

Table 2 Association of GGT and liver tumor burden, controlling for age, gender, grading, primary tumor location, and cholestasis. Statistical analysis was performed via multivariate linear regression. Model 1: reference category of primary tumor: unknown primary; Model 2: reference category of grading: 1. Nagelkerke's R²: Model 1 = 0.109; Model 2 = 0.311; Model 3 = 0.296.
2023-09-30T06:18:03.925Z
2023-09-28T00:00:00.000
{ "year": 2023, "sha1": "4f3d4a404038e45278d265ff157ec2860baaa3ae", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12020-023-03545-x.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "372eeb1f630f4fda827ca5ff067b757bfc820e4c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
169698323
pes2o/s2orc
v3-fos-license
Research on Closed Residential Area Based on Balanced Distribution Theory

Abstract: With the promotion of the open street system, residential quarters and unit compounds are gradually being opened. In this paper, a speed-flow relationship is established for the external roads and a road resistance model for the internal roads. We propose a balanced distribution model covering two aspects, the road opening conditions and the traffic flow inside and outside the district, and quantitatively analyze the impact of opening or closing a residential area on the surrounding roads. Finally, feasible suggestions are put forward to improve the traffic situation and optimize the network structure.

Introduction

On February 21st, 2016, the State Council issued the "Opinions on Further Strengthening the Administration of City Planning and Construction": in principle, no more enclosed residential areas are to be built, and existing residential areas and unit compounds are to be opened gradually. The opinions aroused widespread concern and discussion. Although open areas may raise security issues, the focus of the discussion is whether opening can optimize the road network structure and improve road capacity and traffic conditions. One view is that the enclosed residential cell destroys the urban road network structure and easily causes traffic jams, while the open cell would increase the density of the road network.

Model establishing

This paper analyzes the influence of the open area on the surrounding road traffic. The mathematical model of vehicle traffic covers the opening conditions of the roads and the equilibrium of driving times inside and outside the residential cell.

Cost-minimized Goal Planning Model

Let the parameter k be the total number of openings in the residential cell and K the maximum acceptable number, so that the condition k ≤ K must be satisfied. Because every opening incurs a certain expenditure, the number of openings should be as small as possible. Taking the cost per road section as the same, the total length of all sections to be built should, for economy, be as short as possible. The objective is therefore min f = Σᵢ lᵢ subject to k ≤ K, where lᵢ is the length of the i-th newly built section.

Time-balanced Distribution Model of Road Network

Traffic assignment allocates the traffic demand to the traffic network according to a certain routing principle, from which the traffic flow is obtained. If the travel time outside the residential cell is less than that through it, opening the cell is of little significance. On the contrary, if most vehicles choose to pass through the cell after opening, the cell will become congested and the traffic flow will be diverted back onto the outer road. Finally, all vehicles randomly select paths and an equilibrium state is reached, at which the time spent on the road inside or outside the cell is the same. According to this principle of time balance, a traffic assignment model of the road network is established: L_out / v_out = L_in / v_in, where L_out and L_in are the route lengths outside and inside the cell, v_out represents the driving speed outside and v_in the speed inside, both of which will be obtained from the following models.

Speed-flow Relationship Model of Traffic Flow

The traffic capacity of the external road is determined by the flow of the road. According to Greenshields theory, we can establish the speed-flow relationship model, as shown in Figure 1.
Figure 1 Schematic Diagram of Greenshields Theory

The average speed of vehicles arriving in unit time follows the Greenshields-type relation U = U₀(1 − V/C), in which U₀ represents the average speed of vehicles when the traffic volume is zero and V/C is the ratio of the actual traffic volume to the capacity of the road. In addition, according to the relationship between the whole and the part, a further equation relating the flow on each section to the total flow can be obtained. However, that formula is derived under the assumption that vehicles dissipate uniformly, so two correction coefficients α and β are introduced to obtain the traffic impedance model t = t₀[1 + α(V/C)^β], where t₀ is the free-flow travel time. To obtain a more accurate prediction, U₀ is replaced by the design speed U_s, and β is treated as a nonlinear function of V/C. The resulting speed-flow relationship model then holds at any road grade and traffic load.

Road Resistance Function Model

According to traffic flow theory, when the road facilities are given, the fundamental relation Q_in = ρv holds, in which Q_in is the traffic flow inside the cell, v is the driving speed and ρ is the traffic density. Under the assumption of continuous flow in the absence of interference, the road resistance function is t = L/v = L/[v_f(1 − ρ/ρ_j)], where L, v_f and ρ_j denote the length of the road, the smooth (free-flow) speed and the blocking density respectively. In practical applications, urban road traffic is usually intermittent, so the model is corrected for four aspects of interference: intersection spacing, lane width, non-motor vehicles and pedestrians, each contributing a correction factor.

A Comparison of Roads in Two Opening Conditions

According to the classification of cells in Section 5.3.1, two types of cell closure are mainly discussed. Figure 2 shows a residential cell in a medium-sized city whose opening level is not high. Assuming the external trunk road is a two-way four-lane road, its design speed is 40 km/h, its capacity 900 pcu/h and the width of a lane 3.75 m. For the cell we design, the two-lane internal road has a speed of 40 km/h, a capacity of 900 pcu/h and a lane width of 3.75 m. Based on actual data for a residential cell in Changsha, the plot is a square of side length 500 m (area 250,000 m²) with a resident population of about 15,000. Two entrances/exits correspond to external roads carrying 2000 pcu/h, while the other two external roads carry 800 pcu/h. The flow data for the two entrances/exits are shown in Table 1.

We can see that after opening the cell, the density of the trunk road network increases and the internal roads of the cell take over 857 pcu/h from the external ones, bringing the internal and external road loads to about 1.2. In terms of speed, the open cell reduces the degree of peripheral lane congestion and improves the traffic speed. Although the speed on the internal lanes is limited by pedestrians and the width of the road, it generally reaches the requirements of the equilibrium theory; the internal and external traffic speeds keep the entire road network in relative balance, so opening is reasonable and effective under the conditions of such a cell.

Figure 3 shows a cell in a general city whose opening level is high. Assuming the external trunk road is a two-way six-lane road, its design speed is 60 km/h, its capacity 1100 pcu/h and the width of a lane 3.75 m. For the cell we design, the internal two-way four-lane road has a speed of 40 km/h, a capacity of 900 pcu/h and a lane width of 3.75 m.
Based on actual data for a cell in Changsha, the plot is a square of side length 600 m (area 360,000 m²) with a resident population of about 20,000. Three entrances/exits correspond to main external roads carrying 2500 pcu/h, while the other road carries 1200 pcu/h. The flow data for the three entrances/exits are shown in Table 2.

We can see that after opening the cell, the density of the trunk road network increases and the internal roads of the cell take over 1047 pcu/h from the external ones, making the internal and external road loads about 1.3 and 1.1 respectively. In terms of speed, the internal speed almost reaches the requirement after the cell opens. The degree of congestion in the outer lanes is alleviated and the traffic speed increases, but it remains far below the road design speed. Therefore, the external road is still congested and the road network cannot be balanced. That is to say, such an open residential cell setting cannot meet the requirements of the road network.

Comparing the two types of cell models, we can conclude that the first is a cell model with a relatively low opening degree, whose internal roads are narrow, with many pedestrians and poor traffic capacity; it is suitable when the external traffic density is not too high. The second is a cell model with a relatively high opening degree, whose internal roads are wide and whose traffic density is not too large. However, the blocking densities of the two models (0.1987 versus 0.1357) show that the second cell model relieves road traffic less effectively than the first category. Therefore, the selection of a cell model requires keeping the external traffic flow within a certain range. After analyzing the changes of different types of residential cells under different conditions, it is concluded that when the external traffic density does not exceed the limit, the effect of the open cell is positively correlated with the width of the road and the density of the road network, while it is negatively correlated with the population density and the number of non-motor vehicles. The open residential area can reduce the road load and blocking density, playing a role in sharing traffic flow, improving road capacity and relieving traffic congestion.

Conclusions

When the external traffic density does not exceed the limit, the effect of the open cell is positively correlated with the road width and the trunk road density, while it is negatively correlated with the population density and the number of non-motor vehicles. Open residential areas can reduce the road load and blocking density, share traffic flow, increase road capacity and relieve traffic congestion.
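To make the balanced-distribution calculation concrete, the Python sketch below implements the two reconstructed relations, a BPR-style impedance for the external road and a Greenshields-style resistance for the internal road, and bisects on the internally routed flow until the two travel times match. All parameter values (including the conventional α = 0.15, β = 4, which the paper does not state) and the crude density proxy are our own assumptions; this is an illustrative toy, not the authors' calibrated model.

def external_travel_time(flow, length_km=0.5, design_speed=40.0,
                         capacity=900.0, alpha=0.15, beta=4.0):
    """BPR-style impedance t = t0 * (1 + alpha * (V/C)**beta).
    alpha and beta are conventional defaults, not values from the paper."""
    t0 = length_km / design_speed                # free-flow travel time (h)
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

def internal_travel_time(flow, length_km=0.5, free_speed=40.0,
                         jam_density=0.1357, lanes=2):
    """Greenshields-style resistance t = L / (vf * (1 - rho/rho_j));
    density is approximated crudely as flow / (lanes * free speed in m/h)."""
    rho = flow / (lanes * free_speed * 1000.0)   # rough density proxy (veh/m)
    v = max(free_speed * (1.0 - rho / jam_density), 1.0)
    return length_km / v

def equilibrium_split(total_flow, iters=60):
    """Bisect on the internally routed flow until travel times equalize."""
    lo, hi = 0.0, total_flow
    for _ in range(iters):
        q_in = 0.5 * (lo + hi)
        gap = internal_travel_time(q_in) - external_travel_time(total_flow - q_in)
        if gap > 0:        # internal route slower: route less flow inside
            hi = q_in
        else:              # internal route faster: route more flow inside
            lo = q_in
    return 0.5 * (lo + hi)

q_in = equilibrium_split(2000.0)
print(f"internal ~= {q_in:.0f} pcu/h, external ~= {2000.0 - q_in:.0f} pcu/h")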
2019-05-30T23:44:42.350Z
2018-06-01T00:00:00.000
{ "year": 2018, "sha1": "a21ca3a34e95bb8935e4f540785a1429cdc14ad0", "oa_license": "CCBY", "oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2018/13/e3sconf_icemee2018_03001.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "adab63ccf23a40dadd5ff52108b279c73985d66d", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
11959933
pes2o/s2orc
v3-fos-license
Oxygen-Induced Surface Reconstruction of SrRuO3 and Its Effect on the BaTiO3 Interface

Atomically engineered oxide multilayers and superlattices display unique properties responsive to the electronic and atomic structures of the interfaces. We have followed the growth of ferroelectric BaTiO3 on a SrRuO3 electrode with in situ atomic-scale analysis of the surface structure at each stage. An oxygen-induced surface reconstruction of SrRuO3 leads to the formation of SrO rows spaced at twice the bulk periodicity. This reconstruction modifies the structure of the first BaTiO3 layers grown subsequently, including intermixing observed with cross-section spectroscopy. These observations reveal that this common oxide interface is much more interesting than previously reported, and provide a paradigm for oxygen engineering of oxide structure at an interface.

…density wave transitions. The central role of oxygen stoichiometry has been repeatedly shown in defining both the structure and the properties of oxide interfaces, [11][12][13] and it can be tuned as a means to control static and dynamic distributions of electrons and atoms for a new generation of functional materials, with applications ranging from oxide sensors and electronics to energy capture and storage. Nevertheless, few atomic-scale studies of interface structures exist for complex oxides, owing to the need for multiple tools to probe subsurface features, the need for a highly controlled environment, 14,15 and the insulating nature of many oxides.

We have studied the structural evolution of surfaces and interfaces during the layer-by-layer growth of BaTiO3 films on SrRuO3. This pair combines the classic ferroelectric, BaTiO3, and the most common conducting oxide, SrRuO3, and has been the subject of a number of investigations. 11,16 By combining in situ measurements of in-plane surface structure, ex situ cross-sectional microscopy and spectroscopy, and first-principles simulations, we observed the atomic structure of SrRuO3 surfaces and its impact on the interface and on the structure of several layers of BaTiO3. Surprisingly, the SrRuO3 surface, which has conventionally been considered flat based on the observation of well-separated, single-layer steps in ex situ atomic force microscopy (AFM), is reconstructed at atomic length scales. This reconstruction increases the oxygen concentration and leads to both intermixing and structural change in BaTiO3 at the interface. Clearly, AFM lacking atomic resolution cannot be relied upon to identify structural or stoichiometric deviations; in situ characterization of films must become the norm to identify the fundamental origins of behavior.

SrRuO3 and BaTiO3 films were grown on (001)-oriented SrTiO3 substrates using pulsed laser deposition with protocols described in the Methods section. To study the interface structure between BaTiO3 and SrRuO3 films, 1, 2, 4, and 10 unit-cell BaTiO3 films were grown on SrRuO3/SrTiO3. The detailed deposition conditions and the in situ electron diffraction experiments of BaTiO3/SrRuO3 films on SrTiO3 are reported elsewhere. 17 The growth quality and the quantity of deposited material were confirmed using Reflection High-Energy Electron Diffraction (RHEED). As shown in Figures 1A and 1B, the diffraction intensities oscillated as material was deposited, with each oscillation indicating the growth of one layer of film.
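Because each RHEED oscillation corresponds to one deposited layer, the film thickness can be estimated simply by counting intensity maxima. A minimal sketch of this counting follows; the decaying cosine signal is a synthetic stand-in rather than measured data, and the peak-finding thresholds are arbitrary choices for the example.

```python
# Illustrative sketch: counting layers from RHEED intensity oscillations.
# One oscillation corresponds to one deposited unit-cell layer, so counting
# the intensity maxima estimates the film thickness. The signal below is a
# synthetic stand-in; real data would come from the RHEED detector.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 10, 2000)  # deposition time (arbitrary units)
intensity = np.exp(-0.1 * t) * np.cos(2 * np.pi * t) \
            + 0.02 * np.random.randn(t.size)  # damped oscillations + noise

# distance/prominence filter out noise-induced wiggles between oscillations
peaks, _ = find_peaks(intensity, distance=100, prominence=0.05)
print(f"Estimated number of deposited layers: {len(peaks)}")
```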
For SrRuO3, the RHEED intensities oscillated several times but quickly reached a steady state, which has been shown to reflect a transition from layer-by-layer growth to step-flow growth with a SrO surface termination. 18 BaTiO3 growth produced extended oscillations, indicating good layer-by-layer growth and revealing the number of layers (one per oscillation) as they were grown. The flatness of these films was checked by removing several samples from vacuum for Atomic Force Microscopy (AFM). While exposure to atmosphere results in adsorption, which could affect the surface structure, 14 ambient AFM images were useful as a comparison to many previous studies. The results, shown in Figures 1C, D, and E, show that these films were smooth, with only single unit-cell steps and step densities similar to those of the TiO2-terminated SrTiO3 substrates. The expectation, then, was that the interface between SrRuO3 and BaTiO3 would be atomically flat.

Buried interfaces are notoriously difficult to characterize on an atomic scale; scanning probes and other surface methods cannot access the region of interest. Here, we examined this interface with cross-sectional Scanning Transmission Electron Microscopy (STEM). In Z-contrast STEM imaging, the intensity of an atomic column in the image is roughly proportional to the square of its atomic number, providing contrast between the two materials. The interface between SrRuO3 and BaTiO3 is clearly seen in Figure 2A, where TiO2 planes take over from the more intense RuO2 planes. Image profiles help quantify the transition. The profile in Figure 2B corresponds to the box on the image in Figure 2A, i.e., it represents an average over 6 atomic rows parallel to the interface. A general intensity decrease from left to right in the image originates from the decreasing specimen thickness. The individual column intensities follow the composition; for example, the SrO termination of the SrRuO3 is clearly observed. Interestingly, the first BaO column, marked Ba*, has a considerably reduced intensity relative to the others. This suggests a depletion of Ba or the presence of Sr in this column. If, however, we construct a profile from individual rows across the image and track the intensities of the SrO and BaO columns closest to the interface, it becomes clear that this compositional change is not uniform. In Figure 2C we plot the corresponding peak heights (obtained from Gaussian fits of the profiles) in the image in Figure 2A as a function of the vertical coordinate (along the interface). The last SrO layer and the first BaO layer have variable profiles, implying changing compositions. It is important to discern that these layers are correlated in composition, i.e., the two layers nearest the interface can be both BaO, both SrO, or both mixed to about the same degree. The composition of these layers varies on a nanometer scale, forming "domains" along the interface. The presence of mixed BaO/SrO columns does not necessarily imply mixing on an atomic scale; because the observed compositional domains could have dimensions as small as 2 nm (see Figure 2C), there could be overlap in the beam direction (normal to the image plane) resulting in apparent mixing. The cumulative BaO/SrO ratio in these rows calculated over the entire image is very close to 1:1, suggesting that stoichiometry is preserved overall. The observed contrast pattern indicates that the interface has a complex structure, possibly similar to the schematic in Figure 2D, while the spacing between rows is twice the bulk lattice constant, i.e.,
every second substrate unit cell. It is clear that these rows and holes, unobserved with ex situ AFM, will have a profound effect on the surface and interface properties and on the subsequent growth mechanisms of materials such as BaTiO3.

Low-Energy Electron Diffraction (LEED) revealed the evolution of long-range ordered structures at several stages of the film growth. As this technique involves scattering of electrons with energies typically between 50 and 200 eV, a conducting substrate is needed to avoid charging, which would mask the diffraction pattern. Patterns were obtained from SrRuO3 and from thin films of BaTiO3 on SrRuO3. As shown in Figure 5B, the diffraction pattern from SrRuO3 thin-film surfaces had not only the square pattern expected from a bulk-terminated film, but also an additional spot halfway between each pair of bulk spots. This pattern showed that the surface unit-cell periodicity was doubled in both surface crystallographic directions, a periodicity known as p(2x2). This diffraction pattern is consistent with the rows observed in STM, which were separated by twice the bulk lattice constant. However, the STM images revealed that the local periodicity is established by rows along either the (100) or the (010) direction, i.e., local domains of (2x1) and (1x2) symmetry that sum together to appear as a p(2x2) pattern. Analysis of the growth of SrRuO3 on STO has shown that the surface is terminated by a SrO layer, with RuO2 below. 18 Since Sr has a greater density of conducting electronic states than oxygen, the Sr atoms are most likely what STM images. The images therefore suggested that Sr or a Sr oxide is responsible for the rows. The holes between rows (i.e., where rows are incomplete) are similar to those seen with STM on surfaces of layered Sr-Ru oxides, including Sr2RuO4 and Sr3Ru2O7, although these materials exhibit a c(2x2) symmetry without extended rows. 19 This corrugated, imperfect surface can also help explain the unexpected reactivity of the surface when exposed to atmosphere. 14,20

To identify the rowed structure observed experimentally, we examined several structural candidates using first-principles density functional theory (DFT). We initially verified the effect of removing single SrO dimers from the SrRuO3 surface; a dimer, being charge neutral, is more likely than a single atom to produce a stable structure. The observed vertical corrugation between rows in STM was 0.1 nm, not too different from the ~0.15 nm dip corresponding to the rigid removal of a pair from bulk SrRuO3. However, the computed energy to remove a SrO dimer is prohibitively large, costing ~7 eV/pair. This large extraction energy can be understood since the process involves breaking covalent bonds, with an accompanying energy penalty that is not counterbalanced by the creation of other bonds. We next investigated the energetics related to the removal and "elevation" of a SrO dimer onto the surface (i.e., the SrO pair is promoted from the surface layer to above the surface), in such a way that part of the energy associated with the creation of the SrO vacancy is compensated by bond formation with the top atoms. This could be expected to be an unstable configuration, and indeed, during the calculation most of the initial configurations relax back into the clean, defect-free surface (Figure 4A). However, appropriate surface structures, for example the Sr and O geometry of Figure 4B, did support bonding, creating a local energy minimum, i.e., a metastable configuration.
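The energy bookkeeping behind such statements is a simple total-energy difference between slabs. A minimal sketch follows; the total energies and the SrO chemical-potential reference are placeholders chosen only so the example reproduces the ~7 eV/pair extraction cost quoted above, and are not the actual VASP outputs of this work.

```python
# Minimal sketch of the defect-formation-energy bookkeeping used in slab DFT
# studies like the one above. The numbers are placeholders chosen only so
# that the example reproduces the ~7 eV/pair extraction cost quoted in the
# text; they are NOT the paper's actual VASP total energies.

def formation_energy(e_defect, e_clean, n_removed, mu_sro):
    """E_f = E(defect slab) + n * mu(SrO) - E(clean slab), in eV."""
    return e_defect + n_removed * mu_sro - e_clean

E_CLEAN = -700.0    # eV, clean SrRuO3/SrTiO3 slab (placeholder)
MU_SRO = -12.0      # eV, SrO chemical-potential reference (placeholder)
E_VACANCY = -681.0  # eV, slab with one SrO pair removed (placeholder)

e_f = formation_energy(E_VACANCY, E_CLEAN, n_removed=1, mu_sro=MU_SRO)
print(f"SrO pair extraction cost: {e_f:.1f} eV/pair")
# -> 7.0 eV/pair, the order of magnitude reported for rigid SrO removal
```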
In the observed (2x1)+(1x2) rows, isolated defects appear less stable than a row of defects. Computationally, we found that a row of SrO is more stable, by 0.32 eV/pair, and that the preferential ordering was along the (110), or equivalently the (1-10), direction. The structure corresponding to complete rows of defects is shown in Figure 4B and corresponds to a formation energy of +5.29 eV/pair when compared to a defect-free surface. As expected, the defect formation energy was significantly lowered when the SrO remains bound to the surface rather than simply being ejected from it. Nevertheless, the reduced number of Ru-O and Sr-O bonds still leaves the energy cost too high to explain the observations.

In the oxygen-rich atmosphere required to approach stoichiometric growth of oxides such as SrRuO3, molecular oxygen should be very reactive with displaced Sr as described above. We next focused our attention on O interactions with the rowed SrO structure, where SrO is promoted onto the surface. In Figure 4C, we present the minimum-energy structures resulting from the interaction of O2 with the system of Figure 4B. When a single O is added per displaced SrO, its stable position is directly above the underbonded Ru (small white sphere), close to the position of the displaced O in the pristine surface. More importantly, the system energy is considerably reduced, by 4.76 eV/O, which is much larger than the corresponding 1.2 eV/O adsorption energy on a defect-free surface. In other words, the defect energy was reduced to 1.78 eV/SrO. This formation energy, calculated at 0 K, is sufficiently low to suggest that SrO rows, displaced from the surface layer, together with excess oxygen, could produce the observed structure. The calculations continue to indicate that this structure should be metastable, which is consistent with STM measurements that will be discussed elsewhere.

As 1-2 layers of BaTiO3 were grown on SrRuO3, the LEED pattern remained p(2x2) (Figure 5B); however, the relative intensities of the diffraction spots were altered from those observed from SrRuO3 alone. This change in relative intensities indicates a change of structure, with two important implications. First, this pattern must represent the order of the BaTiO3 film and cannot arise solely from exposed remnants of SrRuO3. Second, it shows that the SrRuO3 reconstruction influences the structure of the BaTiO3 at the interface, which does not share the symmetry of bulk BaTiO3 but instead has a periodicity two times larger in the plane of the interface. Growth of thicker BaTiO3 reverts the pattern observed in LEED to the (1x1) symmetry of the bulk. As shown in Figure 5B, as few as 4 layers of BaTiO3 produce a (1x1) periodicity; the additional diffraction spots indicating a doubled unit cell are gone. The STM images of Figures 5C and 5D show the local periodic atomic rows along either the (100) or the (010) direction, whereas Figure 5E shows the thicker film, for which the doubling is absent.

Calculations for the structural investigation were performed within DFT, using the Vienna ab initio simulation package (VASP). 27,28 The Kohn-Sham equations were solved using the projector augmented wave (PAW) approach 29,30 and a plane-wave basis with a 400 eV energy cutoff, and the exchange-correlation functional was represented by the Local Density Approximation (LDA). 31 Spin-polarized calculations were used throughout.
The system was set up as follows: first we relaxed a SrTiO3 (STO) unit cell in bulk using a 12x12x12 Monkhorst-Pack Brillouin-zone sampling, resulting in a crystal structure with a = b = 0.546 nm and c = 0.3863 nm. We used a single 2x2 slab of STO as support. The atoms in this slab were kept fixed during the course of all the simulations. Imposing the bulk lattice constants of STO, this results in a unit cell of 1.092 x 1.092 nm² in the planar dimensions. The system, called the clean or defect-free surface hereafter, was obtained by adding and relaxing a 2x2 (001) slab of SrRuO3 on top of the previously positioned STO, resulting in a 96-atom unit cell. We chose the c-axis of the working cell so as to ensure a minimum of 0.7 nm of vacuum between periodic images. That unit cell was used as a starting point for all the calculations shown here. Note that we used a 4x4x1 k-point grid for the slab calculations.
2018-04-03T00:26:33.784Z
2010-06-24T00:00:00.000
{ "year": 2010, "sha1": "b7f7db09efde196ec0d5619ee115db34d98385fc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1007.4006", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b7f7db09efde196ec0d5619ee115db34d98385fc", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics", "Medicine" ] }
45786606
pes2o/s2orc
v3-fos-license
Dynamic Hip Screw Compared to Condylar Blade Plate in the Treatment of Unstable Fragility Intertrochanteric Fractures

Dynamic hip screw (DHS) fixation is considered the standard treatment for most intertrochanteric fractures. However, excessive sliding at the fracture site and medialisation of the femoral shaft may lead to fixation failure. In contrast, the fixed-angled 95° condylar blade plate (CBP) has no effective dynamic capacity and causes little bone loss compared to the DHS. We compared the outcomes of 57 consecutive unstable intertrochanteric fragility fractures treated with these two fixation methods. CBP instrumentation is more difficult, requiring a longer incision, a longer operating time, and a higher surgeon-reported operative difficulty. The six-month post-operative mortality rate was 16%. Post-operative Harris Hip Scores were comparable between the two methods. Limb-length shortening of more than 20 mm was 6-fold more common with the DHS. In elderly patients with unstable intertrochanteric fragility fractures, the fixed-angled condylar blade plate appears to be a better choice than the dynamic hip screw for preventing fixation failures.

INTRODUCTION

Fragility fractures can be caused by a fall from a standing height or less. Osteoporosis, which leads to bone fragility, is considered the major contributing factor to fragility fractures and is directly linked to the risk of hip fracture. Every year there are 250,000 hip fractures in the United States, and it is predicted that this number will double in the next 40 years 1. A similar increase is expected in Asia 2. In general, as the population ages, the incidence of and cost for treating these injuries are going to increase dramatically 3,4. Hip fractures are associated with high morbidity and mortality; a rate of up to 20% can be expected in the year following the injury 5. The majority of those who survive are disabled, with only 25% able to resume normal activities 6. About half of hip fragility fractures are intertrochanteric fractures. Dynamic hip screw (DHS) fixation has been considered the gold standard for the treatment of stable intertrochanteric fractures 7,8. The DHS allows controlled collapse of the fracture followed by progressive stabilization. However, there are divergent opinions about the fixation of unstable intertrochanteric fractures in the elderly. Authors have reported high failure rates (range: 3% to 26%) for DHS fixation in unstable intertrochanteric fractures. Failure usually occurs due to loss of fixation of the lag screw with resultant varus angulation and medial collapse at the fracture site; plate pull-off from the shaft; implant disassembly; or fatigue failure in cases of delayed union 9,10,11,12. Orthopaedists have questioned whether the fixation should be dynamic by default, allowing fracture compression and union at the cost of reduced femoral neck length and medialisation of the femoral shaft, or whether rigid fixation should be applied to restore the pre-fracture anatomy and achieve immediate stability for early mobilization, in a race towards fracture union before implant fatigue. There are limitations as to how much dynamic fracture compression is desirable. DHS fixation permits fracture compression along the femoral neck, which leads to femoral neck shortening; there is also an inherent tendency towards medialisation of the femoral shaft 13. Jacobs et al found that the average length of lag-screw sliding was 5.3 mm in stable fractures and 15.7 mm in unstable fractures. Excessive sliding (more than 15 mm) correlates with a higher prevalence of fixation failure 4, and is also
associated with increased post-operative pain and decreased post-operative mobility 14. Parker found that if medialisation of more than one-third of the femoral shaft diameter occurs at the fracture site, there is a seven-fold increased risk of fixation failure 15.

A fixed-angled 95° condylar blade plate (CBP) can be used in these difficult unstable fractures as well as in revision fixation for failed intertrochanteric fractures 16. This technique results in improved resistance to rotation of the proximal fragment, as it has no effective dynamic capacity. This is due to the fact that, on loading, the hip joint reaction force is at 159° to the vertical plane. The CBP does not allow the proximal fragment to slide laterally, thus avoiding the undesirable medialisation of the shaft. We compared these two extramedullary fixation devices in the treatment of unstable intertrochanteric fractures.

MATERIALS AND METHODS

Consecutive patients admitted in 2004 with unstable intertrochanteric fractures were prospectively randomized (by drawing lots from sealed envelopes) into two study groups. The study was approved by the hospital ethics committee. Consent was also obtained from the patients or, in cases where they were confused, from their caretakers. The inclusion criteria were: 55 years of age or older with a low-trauma intertrochanteric fracture classified as AO/OTA Type 31 A2.2 or A2.3 and Kyle Type III or Type IV. These were fractures with comminution, loss of the postero-medial calcar, subtrochanteric extension, and a reverse oblique fracture pattern 17. Two-part fractures were excluded since they are considered stable fractures. Patients with fractures associated with polytrauma, pathological fractures, and previous surgery on the ipsilateral hip or femur were excluded.

There were 61 patients with 63 hip fractures treated during the study period. 57 hips are included in the results: 31 treated with the DHS (Dynamic Hip Screw, Synthes-Stratec, Oberdorf, Switzerland) and 26 treated with the CBP (95° condylar blade plate; Synthes-Stratec, Oberdorf, Switzerland). Six patients were excluded: five were unfit for surgery and therefore treated conservatively, while one was transferred to another centre. Surgery was performed within four days in all patients. The best possible alignment was achieved either by closed manipulation under fluoroscopy control or by open reduction. Standard methods of instrumentation and fixation were performed under fluoroscopy control. The modified Harris Hip Score, without assessment of hip motion, was used to determine pre-fracture status, and the Harris Hip Score (HHS) was used for post-operative functional assessment at the 3rd and 6th months. Plain radiographs were taken before and one day after surgery, followed by repeated radiographs at the 1st, 3rd, and 6th months post-operatively. Post-operative rehabilitation was standardized in both groups, with emphasis on early protected partial weight-bearing using a walking frame before discharge. Categorical variables were analyzed using chi-square tests of association (ambulatory status). Continuous variables were assessed by one-way ANOVA or Student's t-test (Harris Hip Score, operating time, blood loss, and the Visual Analogue Score of operative difficulty). The level of significance was set at p < 0.05.
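The analysis plan above maps directly onto standard SciPy routines. The following sketch is only an illustration of those comparisons; the arrays are synthetic placeholders, not the study's raw data.

```python
# Illustrative sketch of the statistical comparisons described above.
# The arrays and counts are synthetic placeholders, NOT the study's data.
import numpy as np
from scipy import stats

# Student's t-test for a continuous outcome (e.g., 6-month Harris Hip Score)
hhs_dhs = np.array([78, 82, 75, 90, 68, 81, 77, 85])  # placeholder scores
hhs_cbp = np.array([80, 79, 84, 88, 72, 83, 76, 86])
t_stat, p_val = stats.ttest_ind(hhs_dhs, hhs_cbp)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}  (significant if p < 0.05)")

# Chi-square test of association for a categorical outcome (ambulatory status)
#                 ambulatory  non-ambulatory
table = np.array([[20, 5],    # DHS group (placeholder counts)
                  [18, 4]])   # CBP group
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```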
RESULTS

The subjects in both groups were similar in terms of demographic and premorbid functional status (Table I). With the CBP the incision was typically longer and the blood loss greater, but these differences were not statistically significant. Skin-to-skin operative time was significantly longer using the CBP, though the clinical relevance of a mean of an extra 20 minutes is unclear. Coupled with a significantly higher surgeon-reported Visual Analogue Score (VAS) of operative difficulty, the longer operative time rightly reflects the more difficult CBP instrumentation (Table II).

Within 3 months following surgery, 10% of the patients passed away from medical complications unrelated to their surgery, with another 5% in the following 3 months, contributing to a 16% mortality 6 months after surgery (Table III). Harris Hip Scores 3 months and 6 months after surgery were comparable between the two groups. Twenty-one per cent of surviving patients did not regain the ability to walk when assessed 6 months after surgery. There was no correlation between the ability to walk and limb-length shortening or choice of implant. Despite early surgical intervention, the 16% six-month mortality rate and the high rate of failure to return to ambulatory status were attributed to the systemic nature of osteoporosis in the elderly.

Complications

There was no recognised deep vein thrombosis, pulmonary embolism, or deep infection. There were 3 fixation failures in each group due to suboptimal surgical fixation (Table IV). In the DHS group, poor reduction with medialisation of the femoral shaft by 25%, a short lag screw, and/or maximal initial sliding leaving no room for further fracture impaction eventually led to superior cut-out in 3 patients (Figure 1). This resulted in two patients remaining bedridden; in the third patient, who had no acetabular involvement, revision fixation was performed 6 months later. Shortening (a leg-length difference of more than 20 mm, clinically reported and radiologically measured) was significantly more common in the DHS group.

In the CBP group, the failure to primarily restore the neck-shaft angle, as evidenced by the post-operative anteroposterior pelvic film, led to one case of excessive varus, nonunion, and subsequent implant fatigue in the 4th post-operative month. There was one nonunion leading to superior cut-out in the CBP group (Figure 2). The overall failure rate, when limb-length shortening and fixation failures were considered, was lower in the CBP group, approaching statistical significance with a p value of 0.051. Medialisation of the femoral shaft was not compared, as it is not possible with the fixed-angle CBP.
DISCUSSION

For the fixation of stable intertrochanteric fractures there exists a plethora of implant choices, whereas the ideal implant remains elusive for unstable intertrochanteric fractures. Unstable intertrochanteric fractures, especially those with a posterior-medial defect, are more prone to complications. Loss of posterior-medial support leads to increased hip-screw telescoping and limb shortening. Furthermore, these defects lead to increased load transfer to the tip of the dynamic hip screw, thereby increasing superior cut-out. Loss of fixation of the lag screw in an osteoporotic head, with resultant varus angulation and medial collapse at the fracture site, can also occur 18. Independent of the type of implant used, patients with unstable trochanteric hip fractures and osteoporotic bone are at the highest risk of implant failure 19.

Kaufer determined that the strength of the implant-fracture composite is based on 5 variables: bone quality, fracture geometry, fracture reduction, placement of the implant, and the implant used 20. A CT scan of an intertrochanteric fracture may reveal poor density of trabeculation and even emptiness within the head. Maximum bone density is likely found at the site where the compressive and tensile trabeculae coalesce, in the centre-centre region of the femoral head 21. An ideal implant should engage this region while taking as little of the remaining strong bone as feasible. The CBP, at 95° to the femoral shaft, is wedged below the densest trabeculae in the centre-centre region of the femoral head. There is little bone loss in preparing the seating bed for the CBP, especially compared to the DHS, which requires bone coring.

Of concern is unstable fracture geometry, which includes fractures with increased comminution, loss of the postero-medial calcar, subtrochanteric extension, and reverse oblique fractures. Anatomic reduction of femoral neck length, axial length, and neck-shaft angle must be achieved before implantation. In addition to the degree of osteoporosis, the rate of superior cut-out is strongly correlated with surgical technique. Baumgaertner et al demonstrated that enhanced surgeon awareness of a short tip-apex distance (TAD) decreases the risk of fixation failure 22.

The choice of implant should not be routine but should be customised according to the fracture characteristics. Unstable fractures with osteoporosis are reported to have failure rates of more than 50% 23. In such cases the DHS may still yield a good outcome (Figure 3), but the DHS should not automatically be the first choice of treatment. Our results indicate that unstable intertrochanteric fractures treated with the fixed-angled 95° CBP have similar outcomes and a near statistically significant lower failure rate compared to the DHS. This fixed-angle device acts as a bridge plate across the fracture site and therefore may be of value in severely osteoporotic bone as well as in comminution, where dynamic fixation may lead to excessive interfragmentary compression. Conversely, the two failure patterns of the CBP observed in the present study, namely implant breakage and nonunion, may be due to a lack of interfragmentary compression. For optimal application of the bridging-plate principle across fracture comminution, the authors recommend the use of a longer 7- or 9-hole side-plate CBP with 4 distal screws and 1 proximal lag screw for the femoral head. Subsequent cases conducted using this technique consistently led to fracture union (Figure 4).

The present study has several limitations. Though both devices analyzed in this study are extramedullary and use a similar surgical exposure, the more difficult CBP instrumentation may be subject to performance bias; hence a larger study population is warranted. Secondly, only some of the patients underwent a Quantitative Computed Tomography (QCT) scan. Among the patients who were scanned, there was no difference in bone mineral density between the two groups, but inclusion of all patients might have revealed subtle differences (Table V). Thirdly, although body mass index was not studied, it may be an important factor in the early stability of the fracture-implant construct and the subsequent union rate. Of course, follow-up longer than 6 months would have been optimal. Lastly, as the focus was on extramedullary fixation, the present study did not include any intramedullary device options for these fractures.
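For readers unfamiliar with the tip-apex distance cited above, the sketch below shows how it is computed. The formula is the standard Baumgaertner definition (sum of the tip-to-apex distances on the AP and lateral radiographs, each corrected for magnification using the known lag-screw diameter); the measurement values in the example are made up, and the ~25 mm cut-off is the commonly quoted threshold from that literature, not a result of this study.

```python
# Sketch of the tip-apex distance (TAD) calculation referenced above
# (Baumgaertner et al.). Each radiographic measurement is rescaled by the
# ratio of the true lag-screw diameter to its imaged diameter to correct
# for magnification. All numeric values below are made-up examples.

def tip_apex_distance(x_ap_mm, x_lat_mm, d_ap_mm, d_lat_mm, d_true_mm=8.0):
    """TAD = Xap*(Dtrue/Dap) + Xlat*(Dtrue/Dlat), in millimetres."""
    return x_ap_mm * (d_true_mm / d_ap_mm) + x_lat_mm * (d_true_mm / d_lat_mm)

# Example: 14 mm (AP) and 12 mm (lateral) tip-to-apex measurements; the
# screw images at 9.5 mm on both views; true screw diameter 8.0 mm.
tad = tip_apex_distance(14.0, 12.0, 9.5, 9.5)
print(f"TAD = {tad:.1f} mm (values under ~25 mm are associated with lower cut-out risk)")
```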
CONCLUSION

In the treatment of fragility fractures, it is important to recognise unstable fracture geometry and to customise the choice of implant according to the fracture characteristics. In elderly patients with unstable fragility intertrochanteric fractures, the fixed-angled condylar blade plate appears to be a better choice than the dynamic hip screw, with a lower rate of fixation failure.
2017-11-03T11:35:18.104Z
2009-05-01T00:00:00.000
{ "year": 2009, "sha1": "e86e861bb5ed30700a02b3268b74cf8a79a01271", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5704/moj.0905.001", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e86e861bb5ed30700a02b3268b74cf8a79a01271", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
13885943
pes2o/s2orc
v3-fos-license
A patient with pseudohypoaldosteronism type II complicated by congenital hypopituitarism carrying a KLHL3 mutation

Abstract. Pseudohypoaldosteronism type II (PHA II) is a renal tubular disease that causes hyperkalemia, hypertension, and metabolic acidosis. Mutations in four genes (WNK4, WNK1, KLHL3, and CUL3) are known to cause PHA II. We report a patient with PHA II carrying a KLHL3 mutation, who also had congenital hypopituitarism. The patient, a 3-yr-old boy, experienced loss of consciousness at age 10 mo. He exhibited growth failure, hypertension, hyperkalemia, and metabolic acidosis. We diagnosed him as having PHA II because he had low plasma renin activity with a normal plasma aldosterone level and a low transtubular potassium gradient. Further investigations revealed defective secretion of GH and gonadotropins and anterior pituitary gland hypoplasia. Genetic analyses revealed a previously known heterozygous KLHL3 mutation (p.Leu387Pro), but no mutation was detected in 27 genes associated with congenital hypopituitarism. He was treated with sodium restriction and recombinant human GH, which normalized his growth velocity. This is the first report of a molecularly confirmed patient with PHA II complicated by congenital hypopituitarism. We speculate that both GH deficiency and metabolic acidosis contributed to the growth failure. Endocrinological investigations will help to individualize the treatment of patients with PHA II presenting with growth failure.

Introduction

Pseudohypoaldosteronism (PHA) is a rare renal tubular disease that can be divided into two types based on pathogenesis. PHA type I (PHA I) is caused by abnormalities in the aldosterone receptor or the epithelial sodium channel and is inherited in an autosomal dominant or recessive manner, respectively. PHA I is clinically characterized by renal salt wasting and a decreased response of the aldosterone receptor or epithelial sodium channel to aldosterone. PHA type II (PHA II, also referred to as Gordon syndrome (1)) is a heterogeneous syndrome inherited in an autosomal dominant or recessive manner. The best-characterized pathogenesis of PHA II is associated with increased membrane expression of the thiazide-sensitive NaCl cotransporter (NCC) in the distal convoluted tubules, resulting in excessive sodium resorption. Excretion of potassium and hydrogen is decreased because it is coupled with sodium resorption. As a result, patients with PHA II show early-onset hypertension, hyperkalemia, and metabolic acidosis. Plasma renin levels are suppressed, and aldosterone levels are variable but relatively low given the degree of hyperkalemia. Other clinical manifestations of PHA II include growth failure, muscle weakness, periodic paralysis, skeletal abnormalities, urinary calculi, and psychomotor retardation. The electrolyte abnormalities and hypertension can be treated with thiazide. Recently, the molecular pathogenesis of PHA II has been partly elucidated. Here, we report a patient with PHA II complicated by congenital hypopituitarism carrying a KLHL3 mutation. The patient exhibited growth failure, and two possible factors, PHA II and congenital hypopituitarism, were considered as the cause(s) of the growth failure.

Case Report

The proband, a 3-yr-old boy, was the second child of healthy nonconsanguineous Japanese parents. The course of the pregnancy and delivery was uneventful. He was born at a gestational age of 39 wk with a birth length of 51 cm (+1.0 SD) and a weight of 2918 g (-0.2 SD).
At age 10 mo, he experienced loss of consciousness after a 15-h fasting period and was brought to our hospital. On physical examination, short stature (68.0 cm; -2.0 SD) and normal weight (8.6 kg; -0.6 SD) were noted. He was drowsy but moved his extremities in response to verbal or physical stimulation. He had elevated blood pressure (110/55 mmHg; age-matched reference range 80-105/34-61) with a normal heart rate. His testes were in the scrotum and his testicular volume was 2 cm³ per testis. His stretched penile length was 3.0 cm (-0.8 SD). He had no abnormalities of the deciduous teeth. Routine blood examination revealed a high serum potassium level (6.6 mM), a high serum chloride level (107 mM), a normal plasma glucose level (77 mg/dL), and an elevated beta-hydroxybutyrate level (1.6 mM; reference range 0-0.074). Mild metabolic acidosis was observed by venous blood gas analysis (pH 7.34, estimated bicarbonate level 14.8 mM). Bone age was not evaluated. He was treated with intravenous half saline including glucose, and his consciousness was normal 3 h after fluid therapy. Although the hyperkalemia and hyperchloremia gradually improved, the hypertension and metabolic acidosis (estimated bicarbonate levels 14.6-16.0 mM) persisted through eight days after admission. Hyperkalemia recurred at home after discharge and persisted until the introduction of sodium-intake restriction.

To clarify the etiology of the metabolic acidosis, transient hyperkalemia, and hypertension, we conducted a series of renal function tests. Ultrasonography of the kidneys showed no anatomical lesions or calcification. Plasma renin activity was low (0.6 ng/mL/h; reference 1.0-3.2) with a normal plasma aldosterone level (66 pg/mL; reference 66-218). The transtubular potassium gradient was low (4.5; reference > 5.0), indicating that renal potassium excretion was decreased. Based on these clinical data, we diagnosed him with PHA II.

When we reviewed the growth record of the patient, we noted slow decreases in length/height and weight from age 8 mo. Serum IGF1 levels were very low (< 4 ng/mL; age-matched reference 11-149 (8)). Provocation tests for GH release revealed low GH responses to insulin, arginine, and glucagon (peak GH 0.9, 1.7, and 2.1 ng/mL, respectively). He also had a poor gonadotropin response to GnRH (peak LH 0.39 mIU/L, peak FSH 0.98 mIU/L). The responses of TSH, ACTH, and prolactin were not defective. Cranial magnetic resonance imaging revealed hypoplasia of the anterior pituitary gland. We diagnosed him with congenital hypopituitarism. He was treated with sodium restriction (sodium intake 600 mg/day) and recombinant human GH replacement from age 14 mo. His height SD score increased gradually (93.9 cm; -0.5 SD at age 40 mo) (Fig. 1), which coincided with the elevation in serum IGF1 levels. Maximum growth velocity was achieved after age 20 mo, when the metabolic acidosis was alleviated (Fig. 2). The hypertension had improved by age 21 mo. The metabolic acidosis had improved by age 30 mo (Fig. 2), although mild hyperkalemia (5.1-6.0 mM) persisted. We did not use thiazide during the course because the metabolic acidosis and hyperkalemia were mild. Loss of consciousness did not recur after the first admission. Psychomotor development has remained intact.

RNA expression analysis of KLHL3

Human pituitary gland cDNA (Human Pituitary Gland QUICK-Clone cDNA) was purchased from TaKaRa Biotechnology (Shiga, Japan).
Human thymus RNA was purchased from Agilent Technologies (Santa Clara, CA, USA) and was reverse-transcribed into cDNA using the SuperScript III First-Strand Synthesis System (Life Technologies, Carlsbad, CA, USA) and an oligo-dT primer. Primer pairs specific for KLHL3 cDNA (forward 5′-AGATGTACACACCTGCACTGACCTT-3′, reverse 5′-GACATGTTCCATCAGCTTTGC-CATG-3′) and GAPDH cDNA (TaKaRa, HA067812) were used in PCR to estimate the RNA expression levels of the two genes in the pituitary gland and thymus. Thirty-five cycles of PCR amplification were performed using ExTaq HS (TaKaRa) according to the manufacturer's instructions. The PCR products were electrophoresed in a 1.2% agarose gel containing ethidium bromide. We detected the bands using a UV transilluminator.

Genetic analyses

The clinical diagnoses of PHA II and congenital hypopituitarism prompted us to perform genetic analyses to clarify the genetic bases of the two diseases. PCR-based direct sequencing of PHA II-associated genes revealed a heterozygous KLHL3 mutation (p.Leu387Pro), which has previously been reported in patients with PHA II (3). Family analyses unexpectedly demonstrated that the mother and elder brother carried the identical mutation. Blood examination of the two family members showed that both had hyperkalemia (5.9 mM in both). Metabolic acidosis was observed only in the brother (pH 7.31, estimated bicarbonate level 21.2 mM). The mother had a leg-length discrepancy and persistence of deciduous teeth. The mother and brother had neither hypertension nor short stature (mother: blood pressure 110/54 mmHg and height 163 cm at age 34 yr; brother: blood pressure 79/27 mmHg and height 97.8 cm, +0.9 SD, at age 3 yr). We further analyzed 27 genes associated with hypopituitarism in the proband, but no disease-causing variation was found.

RNA expression analysis of KLHL3

RT-PCR analysis showed that KLHL3 mRNA was expressed in the pituitary gland, although the expression level was low (Fig. 3).

Discussion

We described a patient with PHA II carrying a KLHL3 mutation, who showed growth failure at first presentation. Endocrinological investigations revealed congenital hypopituitarism affecting the secretion of GH and gonadotropins. Both GH deficiency and metabolic acidosis were suspected to be responsible for the growth failure. However, the recovery of growth velocity coincided with the increase in serum IGF1 levels, indicating that his growth was at least in part affected by GH deficiency.

Growth failure is a PHA II-associated symptom, but its frequency has not been investigated in a structured manner. We reviewed 42 patients with PHA II whose growth records were described in the literature and found that 16 (38%) had short stature (Table 1). Six of the 16 patients experienced catch-up growth after treatment with thiazide. The cause(s) of growth failure in PHA II are not fully understood, although metabolic acidosis is the most plausible explanation (10). Boyden et al. reported that patients with PHA II with CUL3 mutations generally showed more severe acidosis and hyperkalemia and a greater likelihood of growth failure than patients with mutations in WNK1, WNK4, and KLHL3 (3). McSherry et al. reported that growth failure associated with renal tubular acidosis in children was reversible upon alkali treatment (11). The mechanisms by which acidosis causes growth failure have been elucidated in some reports. Brungger et al.
reported that the IGF1 response to GH administration was significantly blunted during acidosis, while the GH response to GH-releasing factor administration was significantly enhanced (13).

(Fig. 3: RT-PCR analysis of KLHL3. KLHL3 mRNA is expressed in the human pituitary gland, although the level of expression was low; a thymus-derived sample was used as a control.)

A similar mechanism may explain the growth failure observed in patients with PHA II. In the present case, the recovery of growth velocity coincided with the increase in serum IGF1 levels, which began immediately after the start of GH replacement, suggesting that his growth was affected by GH deficiency. However, maximum growth velocity was achieved after age 20 months, when the metabolic acidosis was alleviated. This indicates that correcting the acidosis improved the actions of anabolic hormones and further increased his growth velocity.

The mother and brother of the proband did not have short stature, although they carried the identical KLHL3 mutation (p.Leu387Pro). It is clear that the proband's additional congenital hypopituitarism caused the difference in height among the family members. Regarding the PHA II phenotypes (e.g., hypertension, metabolic acidosis, and hyperkalemia), it is unclear which factor(s) were responsible for the differences. Farfel et al. observed no significant difference in the severity of low IGF1, hyperkalemia, metabolic acidosis, and hypercalciuria between affected family members with short stature and those with normal stature (14). Gordon et al. found that differences in sodium intake appeared to cause the phenotypic differences, particularly hypertension (15).

In the present case, the patient lost consciousness on first admission. This may have been because of hypoglycemia following the 15-h fasting period, although the plasma glucose level was normal upon admission. Considering that severe hypoglycemia stimulates the secretion of counterregulatory hormones, a normal plasma glucose level at admission does not exclude the possibility of hypoglycemia as the cause of the loss of consciousness.

This is the first case report of the coexistence of PHA II and congenital hypopituitarism. In the original report of the identification of the human KLHL gene, the highest KLHL3 mRNA expression was observed in the pituitary gland and cerebellum (16). Using RT-PCR, we confirmed the expression of KLHL3 mRNA in the pituitary gland. The function of KLHL3 in the pituitary gland is unknown. Considering that KLHL2 is an actin-binding protein affecting cytoskeletal dynamics in neurons, it is possible that KLHL3 also affects cell motility where it is expressed (16). However, KLHL3 mRNA expression was very low in the pituitary gland, indicating that the KLHL3 mutation did not cause the hypopituitarism. Additionally, no other KLHL3 mutation carriers reported so far (including the mother and brother of the proband in this report) have had hypopituitarism. Thus, we speculate that the KLHL3 mutation and the congenital hypopituitarism were independent occurrences. The presence of additional factor(s), such as genetic, epigenetic, and environmental factors, may be required to cause aberrant development of the anterior pituitary lobe.

In conclusion, we described a patient with PHA II carrying a KLHL3 mutation, who showed growth failure and congenital hypopituitarism. Our case exemplifies the importance of endocrinological investigations in patients with PHA II with growth failure.
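The transtubular potassium gradient that anchored the diagnosis in this case is a simple ratio calculation. The sketch below uses the standard TTKG formula; the urine and plasma values are illustrative placeholders chosen only to reproduce the reported value of 4.5, not the patient's actual laboratory data.

```python
# Sketch of the transtubular potassium gradient (TTKG) used above to show
# impaired renal potassium excretion. The formula is the standard one; the
# urine/plasma values are placeholders chosen to reproduce the reported
# TTKG of 4.5, NOT the patient's raw laboratory data.

def ttkg(urine_k, plasma_k, urine_osm, plasma_osm):
    """TTKG = (U_K / P_K) / (U_osm / P_osm); > 5 expected in hyperkalemia."""
    return (urine_k / plasma_k) / (urine_osm / plasma_osm)

value = ttkg(urine_k=61.4, plasma_k=6.6, urine_osm=600, plasma_osm=290)
print(f"TTKG = {value:.1f}  (inappropriately low for a serum K of 6.6 mM; reference > 5.0)")
```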
2018-04-03T04:10:29.301Z
2016-10-01T00:00:00.000
{ "year": 2016, "sha1": "aba977bb9df2f36fe6937dda01477932a7af251d", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/cpe/25/4/25_2016-0006/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aba977bb9df2f36fe6937dda01477932a7af251d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
17222947
pes2o/s2orc
v3-fos-license
The Influence of Non-Uniform High Heat Flux on Thermal Stress of Thermoelectric Power Generator

A thermoelectric generator (TEG) device that uses solar energy as its heat source achieves higher efficiency when there is a higher temperature difference between the hot and cold ends. However, a higher temperature or higher heat flux imposed upon the hot end causes strong thermal stress, which has a negative influence on the life cycle of the thermoelectric module. Meanwhile, to obtain high heat flux, a Fresnel lens is required to concentrate the solar energy, which causes non-uniformity of the heat flux on the hot end of the TEG and further influences the thermal stress of the device. This phenomenon is very common in solar TEG devices, but little research work has been reported on it. In this paper, a numerical analysis of the heat transfer and thermal stress performance of a TEG module has been performed considering variation in the power of the heat flux imposed upon the hot end; the influence of non-uniform high heat flux on thermal stress has also been analyzed. It is found that the non-uniformity of the high heat flux imposed upon the hot end has a significant effect on the thermal stress of the TEG and the life expectancy of the device. Taking a uniformity of 100% as the standard, when the heating uniformity is 70%, 50%, 30%, and 10%, the maximum thermal stress of the TEG module increases by 3%, 6%, 12%, and 22%, respectively. If the heat flux on the hot end is increased, the influence of non-uniformity on the thermal stress becomes more remarkable.

Introduction

The need for renewable and environmentally friendly green energies to substitute for fossil fuels has gained a lot of attention all over the world, especially in China. The rapid development of the economy has increased people's income significantly, but has been accompanied by serious pollution problems. Thermoelectric power generation is an attractive technology for converting various low-quality heat energies into electricity to meet people's increasing demand for clean energy. In thermoelectric materials, the diffusion of electrons or holes is driven by the temperature drop between the hot and cold ends, which induces an electrical potential between them. Thermoelectric generators (TEGs) are compact and highly reliable, have no moving parts and an endless shelf life, operate silently, and produce no pollution; as such, they have many advantages over other energy technologies [1,2]. Since the Seebeck effect was discovered in 1821, researchers have done much work to accelerate the wide application of thermoelectric devices in cooling and electricity generation.

During the past 20 years, thermoelectric devices have been widely used to convert waste-heat energy from power plants, to power localized autonomous sensors, to collect waste energy from the exhaust of automotive vehicles, and to cool electronic devices with high heat flux used in aerospace systems [3][4][5][6][7][8][9][10][11][12].

Many researchers have focused on improving system performance by various methods. Sahin et al. [13] investigated the influence of thermoelectric pin geometry on the module's efficiency and maximum power output. The results indicated that pin geometry has an obvious effect on the modules under the various temperature differences applied between the two ends. The feasibility of using a TEG to power a thermoelectric cooling (TEC) device was explored by Khattab et al.
[14]. They obtained the best matching number of TEC and TEG units and achieved the desired result of using a solar thermoelectric generator to drive a small thermoelectric cooler for the greater part of a year. Rodríguez et al. [15] designed a calculation model to examine the thermal and electrical properties of a thermoelectric module. Using the fewest boundary conditions, they managed to obtain a design method with better encapsulation characteristics. The research group led by O'Brien et al. [16] made a comparison between several radioisotope heat sources, which were thought to be much easier to obtain than traditional ones, and made a comprehensive analysis of the thermal characteristics and radiation barrier problems. Yilbas et al. [17] explored the influence of dimensionless size and external load parameters on a thermoelectric module's efficiency. A two-stage solar concentrator designed by Omer and Infield [18] was applied to increase the temperature at the hot end of a thermoelectric module. The device improved the module's stability and efficiency by reducing its sensitivity to the light angle while keeping the concentration ratio at 20. The two-stage structure not only enhanced the light-gathering efficiency but also confined the air convection intensity in the tube. A device integrating traditional rooftop solar insulation material and a thermoelectric power generator, developed by Maneewan et al. [19,20], was applied to reduce indoor temperature in Thailand. Fans powered by a thermoelectric module were used to cool the cold end of the module. The device reduced the heat flux into the house and increased the efficiency of the thermoelectric module, balanced against the fans' total power and air convection intensity. An idea that couples commercially available thermoelectric generators (TEGs) to a water-fed heat exchanger was examined by Zhou et al. [21]. They demonstrated that, when the pin length is reduced while the number of pins is increased, the resulting reduction in flow resistance facilitates an increase in convective heat transfer as well as in ∆T, and thus a great increase in conversion efficiency. Xiao et al. [22] built a three-dimensional finite element model of a thermoelectric module based on the low-temperature thermoelectric material bismuth telluride and the medium-temperature thermoelectric material filled skutterudite. The numerical simulation results showed that a reasonable thermal design of multi-stage models takes full advantage of the characteristics of the thermoelectric materials and effectively improves the power generation performance. Nguyen et al. [23] explored the behavior of thermoelectric generators exposed to transient heat sources. Comparing simulation results with experimental results, they found that the Thomson effect plays a significant role in accurately predicting the power generated by the device. Rezania et al. [24] carried out a co-optimized design of microchannel heat exchangers and thermoelectric generators. Zhang et al. [25] designed a novel solar thermoelectric cogenerator that can supply electric power and heat simultaneously by adding TEG modules to the heat pipes in evacuated tubular solar collectors; the collector efficiency, output electrical power, and electrical efficiency were calculated to be 47.54%, 64.80 W, and 1.59%, respectively.

Recently, Chen et al.
[26][27][28][29] presented comprehensive numerical and analytical investigations of thermoelectric systems under various working conditions, together with the influence of the key geometric parameters of an integrated thermoelectric power-generating and cooling system on cooling power and overall performance. In addition, Chen et al. [30] reported an experimental study on thermoelectric modules for power generation at various operating conditions. They declared that a thermoelectric module is a good choice for power generation in recovering waste heat if the temperature of the system is below 150 °C. After that, Wang et al. [31] investigated the performance of a TEG combined with an air-cooling system designed using two-stage optimization. In this research, they used an analytical method to model the heat transfer of the heat sink and employed a numerical method with a finite element scheme to predict the performance of the thermoelectric generator. They found that, at the obtained compromise point, although the heat sink efficiency is reduced by 20.93% compared to that without the optimal design, the system output power density is increased by 88.70%, which is recommended for the design of the heat sink.

From the research shown above, we find that increasing the temperature difference between the hot and cold ends is a good method for increasing the thermoelectric efficiency, limited by the Carnot efficiency, if appropriate TEG materials are selected. A high temperature difference will, however, cause thermal stress within the materials and between the interfaces of different materials: the higher the temperature difference, the larger the thermal stress. This phenomenon has attracted the attention of several researchers. Merbati et al. [32] carried out a thermodynamic and thermal stress analysis of thermoelectric power generators with different pin geometry configurations. They obtained the temperature and thermal stress fields and evaluated the thermal efficiency, maximum power output, and thermal stress in the modules. Their findings showed that trapezoidal pins can alleviate the thermal stress in the module and simultaneously increase the efficiency. Ziabari et al. [33] addressed the problem of reducing interfacial shearing stress in a thermoelectric module (TEM) structure using analytical and finite-element-analysis (FEA) modeling. They also calculated the maximum shearing stress occurring at the ends of the peripheral legs (supposedly responsible for the structural robustness of the assembly) for different leg sizes. The results showed that shearing stress can be effectively reduced by using thinner (smaller fractional area coverage) and longer (in the through-thickness direction of the module) legs and compliant interfacial materials. Wu et al. [34] performed a numerical analysis of the thermodynamic and thermal stress performance of a thermoelectric module. They considered the variation of the thicknesses of the materials and examined the influence of high heat flux on thermal efficiency, power output, and thermal stress. The results indicated that under high heat flux imposed upon the hot end, the thermal stress is so strong that it has a decisive effect on the life expectancy of the device.
Much investigation has been carried out to examine the thermodynamic performance of thermoelectric devices. However, the thermal stress generated in TEG modules under different heating uniformities, due to temperature gradients, has to a certain extent been neglected. Thermal stress induced by a high temperature gradient in the device undoubtedly decreases the predicted life cycle of the module [34]. For solar thermoelectric modules, a much higher concentration of solar energy is applied to the hot end of the TEG to achieve higher system efficiency. However, a higher concentration of solar energy may lead to heating non-uniformity on the hot end and thus cause larger thermal stress among the different materials [35], which significantly influences the life cycle of the TEG. A better understanding of the operating features of thermoelectric modules under different heating uniformities thus becomes essential, but little similar work can be found in previous studies. The location of the maximum stress and the level of the thermal stress intensity are obscure, and the positions with the highest probability of cracking have not been given. An optimum structure is one that decreases thermal stress while having little impact or even a positive effect on the device's thermoelectric performance. In this paper, a numerical model is presented to examine the effect of the heating uniformity on the module's stress level.

Physical Model

The thermoelectric model tested in this paper is presented in Figure 1 and includes a ceramic plate, conducting strips (copper), and thermoelectric pins. The basic thicknesses of the copper strips and ceramic plates are taken as 0.6 mm. The size of the thermoelectric pins is a × a × a = 3.00 mm × 3.00 mm × 3.00 mm, and the distance between two pins is 0.60 mm. The TEG model, with 18 P-type and N-type legs, is thermally parallel-connected and electrically series-connected in order to achieve considerable power and voltage output. The most commonly used low-temperature thermoelectric material, Bi2Te3, is selected for the legs, and aluminum oxide (Al2O3) ceramic is selected as the material of the ceramic plate.

Actually, a single thermoelectric module's life cycle is random, but the distribution of the life cycles of a large number of thermoelectric modules can be predicted. The decisive factor for the life cycle of the module is the thermal stress intensity. The Young's moduli of the aluminum oxide ceramic (Al2O3) and Bi2Te3 differ greatly, so the positions most likely to crack are the interfaces between the copper strips and the ceramic plates and the edges of the thermo-pins. The material properties used in the previous study [35] are incorporated in the present simulations and are listed in Tables 1 and 2 [32].
The material properties used in the previous study [35] are incorporated in the present simulations and are listed in Tables 1 and 2 [32].

Mathematical Model and Boundary Conditions

The analysis of the TEG model is divided into two sub-steps: heat transfer analysis and thermal stress formulations.

Heat Transfer Analysis

(1) Governing equations

In this paper, a finite element method is employed to simulate the temperature field in the thermoelectric modules. The equations coupling the temperature T and the electric potential V are:

∇ · J = 0 (1)

J = −σ [∇(μ/e + V) + α∇T], q = αTJ − k∇T (2)

In the equations, k is the thermal conductivity at zero current; the vector J is the electric current per unit area; ρ is the electric resistivity; σ = 1/ρ is the electric conductivity; α is the Seebeck coefficient; μ is the chemical potential; and e is the charge of a charged particle. Note that k, ρ, α, and σ of the TE materials are functions of temperature ([35], Table 1).

Thermoelectric modules are not ideally one-dimensional in structure. Equation (1) reflects the multidimensional effects that can be pronounced at the interfaces of the module. Equations (1) and (2) form a system of two coupled partial differential equations with two dependent variables: temperature and electric potential. Equation (1) can be separated into four parts, which respectively reflect the magnitude of the thermal energy transferred by conduction, the Joule heat, the heat absorbed by the Peltier effect, and the heat absorbed or released by the Thomson effect.

(2) Boundary conditions

Some reasonable assumptions are made to simplify the mathematical model without too much deviation from real conditions:

(a) All surfaces of the model except the hot end and the cold end are considered to be heat-insulated.
(b) Heat convection on all the surfaces is neglected.
(c) Material properties do not vary with position.
(d) Electrical contact resistance and thermal contact resistance are not taken into consideration.

Note that all the assumptions introduced above are aimed at excluding unimportant factors that have little effect on the results and at avoiding analyzing two or more factors simultaneously.

The boundary conditions for the thermoelectric heat transfer analysis are as follows. The actual TEG device is cooled by a heat sink connected to the cold end, with water serving as the working medium. The cooling of the TEG device is considered to be very effective, so a boundary condition of the first kind is applied to the cold end of the TEG module with a fixed value of 25 °C. This is reasonable for the TEG model, as the heat sink can be seen as a temperature buffer such that slight temperature changes at the cold end can be neglected. A specified temperature is therefore applied to the cold end:

T_cold = 25 °C

In this paper, the total heat flow applied to the hot end is assumed to be a constant value P, and the values of P for the different case series are 4.41 W, 8.82 W, 13.23 W, 17.64 W, and 22.05 W, respectively.

The area over which the heat flux is imposed on the hot end can differ from the total area of the hot end, because the irradiance beam of solar energy concentrated on the system will not exactly match the surface of the thermoelectric system's hot end. We therefore define the heating uniformity parameter as:

U_f = A / S

where A is the heated area of the hot end, which receives the radiation energy, and S is the total area of the hot end. The total heat flow P is the same within one case series, and thus the heat fluxes vary within one case series, with U_f being 10%, 30%, 50%, 70%, and 100%, respectively.
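To make these definitions concrete, the following short Python sketch (our illustration, not part of the study) tabulates the heat flux implied by each combination of total heat flow and heating uniformity, using the relation HF = P/A = P/(U_f · S) made explicit in the next paragraph. The hot-end area S used below is an assumed placeholder, not a value from the paper; the true value follows from the geometry in Figure 1.

```python
# Minimal sketch (ours, for illustration): heat flux on the heated area
# for every combination of total heat flow P and heating uniformity U_f.
S = 0.0216 * 0.0216                            # assumed hot-end area, m^2

P_series = [4.41, 8.82, 13.23, 17.64, 22.05]   # total heat flow, W
Uf_series = [0.10, 0.30, 0.50, 0.70, 1.00]     # heating uniformity U_f = A/S

for P in P_series:
    for Uf in Uf_series:
        A = Uf * S                             # heated area, m^2
        HF = P / A                             # heat flux, W/m^2
        print(f"P = {P:5.2f} W, Uf = {Uf:4.0%}: HF = {HF / 1e4:6.2f} W/cm^2")
```

As the table it prints shows, lowering the uniformity at a fixed total heat flow concentrates the same power on a smaller area, which is the mechanism behind the local temperature-gradient and stress increases discussed later.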
The relationship between the heat flow and the heat flux in one case is:

HF = P / A = P / (U_f · S)

where HF is the heat flux. The magnitudes of the heat flux chosen in this paper are common in electronic components. The corresponding peak temperature of the hot end of the TEG is not higher than 250 °C, which is a reasonable value for thermoelectric power generator devices.

A reference voltage is applied to a point on the copper strip surface:

V = 0

All the surfaces of the legs are exposed to an electrically insulating gas, and the current must be parallel to the surfaces.

Thermal Stress Analysis

(1) Governing equations

Because the thermal conductivity of a material is a function of temperature, a thermoelectric module is not strictly one-dimensional, and the thermodynamic and mechanical characteristics in the direction of the y-axis are nonlinear. When the temperature distribution in the system is considered, the part of the heat converted to electric energy is neglected, as it accounts for only a small portion (i.e., less than 5%) of the total heat flow and has virtually no effect on the conclusions. The steady-state energy equation in the whole TEG is:

∇ · (k∇T) = 0

where k is a function of temperature as shown in Figure 2. From the equations shown above, a temperature field is obtained by numerical simulation, which is then applied to the thermal stress analysis.

A thermal stress analysis method similar to that utilized in [34,35] is employed to evaluate the thermal stress intensity in the model. The analysis of the thermoelectric generator is divided into two sub-steps: thermodynamic analysis and thermal stress formulations. The temperature field obtained from the thermodynamic analysis is used to calculate the thermal stress field in the model. There is no doubt that the temperature field and the deformation influence each other. It should be mentioned, however, that the temperature field significantly affects the thermal stress field while the opposite effect is not obvious, as the deformation is quite small compared to the model's geometric dimensions.

The displacement-strain relations take the standard small-strain form:

ε_ij = (∂u_i/∂x_j + ∂u_j/∂x_i) / 2

A non-symmetrical Jacobian matrix expresses the stress-strain relationship in dimensionless form, with the thermal strain α·ΔT subtracted from the total strain in the constitutive relation. The mechanical and thermodynamic equations are coupled to obtain the temperature and thermal stress fields in the module.
If the three principal stress values are not equal to zero in the module, we denote them σ1, σ2, and σ3 (with σ1 ≥ σ2 ≥ σ3). The maximum normal stress σmax, the minimum normal stress σmin, and the maximum shear stress τmax are then:

σmax = σ1, σmin = σ3, τmax = (σ1 − σ3) / 2

(2) Boundary conditions

The boundary conditions for the heat transfer analysis are listed in Equations (5) to (9). The corresponding heat flux magnitudes are common in electronic products; a high heat flux leads to a considerable thermal stress level in the model.

Computational Procedure and Verification

A grid system identical to the one employed in the thermodynamic and thermoelectric analysis is applied to the thermal stress analysis. The finite element method (FEM) calculations are performed using the general analysis package ANSYS 14.0. The thermal 8-node brick element SOLID70 and the structural 8-node brick element SOLID185 are used to discretize the computational domain. The iterations continue until the relative errors of the heat flow and the electric current are both below 1 × 10⁻⁴. It has been verified that the commercial ANSYS package can produce credible results [29].

In order to test the grid-independence of the numerical simulation, three cases with grid numbers of 11,025, 45,325, and 88,200 are tested (for a single couple of thermo-pins, the respective numbers are 612, 2518, and 4900) under the same boundary conditions. When the external resistance is chosen as 0.165 Ω, the numerical simulation results indicate that the external voltages are 0.2205, 0.2202, and 0.2199 V, respectively. Another series of tests was carried out to check the stress intensity in a single couple of thermo-pins for the same grid numbers of 11,025, 45,325, and 88,200; the maximum thermal stresses are 876, 877, and 877 MPa, respectively. The deviations are negligible, which demonstrates that the numerical calculations are grid-independent for these cases. A grid number of 45,325, shown in Figure 2, is thus selected as the mesh system in this paper.

Few results coupling the temperature and thermal stress of thermoelectric systems have been reported in recent years. One way to verify the simulation method used in this work is to employ the geometrical model of a previous study [34] and compare the temperature and thermal stress results with those reported there. The deviations between the present results and the previous results at given points in the same geometrical model are less than 2%. Further validation of the numerical simulation will be carried out using experimental results concerning both the system output power and the thermal stress under non-uniform high heat fluxes.
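As a concrete illustration of the stress measures defined at the beginning of this subsection, the following Python sketch (ours; the sample tensor values are arbitrary) recovers the principal stresses of a symmetric stress tensor from its eigenvalues and evaluates σmax, σmin, and τmax.

```python
import numpy as np

# Illustrative sketch: principal stresses and maximum shear stress
# from a symmetric Cauchy stress tensor (values in MPa are arbitrary).
sigma = np.array([[650.0, 120.0,  40.0],
                  [120.0, 300.0,  60.0],
                  [ 40.0,  60.0, 150.0]])

# Principal stresses are the eigenvalues of the symmetric tensor.
s3, s2, s1 = np.linalg.eigvalsh(sigma)   # eigvalsh returns ascending order

sigma_max = s1                 # maximum normal stress
sigma_min = s3                 # minimum normal stress
tau_max = (s1 - s3) / 2.0      # maximum shear stress

print(f"sigma_1..3 = {s1:.1f}, {s2:.1f}, {s3:.1f} MPa; tau_max = {tau_max:.1f} MPa")
```

In a post-processing workflow, the same reduction would simply be applied at every node of the FEM solution to locate the stress maxima reported in the results below.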
Temperature Distribution of the TEG Model

As can be seen from Figure 3, the highest temperature appears at the center of the top surface for all cases. As the heating uniformity decreases from 100% to 30%, the temperature becomes increasingly inhomogeneous over the surface of the TEG hot end. The temperature gradient becomes higher, and the maximum temperature increases from 187 °C to 222 °C, which is consistent with our prediction. The temperature increases quite moderately, however, and it appears that the ceramic plates play an important role in spreading the heat flux. As can be seen in Figure 3b-d, even though the heated area on the hot end has the geometrical characteristics of both central and axial symmetry, the temperature distribution shows only the feature of axial symmetry; this is because the heat conductivity of the copper sheets connecting the thermocouple arms is far larger than that of the thermocouple arms and ceramic plates.

There is no doubt that the difference in temperature distributions in the module will lead to voltage output variations for different heat flux concentration rates. At the center of the module, thermoelectric couples working under a higher temperature difference will have a larger Seebeck voltage output, while the voltage outputs of the couples at the margins are much lower. The Seebeck voltage generated by the module is a function of the temperature distribution of the model; thus the heat concentration rate will exert a significant influence on the energy conversion efficiency of the module. As displayed in the figures, the temperature gradients in the model increase with the concentration rate, which should be reflected in increasing thermal stresses for these cases.
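For orientation, a back-of-the-envelope estimate of the module's open-circuit Seebeck voltage is sketched below. This is our illustration, not a result from the paper: the Seebeck coefficient is an assumed, typical Bi2Te3 magnitude, and every couple is assumed to see the same temperature difference.

```python
# Back-of-the-envelope estimate of the module's open-circuit Seebeck voltage.
# Assumptions (ours, not from the paper):
#   - all 18 couples see the same temperature difference;
#   - |alpha| ~ 200 uV/K per Bi2Te3 leg, i.e. ~400 uV/K per P-N couple.
n_couples = 18
alpha_couple = 400e-6                 # V/K per couple (assumed typical value)

T_hot, T_cold = 187.0, 25.0           # deg C: the Uf = 100%, P = 22.05 W case
dT = T_hot - T_cold

V_oc = n_couples * alpha_couple * dT
print(f"Estimated open-circuit voltage: {V_oc:.2f} V")   # ~1.17 V
```

In the real module the marginal couples see a smaller local temperature difference, so this uniform-ΔT figure is only an upper-bound orientation, consistent with the qualitative argument above.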
Thermal Stress Distribution of the TEG Model

Figure 4 shows that as the heating uniformity decreases from 100% to 30%, the maximum stress increases from 1330 MPa to 1467 MPa. The locations of the highest thermal stress are mainly distributed over the interspace between two copper sheets and the ceramic plate on the hot end. The relatively low thermal conductivity of the ceramic and the very high thermal conductivity of the copper in this region produce high temperature gradients between these two materials, which is responsible for the high thermal stress in these regions.

The thermal stress differences among these cases are not as large as expected. This can be explained by the buffering effect of the ceramic plate. Although the temperature gradients reach their peak value at the hot end of the model, the highest thermal stress occurs at the edges of the interface between the copper and the ceramic, where the difference in expansion coefficients between the two materials exacerbates the thermal stress intensity.
Thermal Stress Distribution on the Horizontal and Longitudinal Cross-Section

The detailed positions of Lines 1 and 2, which serve as reference lines for the thermal stress distribution in the horizontal and longitudinal directions, are shown in Figures 5 and 6, and the results are shown in Figures 5, 7, and 8. As clearly shown in Figures 5 and 7, the high thermal stress region of the TEG model is mainly distributed in the upper part of the model. This is reasonable: since the temperature near the hot end is much higher, intensive local deformation produces severe thermal stress concentration. The phenomenon is typically reflected at the central parts of the thermo-pins, where the mismatch of the deformations greatly enhances the possibility of material failure. To make the situation worse, the interfaces between different materials are often joined by metal alloy solder, which is more likely to be damaged. This agrees with the longitudinal distribution of the copper sheets: the larger thermal conductivity of the copper sheets lowers the longitudinal temperature gradient of the TEG model, and as a result the longitudinal stress is lowered.

In Figure 7, the highest thermal stresses in the F1 section for Uf = 100%, 50%, 30%, and 10% are 877, 910, 956, and 1054 MPa, respectively. The thermal stress for the last case is thus about 20% higher than for the first case, while the energy conversion efficiency difference between the two cases is quite small. In Figure 8, the highest thermal stresses in the F2 section for Uf = 100%, 50%, 30%, and 10% are likewise 877, 910, 956, and 1054 MPa, respectively, presenting a trend similar to that in Figure 7. It is obvious that non-uniform heat flux distributions weaken the reliability of the model while doing little to improve the efficiency of the module. Measures should be taken to keep the concentration of the solar energy concentration system uniform.

From Figures 5, 7, and 8, we can conclude that the concentration of thermal stress is more likely to occur where the shorter edge of the copper sheet connects with the ceramic plates. For the TEG model, the most fragile parts are the regions shown in Figure 5. It is therefore quite necessary to increase the strength of the material or to alleviate the thermal stress magnitude in these regions.
The Effect of Heating Power on Temperature and Thermal Stress

As shown in Figures 9 and 10, there is a linear relationship between the heating power and both the highest temperature and the maximum stress. As the heating power increases from 4.41 W to 22.05 W, the highest temperature increases from 56 °C to 187 °C, and the maximum stress increases from 420 MPa to 1330 MPa (Uf = 100%). The efficiency of the TEG is positively correlated with the temperature difference between the hot and cold ends; the high efficiency of the model therefore comes at the cost of TEG reliability.
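Since the reported relationship is linear, the two quoted endpoints are enough for a rough interpolation. The Python snippet below (our illustration; the endpoint values are those quoted in the text for Uf = 100%) estimates the highest temperature and maximum stress at the intermediate heating powers of the case series.

```python
import numpy as np

# Linear interpolation between the two reported endpoints (Uf = 100%):
#   P = 4.41 W  -> T_max = 56 degC,  sigma_max = 420 MPa
#   P = 22.05 W -> T_max = 187 degC, sigma_max = 1330 MPa
P = np.array([4.41, 22.05])
T = np.array([56.0, 187.0])
S = np.array([420.0, 1330.0])

t_slope, t_icpt = np.polyfit(P, T, 1)   # degC per W, offset
s_slope, s_icpt = np.polyfit(P, S, 1)   # MPa per W, offset

for p in [8.82, 13.23, 17.64]:          # intermediate cases from the paper
    print(f"P = {p:5.2f} W: T_max ~ {t_slope*p + t_icpt:5.1f} degC, "
          f"sigma_max ~ {s_slope*p + s_icpt:6.1f} MPa")
```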
The Effect of Heating Uniformity on Temperature and Thermal Stress

From Figures 11 and 12, we can see that as the heating uniformity decreases from 100% to 10%, the highest temperature increases from 59 °C to 76 °C when the heating power is 4.41 W, and the corresponding maximum stress increases from 418 MPa to 472 MPa; the absolute values of the slopes of the highest temperature and maximum stress curves increase as the heating uniformity decreases. When Uf is smaller than 50%, the absolute value of the curve slope changes dramatically, indicating a significant change in the highest temperature and maximum stress. When Uf is larger than 50%, however, the slopes change only slightly. The same trend is found for the maximum thermal stress. Lower uniformity leads to a higher heat flux density in parts of the model; as a result, the temperature gradient is further increased in some local regions.

The thermo-pins are connected in series, so local damage will lead to the failure of the whole device. Since heating non-uniformity brings higher thermal stresses while bringing few benefits to the system, non-uniform heat flux distributions should be avoided in future designs. It should be kept in mind, however, that the changes in the maximum temperature and thermal stress of the model are quite small when Uf > 50%, so a higher energy efficiency may be achieved without significantly reducing the reliability of the device.
Figures 13 and 14 show the maximum increments of the highest temperature and maximum stress under varying heating uniformity. When Uf is 70%, 50%, 30%, and 10%, the increment of the highest temperature is 6%, 9%, 20%, and 45%, respectively, and the increment of the maximum stress is 3%, 5%, 10%, and 22%, respectively. In practice, when the heating uniformity is larger than 70%, it does not have a strong effect on the highest temperature and maximum stress (<6%), and the increment will not significantly influence the life cycle of the device. It is important to keep the heat flux on the hot end uniform, but the requirement is not strict. Thus, the requirements for the design of the solar energy concentration device are not especially rigorous.
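Read as a design rule, these increments can be condensed into a simple check. The sketch below (ours, with the values transcribed from the figures quoted above) flags whether a concentrator whose uniformity may drop to a given level keeps the temperature penalty within the roughly 6% band treated as negligible in the text.

```python
# Increments of the highest temperature and maximum stress versus heating
# uniformity, as quoted in the text (Figures 13 and 14).
increments = {          # Uf: (T_max increment %, sigma_max increment %)
    0.70: (6, 3),
    0.50: (9, 5),
    0.30: (20, 10),
    0.10: (45, 22),
}

def temp_penalty(uf_min: float) -> float:
    """Conservative T_max increment (%) if the uniformity may drop to uf_min.
    Increments grow as uniformity falls, so take the tabulated point at or
    just below uf_min."""
    lower = [u for u in increments if u <= uf_min]
    return increments[max(lower)][0] if lower else float("inf")

for uf in (0.80, 0.70, 0.50, 0.30):
    p = temp_penalty(uf)
    verdict = "acceptable" if p <= 6 else "too high"
    print(f"Uf >= {uf:.0%}: worst T_max increment ~ {p}% -> {verdict}")
```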
Temperature Gradient and Thermal Stress Distribution on Line 1 and Line 2

The distributions of the temperature gradient and thermal stress along Line 1 are shown in Figure 15. As expected, the higher the temperature gradient, the larger the thermal stress in the model. Both parameters have their peak values between x = 0.0066 and 0.0072 m and between x = 0.0138 and 0.0144 m. The absolute temperature gradient varies from 145 to 4.50 × 10⁴ K/m, and the thermal stress from 250 MPa to 720 MPa. This phenomenon can be explained by two factors: first, these locations are close to the center of the heat flux imposed at the hot end; second, the large thermal conductivity differences among the copper, ceramic, and thermoelectric materials produce large temperature gradients in these local regions. Furthermore, severe deformation mismatch arises from the large differences in expansion coefficients among these materials.
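The quoted peak intervals coincide with the inter-pin gaps implied by the geometry of the Physical Model section (3.00 mm pins, 0.60 mm spacing). The sketch below (ours, assuming the x origin of Line 1 sits at a pin edge) makes the comparison explicit.

```python
# Illustrative check: positions of the inter-pin gaps along Line 1,
# from the Physical Model geometry (3.00 mm pins, 0.60 mm spacing).
# Assumption (ours): the x origin of Line 1 coincides with a pin edge.
pin, gap = 3.0e-3, 0.6e-3        # m
pitch = pin + gap                # 3.6 mm

x = 0.0
for i in range(5):
    gap_start, gap_end = x + pin, x + pin + gap
    print(f"gap {i + 1}: {gap_start * 1e3:5.1f}-{gap_end * 1e3:5.1f} mm")
    x += pitch
# Gaps 2 and 4 span 6.6-7.2 mm and 13.8-14.4 mm, coinciding with the
# reported peak intervals along Line 1.
```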
As can be seen from Figure 16, when the heating uniformity is 10%, the thermal stress reaches peak values of 700 MPa and 580 MPa between z = 0.0108 and 0.0138 m and between z = 0.0144 and 0.0174 m, respectively, moving from the center to the edges along Line 2, and it decreases to 430 MPa between z = 0.0138 and 0.0144 m. The positions where the thermal stress reaches its peak values are the edges of the interface between the copper strips and ceramic plates, where the large difference in thermal expansion coefficients between the two materials leads to stress concentration. The points between z = 0.0138 and 0.0144 m are places where the ceramic is not bonded to the copper strips, allowing the ceramic plate to expand freely without restriction; this is the very reason why the thermal stress is relatively small there. Meanwhile, the thermal stress between z = 0.0108 and 0.0138 m in the center area increases from 550 MPa to 700 MPa as the uniformity decreases from 100% to 10%. In the edge regions (z = 0 to 0.003 m), however, the trend is reversed: the thermal stress decreases from 580 MPa to 51 MPa with increasing concentration ratio. This phenomenon is reasonable: as seen in Figure 3, because of the increasing spreading thermal resistance, the temperature in these regions decreases with the concentration rate, and lower thermal expansion is sure to reduce the thermal stress there.
Conclusions

A TEG model with 18 thermo-pin couples is established and analyzed by the finite element method in this paper. We examined the temperature and thermal stress distributions in the TEG model and obtained the most likely crack zones of the model for different heat flux concentration rates. The numerical simulation results indicate that:

(1) The non-uniformity of the heat flux imposed upon the hot end has a significant effect on the thermal stress of the TEG and the life expectancy of the device. When the heating uniformity exceeds 70%, however, the non-uniformity of the heat flux has little influence on the maximum thermal stress in the model. Uniform heat flux is favorable for the design of the solar energy concentration device, but it is not a strict requirement.

(2) The maximum temperature and thermal stress of the TEG model increase with the total heat flow. Higher efficiency of the model comes at the cost of the life expectancy of the device.

(3) The interfaces between the copper strips, the ceramic plates, and the thermo-pins are the places most likely to be damaged. When designing TEG modules, these positions should be strengthened in order to prolong the life cycle of the device.
Figure captions: Figure 1. Geometric dimensions of the thermoelectric model. Figure 5. The positions of cross-sections F1 and F2 and of Line 1 and Line 2 in the TEG model (legend unit: MPa for thermal stress). Figure 6. Front view of the referenced thermoelectric power generator. Figure 9. The effect of heating power on the highest temperature of the TEG. Figure 10. The effect of heating power on the thermal stress of the TEG. Figure 11. The effect of heating uniformity on the maximum temperature of the TEG. Figure 12. The effect of heating uniformity on the maximum thermal stress of the TEG. Figure 13. The effect of heating uniformity on the maximum temperature of the TEG. Figure 14. The effect of heating uniformity on the maximum thermal stress of the TEG.
Cities, Stages and Audiences: Rio de Janeiro and São Paulo in Two Acts

Prologue

"In the theatre, there are always three of us." Unpacked, this phrase of the French writer Alexandre Dumas fils (1824-1895) reveals the singularity of the show compared to the practice of reading. Virtually silent, the book "speaks in a low voice to a single person." Theatre, though, addresses "the thousand or 1,500 people assembled and has its roots in the tribune and the public arena" (Dumas fils, cited in Charle, 2012: 215). The size of this estimate tells us much about the popular reach of this art form on the stages of the major European cities of the nineteenth century: Paris, London, Vienna and Berlin. As the historian Christophe Charle demonstrates in Théâtres en capitales. Naissance de la société du spectacle à Paris, Berlin, Londres et Vienne, the successful plays of the period spread new social representations far beyond those classes with ready access to literature. Novels with print runs of around 100,000 copies only appeared towards the end of the nineteenth century, but plays performed more than a hundred times to large audiences had been frequent since the 1850s. The growing interest in the theatre was manifested on all sides: from the increasingly diversified public to the writers striving to make a name and some money as playwrights, accompanied by the swelling influx of would-be actors and directors. Social art, collective art, the art of representation, theatre is inseparable from urban life, multifaceted sociability, new methods of transportation, the movements of crowds, and a general increase in circulation at the international level.

The first of these moments spans from the end of the nineteenth century to the first decades of the twentieth in what was then the capital of the Brazilian Republic. Drawing from periodicals dedicated to cultural and artistic criticism, we shall see how the theatre, the main form of entertainment during the period, served to symbolically retranslate the hierarchies that structured the social life of the Carioca belle époque. We have chosen the trajectory of the composer Chiquinha Gonzaga in order to highlight how the management of these hierarchies, shaped by the artist's career, was fundamental to the elaboration of a repertoire deemed 'popular.' The second moment, centred in the 1940s and 1950s, takes us from Rio de Janeiro to São Paulo. The catching up of the Paulista theatre scene with international developments, enabled by the creation of the Teatro Brasileiro de Comédia, the incorporation of foreign directors, the introduction of new theatrical conventions and the valorisation of dramaturgy, is explored through the career of the actress Cacilda Becker (1921-1969) and the critic who most closely accompanied her work, Decio de Almeida Prado (1917-2000).
Here the comparative framework takes the form of a sociological experiment: we emphasize the central role played by the theatre in the two cities with the aim of reconstructing the not always linear movement of absorption and expansion of this artistic practice in urban environments, driven by the investment in culture as a medium and substrate for the crystallization of diverse elements of the modernity then evolving. One of these elements, relating to the transformations in gender relations, is visualized here through the careers of Chiquinha Gonzaga and Cacilda Becker. Composing and directing musical shows were very clearly marked out as male activities, hence the specific challenges encountered by Chiquinha over the course of her professional trajectory.2 Very different were the difficulties experienced by Cacilda, the actress who best symbolized the revival of the Paulista theatre scene in the period under analysis.

First Act

The city in revue

"We've an audience, we've an audience, we've an audience!"3 It was with real enthusiasm that the playwright and theatre critic Arthur Azevedo (1855-1908) welcomed the opening of the play Casa de bonecas (A Doll's House) in Rio de Janeiro at the end of May 1899. After a period in Lisbon, completed a month earlier, the company of the Portuguese actress Lucinda Simões travelled to the federal capital, bringing the new play directly from the European stages to the Teatro Sant'Anna, located on Praça Tiradentes, the city's cultural epicentre. The staging of the drama by Ibsen (1828-1906), previously unperformed in Brazil, was an overnight success among those sectors of the elite attuned to whatever was most modern in the arts, as well as being considered "a major artistic happening" by the press (Azevedo, 1899b).

Compared to the other shows staged daily in Rio de Janeiro's theatres at the time, Casa de bonecas deserved special attention. Not so much on account of the overseas origin of the text or cast, given that receiving theatre companies from abroad was commonplace in the capital, but because of what the play represented and indeed acted out. The theme of Nora and Torvald's failed marriage caused an upset among the spectators, who saw the young wife's decision to abandon her home, her husband and her children as a direct assault on existing gender norms. From the stage, Nora's resolute farewell, made moments before the end of the play and contrasting with the incredulous impotence of her husband, announced a redefinition of the social roles central to the relationship between men and women.
But it was not just the content of Ibsen's drama that caused an impact and, in many cases, discomfort. In his weekly column for the newspaper A Notícia, Arthur Azevedo (1899a), this time somewhat less enthusiastically, reflected on the staging by Lucinda Simões's company. Sparing in his praise for the cast, he argued that "a theatre play should be above all clear, and Casa de bonecas is not." In his view, Ibsen's text failed to meet the requirements of the "well-written play," symbolized by classic French drama: balanced characters who, in a linear plot filled with music, favoured inter-human action rather than subjectivism (Neves & Levin, 2008). The show presented at the Teatro Sant'Anna, with its unexpected and inconclusive ending, leaving the audience's expectations in suspense, diverged from this model. Unsurprisingly, Arthur Azevedo (1899a) considered it "[un]suitably arranged." "Frankly," the critic went on, "what would the audience make of a Brazilian author who made a stranger enter a family home, at an untimely hour, just to ask for a cigar and announce, in Sibylline fashion, that he was going to die? Imagine the poor Brazilian author who did that!"4

Indeed, they did not. Until the first decades of the twentieth century, Brazilian dramaturgy was some distance from the foreign theatrical avant-garde. The revolution triggered in Europe by the emergence of modern drama, championed by authors such as Chekhov, Strindberg and Ibsen himself, would take decades to reach the Brazilian stages (Szondi, 2001). Hence, what Arthur Azevedo identified as a dimension 'out of place' in Casa de bonecas alludes to the complex relationship between the theatre, the formation of the public and nationality in Brazil, recurrent topics in the discussions of republican intellectuals concerned to determine the symbolic place of the 'people' in the new regime.

In the years following the proclamation of the Republic, the theatregoing audience was accompanied by the idea that certain sectors of the intelligentsia held of it. Far from expressing a 'cultural unity,' something impossible for a nation lacking any prior identificatory 'sentiment,' it was imagined as an audience that, being endowed with a "public meaning," should give flesh, blood and voice to city dwellers. Of all the types of symbolic production available in fin de siècle Rio de Janeiro, the theatre was the cultural arena that best knew how to identify the contours of this new sensibility and catalyse them in the elaboration of a new social imaginary.5 While literature, backed by the nascent publishing market and the press, was limited to the literate minority (17.4% of the population in 1890),6 theatre shows, especially those associated with so-called revue theatre, attracted audiences with a wide variety of socioeconomic and cultural profiles. For the same show, as Tiago de Melo Gomes (2004: 35) writes, "an internally highly diverse audience would pay for tickets at a wide range of prices," thereby maintaining the group positions within the social hierarchy.
Playwright, columnist, poet and songwriter, Arthur Azevedo arrived in Rio de Janeiro from the northeastern Brazilian state of Maranhão in 1873 and, within a short time, had successfully built a prolific and prestigious literary career. His relations with other intellectuals, developed in the politically turbulent everyday world of the end of the Empire, shaped a common social experience that would be reflected in both the artistic production and the lifestyle of the group. Possessing a cultural capital uncommon for the period, but unable to make a living from their trade, these young literary types found in public service an ideal comfort zone between financial stability and a relatively flexible routine, allowing them to socialize intensely in the city's cafés and bars.

Arthur Azevedo was no stranger to the vicissitudes of this kind of life: he spent half of his working day in the doldrums of the office buildings and the rest of the time in Bohemian conviviality with colleagues from the profession. "They were almost always friends, sometimes opponents, but all sharing the consciousness and determination to fulfil their respective 'missions.' Arthur Azevedo's mission was to dedicate body and soul to the theatre" (Mencarelli, 1999: 47, our italics). His adherence to the nationalist ideology gave him a taste for Brazilian literature and for the fight for the "regeneration of the national theatre" (Azevedo, 1895), while his circulation through the elite spaces of a Francophile Rio de Janeiro, like the Alcazar Lyrique and the Teatro São Pedro, made him keenly aware of the fascination that shows like comic operas, operettas and revues of the year exerted over the hearts and minds of the population.7

The establishment of revue theatre as a profitable genre dates from the final quarter of the nineteenth century. Co-opting a variety of artistic languages, including poetry, music and dance, the narrative constant of the shows was a comic and parodic recapitulation of the events that had marked the year in the capital.8 The 1880s and 1890s saw the apogee of the genre, precisely when the development of mass communication and transport was changing the city's physiognomy. The channels provided by the press, which, as well as newspapers, included weekly magazines such as Fon-Fon and Kosmos, enabled an outpouring of the polyvalent talents of young writers eager to make a living from articles, stories, advertising slogans, verses and caricatures. Undoubtedly the risk inherent to this activity impelled them to venture onto the stages too, especially since, taking the French classic style as an influential model, the theatre produced in Rio de Janeiro frequently turned to a wide variety of dramatic source texts.
The high turnover of jobs indicated, on the one hand, the difficulties encountered in the specialization and autonomization of the literary craft, leading writers to assume "the roles of the press caricaturist, publicist, revue author and, not infrequently, actor" (Saliba, 2002: 43). On the other hand, the presence of polygraphic artists like Bastos Tigre, Raul Pederneiras and Oduvaldo Vianna was central to the configuration of a transversal theatre style. The constant recourse to humour and parody dialogued directly with a social experience still taking shape and conferred on revue theatre "an ambiguous place in comic production, probably on that threshold always difficult to discern between the cult and the popular" (94).

The fact that Arthur Azevedo translated French operettas into Portuguese while also authoring revues of the year created an uncomfortable situation for the exponents of nationalist purism. As a member of the Brazilian Academy of Letters since its foundation in 1897, he joined in the chorus of accusations that Brazil's dramatic arts were in decline; as a playwright and revue writer (revistógrafo), he was at the forefront of shows that were huge box office successes. Struck by this apparent inconsistency, the novelist Coelho Neto was especially forthright about his indignation with his Maranhense colleague. In a polemic that unfolded in the newspaper A Notícia in August 1897, he launched an attack on the "lowly author ad usum of a wild band of illiterates," a blatant allusion to Arthur Azevedo and the works written by him to entertain a supposedly ignorant and unruly public. For Coelho Neto, the talent and "comic élan" of his colleague, rather than being wasted on the "riotous and disconnected scenes of the revues" (Coelho Neto, cited in Mencarelli, 1999: 84), would have been better employed in genres like comedy, deemed 'superior' in terms of literary and artistic value.

Harsh opinions like those of the novelist were echoed by some sections of the press. In the two years during which it was published, Revista Theatral (1895: 1) frequently debated the quality of what was performed on the city's stages:

Given that the Theatre is the strongest and almost the only popular distraction, there is absolutely no reason for playwrights to write and impresarios to produce works without literary merit, gross and idiotic, since the affluence of spectators would be all the greater were the works shown to present some degree of refined artistic form.

The opinion of Revista Theatral's editors was echoed by Arthur Azevedo (1894) himself:

If the fluminense [Rio inhabitant] prefers to watch the representation of a magic show, an operetta or a revue of the year rather than a drama or comedy, this is because in these inferior genres the performance of the respective roles fully satisfies, while in drama or comedy our artists generally have no idea of the characters or the feelings that they are playing. What drives away the spectator is not the play itself, but the way in which it is staged and acted.

According to Arthur Azevedo, "inferior genres" in terms of "artistic form," like magic shows, operettas and revues of the year, were not "gross and idiotic," just more palatable and realistic to the sensibilities of the audience. In his view, if the comedies and dramas were well staged and performed, they would not 'drive away' the spectator.9
Hence he transferred the question of the supposed intrinsic quality of the genre to the contingencies of its execution on stage. Responding to Coelho Neto's provocation, he argued that, though he wished to reverse "the terrible state of the dramatic arts in Rio de Janeiro" (Azevedo, 1894), his work was based on market interests. Enjoying renown among the theatre producers of Rio de Janeiro, he claimed that there was no "company that [would reject] an original work from me... as long as that work made money" (Azevedo, cited in Mencarelli, 1999: 85). Azevedo's pragmatic and disenchanted tone stemmed from his understanding that the separation between 'public' and 'society' offered a narrow range of possibilities and, consequently, demanded adopting the right attitudes. In his view, "any art wishing to survive the marketplace had to match the 'public,' since 'society,' while it had its literary and dramatic preferences, would not assure the survival of the artistic production" (139-140).

Dedicating his career to dramaturgy therefore required a double adaptation: on one hand, to the structural limits of the literary craft which, unable to evade economic laws, obliged him to 'make money'; and, on the other, to the idea, shared with other intellectuals, of constructing a new sense of nationhood through the invention of a modern and republican social imaginary. Wavering between rejecting and welcoming those theatre genres in which the borders of notions like 'high' and 'low' culture became blurred by "flows of languages, ideas, expressive models, works and authors" (Miceli & Pontes, 2014: 9), Arthur Azevedo embodied the dilemmas and specificities of a porous cultural output.

Porosity is the metaphor used by the historian Bruno Carvalho (2013) to describe the sociocultural dynamic of Rio de Janeiro from the second half of the nineteenth century to the first decades of the twentieth. Marked by rapid demographic growth, the federal capital witnessed the configuration of new social classes distributed unevenly across urban space. From this process emerged forms of inhabiting and representing the city shaped by the constant "breaching of cultural screens" (Wisnik, 1983: 162, original italics), in which the absence of rigid boundaries between groups promoted the diffusion of cultural products beyond their contexts of production. Porous, these products symbolically retranslated the divisions and contradictions of a place in which modernization affected city dwellers unequally. Animated by the "interplay between tradition and invention, innovation and the past" (Carvalho, 2016: 23), the tropical version of the belle époque fomented the creation of new languages while the processes of social inequality intensified. But while the differentiation of theatre genres did not imply spatially segregated audiences, since different social classes would watch the same shows, the mechanisms of distinction operated through the porosity of culture, not in spite of it, disputing and defining understandings of the notion of 'public.'
Chiquinha Gonzaga and the republican taste

Determining whether the revue O bilontra, by Arthur Azevedo, staged in 1886, mattered more to the Rio public than Casa de bonecas, performed thirteen years later, is less important to us than exploring the meanings associated with their repercussion. The ranking of cultural and artistic products is a discursive device whose logic elucidates the ways in which power relations take on symbolic counterparts in the social world. At an unparalleled moment of new political forces irrupting in Brazil, a country whose condition as a dominated nation was persistently interposed with the republican project, the criteria for appreciating and evaluating artistic manifestations and works denoted the hegemony of outmoded aesthetic criteria. But while national dramaturgy, forced to elaborate a native repertoire based on established genres of serious French theatre, sought local contents for the same foreign form, other theatrical modalities flourished in the city. Despite the charge of cultural 'decadence,' they found a ready public and elaborated witty depictions of life in the capital.

At the end of the nineteenth century, Rio de Janeiro had become the cultural and artistic epicentre of Brazil, irradiating to the rest of the country, and beyond, the image of a civilized nation that had supplanted the colonial past under the banner of progress. The turning point signalled by the Republic, while altering little the political and economic frameworks of the Empire, was nevertheless crucial to the construction of expectations of what could be achieved with theatre shows.10 Amid a full-blown "battle of symbols and allegories" (Carvalho, 1998: 10) in which different sectors of the republican intelligentsia clashed with each other (liberals and positivists, to cite two), the aim was to constitute a popular imaginary that spoke the language of the new regime.

A leading figure in this process was the instrumentalist, composer and conductor Chiquinha Gonzaga. A woman of music and theatre, she participated actively in Rio's cultural production for more than 50 years, constructing a substantial career highly regarded by her peers. Her debut as a conductor in 1885, with the operetta A corte na roça, though poorly received by critics, opened the doors to a profession that she would pursue without interruption until her death at the age of 87. During this long period, she not only accompanied the different conceptions of popular taste, supposedly shared by all citizens; she also put these ideas into action, becoming renowned for her capacity to infuse her compositions with the best of a shifting, transversal and porous urban culture.
A daughter of the Carioca elite, Chiquinha Gonzaga had a privileged childhood. Access to education and the arts, though limited to the home, provided her with the foundations of a social destiny in which music was a sign of class and gender distinction simultaneously (Cesar, 2015a, 2015b). Her experience on the piano and in musical theory - begun under the tutelage of her family and honed over her youth - and her familiarity with urban musical genres like polka, modinha and maxixe were considered valuable skills in the artistic world and ended up directing her towards the humorous universe of operettas and revues of the year. However, her entry into this profession also depended on the circumstances of an erratic trajectory full of setbacks. Like Ibsen's Nora, Chiquinha also stretched the gender conventions of the period. By the time that she began to play and compose professionally, at the end of the 1870s, she had already abandoned her home, husband and children on two different occasions.11 Materially destitute and considered dead by her family, the only viable alternative was "to transform the piano, a mere ornament, into a means of work and an instrument of liberation" (Diniz, 2009: 103). But while this was only possible thanks to a class and gender experience that had given her the cultural credentials needed to do in public what other women did at home, it also depended on the acoustic landscape that coursed through the city. A figure esteemed by musicians and artists, accustomed to moving around in public space, Chiquinha Gonzaga consolidated her career by mediating in an appealing and entertaining form between "social classes and musical genres" (Carvalho, 2013: 92).

At the time of her debut as a conductor, Chiquinha Gonzaga was already familiar with the possibilities of theatrical work in Rio de Janeiro. After Festa de São João - a comedy of manners for which she herself wrote the music and libretto - went unperformed, and after Arthur Azevedo, the author of the revue Viagem ao Parnaso, denied her the opportunity to present it, Chiquinha Gonzaga finally made her debut with A corte na roça, authored by Palhares Ribeiro, thereby launching her career among Rio de Janeiro's cultural producers. With the difficulty inherent to all beginnings, Chiquinha appeared on the pages of the main newspapers of the time:

We sincerely rue the kind of score written by Mrs. Francisca Gonzaga for the farce that, raised to the heights of operetta, was performed at the Príncipe Imperial theatre the day before yesterday under the title of Corte na Roça. We rue it, because the music - good, well-written, original, a true gift, denoting something of real merit in its author - is yoked to an impossible libretto, implausible, and performed in an indecent and repugnant manner.

Motivated by the discrepancy between the 'impossible' libretto, the 'repugnant' performance and the 'well-written' music, the review of the show sought to suggest that the conductor's fledgling talent exceeded that of the theatre company employing her. A similar evaluation would be published the next day by the periodical O Mequetrefe (1885):

Corte na Roça... It's best we don't talk about such pitiable things. It's a shame really that Mrs. Chiquinha Gonzaga has wasted so much wax on such a bad corpse.
The music is good - good in all senses of the word. The distinctive composer must work, and work hard.

Blaming the "bad corpse" on the dramaturgy and staging, the positive impact of Chiquinha Gonzaga's debut earned her further opportunities among the capital's light theatre companies.12 The fact of being a composer (rather than an actress) certainly made her an exception among people working behind the scenes of culture, like playwrights, directors, actors and producers. However, the initial difficulties were gradually overcome by the composer's talent and dedication - and by the social capital of her origins.

The clash between production and critique, signalling the search for relative autonomy of the cultural field, primarily expresses the latter's attempt to create and implement its own laws in a challenge to the monopolizing of cultural legitimacy (Bourdieu, 1992). Although it was impossible for cultural producers to proclaim any material and symbolic independence from economic and political laws, signs of the dispute over legitimate cultural forms can be seen in the dialogue between subjects occupying distinct and frequently antagonistic positions.

The common ground where the symbolic battle for the destiny of Rio's theatre unfolded, therefore, was popular taste and the different meanings surrounding it. For it to figure as the object of a civilizing pedagogy idealized by the cultural elite and, at the same time, serve as ballast for the new theatrical enterprises, the public needed to be constantly monitored. With the gradual development of the urban middle classes, more people began to spend their free time out in the city. Between the second half of the nineteenth century and the first years of the twentieth, the options for entertainment were concentrated in the central region, but as the decades unfolded, not only did other spaces proliferate but also other kinds of attractions.13 The certainty that "the public, in the theatre [...], only loves those who entertain or move them"14 stimulated producers and impresarios to develop new strategies to diversify the range of entertainment available to Cariocas.

One of them was the combination of light theatre with cinema sessions, still a novelty in the first decades of the twentieth century. With production and maintenance costs lower than those of the theatre, cinema became an alternative form of entertainment whose accessibility to the general population forced theatre impresarios to reassess their own operations. Despite the vocal rejection of the creators of the "art of silence"15 by more conservative minds, cinema also played a conciliatory role insofar as screen and stage could entertain the same audience rather than compete for it. For Rui Barbosa (1920), for example, cinema was "condensed and speeded up theatre," taking as "a background reality, nature and universe in all their infinite variety of scenes." The point of contact between these two forms of cultural output arose from a shared social experience that, though limited to the urban middle classes and the elite, closely accompanied the 'pace' with which culture and the city were transforming.
Theatre, therefore, was not a term with a univocal meaning. Occupying the epicentre of Carioca symbolic life, its polysemy was both cultural and political. Business for some and project for others, it led to private ventures designed to entertain and generate profit, while also mobilizing sectors of the intelligentsia eager to construct a meaning for Brazilian nationality and see it represented on stage. Theatre companies thus had to vie for an audience with those proposing the nationalization of the dramatic arts.

The idea of creating a national company that would provide financial backing to shows by Brazilian authors, allowing them to dispense with the need to "make money" in the cultural market, was supported by various intellectuals at the turn of the century. To transform the city's theatres, "where slang and the coarse jokes corrupt popular taste definitively,"16 into spaces of dramaturgical quality, criteria needed to be established concerning the form and content of the plays, and their application ensured.

Today, with the exception of one theatre, São Pedro, which has transformed into a vast hall of immoral dances, all the theatres are functioning, but none of them contain a company that we could point to as national - marking a stage of our civilization and where we could find the characteristic sign of our nationality. The municipal theatre [...] is positively fated to be the temporary writing desk for the geniuses who visit us. It does not serve the purposes of the national company, which, as a school for educating popular taste, scares away the simple and imposes the tremendous expenses of the 'toilette' (A Noite, 1913b, our italics).

The creation of a 'national company' had clear objectives. As a 'school of education,' it would be opposed to the free supply and demand for entertainment, setting out the pedagogical bases for the constitution of 'popular taste.' Inaugurated in 1909 with a capacity for 1,739 spectators, the Theatro Municipal do Rio de Janeiro symbolized the aspirations of the belle époque elite to imitate the cosmopolitan style typical of the French toilette. Not coincidentally, then, its biggest attractions included shows by Parisian companies. But it was also chosen as a base for endeavours to nationalize the theatre - an idea whose original version could be traced back to a wish of Arthur Azevedo (1894) who, himself taking inspiration from the state-run Comédie-Française, argued in favour of urgent measures from the public authorities.
Imbued with diverse meanings and a theme of much controversy, the notion of the theatre runs in parallel to that of the public. The consulted documents show that during the first years of the Republic, the pendular movement between 'art' and 'entertainment,' 'foreign' and 'national,' 'public' and 'society,' signalled the joint efforts of various subjects to define Brazilian nationality and, in the process, the nature of popular taste. Working at the intersection between the political imaginary - one of the foundations for which was the idea of a civilization by stages - and the set of cultural representations and discourses concerning the proposed destiny of the national theatre, whose centrality in the symbolic life of Rio arose from its intimate relation with urban space, they reflected the eagerness of the intelligentsia to legitimize their political decisions and substantiate their cultural judgments. In this sense, the public, in acquiring the clear contours of a moral subject, became pivotal to the "forming of [Republican] souls."

INTERVAL

Though originating in France, revue theatre was "until the mid-twentieth century, the most characteristic genre of Brazilian theatre, the one that most excited the public, with its irreverent humour, sometimes in bad taste, its teasing tone, its critique of manners and its allegories concerning national life and politics" (Mattos, 2002: 108). As the decades passed, though, while it still excited the public, it became less and less popular with Rio and São Paulo's amateur groups, who were striving to transform dramatic values and instil new ways of conceiving the work of actors, actresses, directors and set designers. The Brazilian theatre scene's capacity to catch up with what was happening internationally depended on the work of amateurs, the localized attempts to renew local dramaturgy and the emergence of new theatrical conventions. An important figure in this process of renewal in Brazil was Louis Jouvet. In the view of this French actor and director, dramaturgy reigned supreme, since everything "derives from it" (Jouvet, 1958: 168). Hence all the components of the show had to be subordinate to the text, given that it only "acquires meaning when spoken, pronounced on stage or elsewhere [or] when addressed to someone: partner or public" (Jouvet, 1954: 67). This explained the raison d'être of stagecraft and the importance of the director, whose function of breathing life into the text was, until then, little appreciated by the public or by the majority of people directly involved with professional theatre in Brazil.

The work of the amateurs, combined with the presence of Jouvet and his company in Rio and São Paulo in 1941, enabled the subversion of the hierarchy of values enshrined in the country's theatrical landscape, where shows "were organized, so to speak, from the parts to the whole" (Prado, 1993: 95). After the works staged by Jouvet, in the view of Decio de Almeida Prado - the critic who became the "privileged conscience" (Magaldi, 2002: ix) of the São Paulo drama scene - "there was no longer any place for comedies of manners or 'fantasies' in festive form" (Prado, 1993: 159).

The protagonists and devices of this movement of theatrical renewal registered the presence of Jouvet in the country in distinct ways: Decio de Almeida Prado, directly, through his work as a theatre critic for the magazine Clima; Nelson Rodrigues, indirectly, through the mediation of the Rio group Os Comediantes [The Comedians], responsible for staging Vestido de noiva in 1943. In the recollection of Gustavo Dória (1975: 16-17), one of the group's members, the presence of the French director forced them to "reflect more carefully" on what "they should present as a repertoire," by calling attention to the fact that "any initiative that aims to establish in Brazil a theatre of quality, a theatre that truly reaches [...] an audience, would not be achieving anything were it to fail to celebrate national literature! [...] The point of departure was the Brazilian author."
The advice bestowed by Jouvet quickly became a fixed idea for the group: they needed to find a Brazilian dramatist who matched the pretensions of the group and of its Polish director, Zbigniew Ziembinski (1908-1978). Fleeing the war, the latter had arrived in Rio de Janeiro in 1941 and two years later would direct Vestido de noiva, by Nelson Rodrigues, considered a watershed in the history of Brazilian theatre.

SECOND ACT

Cacilda Becker: from the aisle to the proscenium

The meeting between Brazilian amateurs and foreign professionals sketched the initial framework for the renewal of Brazilian theatre. However, this movement did not unfold in linear fashion. On her debut as an actress in 1941, Cacilda Becker had never met Jouvet and was not even aware that he had been in the country during the year in question. The first thing that stood out about her stage presence was her extreme thinness for the standards of beauty of the period (her weight oscillated between 40 and 47 kilos) and her lack of ease with the gestures and codes of sociability of everyday life. Neither conventionally beautiful nor elegant for her era, Cacilda drew the attention of the critic and amateur director Alfredo Mesquita (1907-1980) due to the almost complete absence of these attributes. Having seen Cacilda act for the first time in 1941, in the play Coração (staged by the Raul Roulien Company), Alfredo Mesquita recalls the timid image of the young actress at the reception held by the painter Di Cavalcanti and his wife, Noêmia, for the company's cast at the couple's "adorable duplex" in downtown São Paulo:

With the brilliant arrival of the actors and various rounds of whisky, the encounter became livelier and louder. In one corner, huddled alone, a glass of Coca-Cola trembling in her shaking hands, her large eyes open wide, she observed with a startled expression the carousel all around her [...]. How she must be feeling abandoned, how she must be suffering, the poor thing! It was painful to see. So much so that I couldn't resist: I went over. I complimented her on her performance that night, trying to get her chatting, cheer her up. In vain. She tried to force a smile, which didn't come. Just her mouth twitched, almost into a grimace, while her eyes stared at me in terror. To avoid prolonging the suffering, I thought it better to abandon the 'mission.' Which is what I did without hearing her voice, not a single word. (Mesquita, 1995: 82-83)

The testimony of Alfredo Mesquita is notable for what he says out loud and for what he suggests between the lines. A member of the powerful Mesquita family, owners of the newspaper O Estado de S. Paulo, Alfredo had been socialized in the universe of the Paulista elite and had been able, thanks to the social, cultural and economic capital accumulated by the family, to support areas of his own predilection: culture in a broad sense and the theatre in particular. In this domain, he shared the company of actors and actresses, encouraging their careers and contributing to the professionalization of a number of them. The social distance between him and Cacilda, however, despite the proximity and affinity that they might have shared at a cultural level, was, at the moment of the encounter, insurmountable.
While over time he would help swell the actress's legion of admirers, becoming her friend and godfather to her son, it is also clear that he knew how to identify - without mincing his words and with the condescension typical of the socially highly confident - Cacilda's initial 'weaknesses,' before she became celebrated as the 'first actress' of the TBC (Teatro Brasileiro de Comédia) in the 1950s and turned into the 'elegant woman' of the 1960s: namely, her lack of 'beauty' and social 'aptitude.' Each on its own might not have caught Alfredo's attention, for although actresses like Laura Suarez (one of the vedettes of the era) and Bibi Ferreira were able to shift easily from Portuguese to French and then to English, the same could not be said of the majority of the professional actresses, who, unlike the amateurs, came from humble or lower-middle-class families with little formal education, many of them linked to revue theatre or drama troupes. But while their origin was 'low,' the 'biggest' were able to compensate for this 'lack' with some particular physical trump card, such as beauty in the case of Tônia Carrero and Maria Della Costa.

When Cacilda was born in 1921, her parents were living in Pirassununga in a wattle-and-daub house without piped water (Prado, 2002: 35). At the age of six, she and her two younger sisters, Cleyde (who would also achieve renown as an actress) and Dirce, moved to São Paulo with their mother, Alzira Becker (daughter of Protestant German immigrants who had come to Brazil in 1860), and their father, Edmundo Radamés Yacónis (descendant of Greeks and Calabrese Italians who emigrated to Brazil in 1880). Their father spent most of his time far from his daughters and wife, who practically had to provide for them alone. During this period, they went through one of the most difficult phases of their lives. In the words of the actress, "we even went hungry. One day I was forced to steal a bunch of vegetables for lunch. [...] Stealing I don't think was important: the hunger is what pains me still today" (Becker, 1995).

In 1929, Cacilda's parents separated and the girls moved to the house of their maternal grandparents in Pirassununga. Afterwards, they went to a farm where their mother had obtained work as a primary teacher at a rural school. Finally, they moved to Santos, where they lived in a favela. While the poverty was considerable and comfort almost non-existent, Cacilda, Cleyde and Dirce enjoyed a freedom of movement much greater than was usual for single young women at the time. Cacilda, who loved to dance, experimented as a modern dancer and made some interesting friends, among them the fine artist Flávio de Carvalho and the critic Miroel Silveira. As well as enabling her to meet the artists and intellectuals of Santos who frequented his parents' home, Miroel was responsible for Cacilda's entry into the world of theatre. Realizing that her wish to become a professional dancer would be difficult to achieve, and aware of the transformations taking place in the Rio theatre scene, he mentioned his friend to the director Maria Jacintha. Cacilda debuted in Rio de Janeiro, in 1941, in the role of Zizi in the play 3.200 metros de altura.
The actress and the critic

Also in 1941, Decio de Almeida Prado made his debut in São Paulo as a theatre critic in the magazine Clima (Pontes, 1998). While the year of entry was the same, their conceptions of the dramatic arts were, at that moment, completely different. Attuned to what was happening in French theatre, especially, and committed to building the conditions needed to implant a modern Brazilian theatre, Decio was light years away from Cacilda in this domain. Lacking almost any kind of capital (social, economic or cultural), when she took to the stage it was to continue a work routine that had been hindering the adjustment of Brazilian theatre to the transformations occurring on the international scene. She, who in her best moments as an actress would become "a pure flame burning before us" (Prado, 1969), had begun her career in the opposite direction to Decio's vision of the theatre.

In 1943, their paths crossed for the first time. Temporarily installed in São Paulo, she was directed by him in the play Auto da barca do inferno, written by Gil Vicente and staged by the Grupo Universitário de Teatro. Playing the role of the go-between Brígida Vaz, who raised girls for the canons of the See, Cacilda created the character virtually alone. In this production, Decio received two lessons from Cacilda:

First, that the vanity of the artist, the legitimate vanity of the artist, which was already considerable in Cacilda and would increase with age, has nothing to do with personal narcissism, with the desire to appear beautiful and attractive. The beauty that she pursued was of another kind. Second, that the art of acting demands just as much creative imagination as the art of writing. The dramatist provides the words. The rest, which at the time of the performance is nearly everything, falls to the actor. (Prado, 1993: 141)

This passionate assessment clearly reveals the size of the impact that the actress had on the critic. Despite preferring, up to then, 'tasty little' plays to those of indisputable literary quality, Cacilda found in Decio her most qualified interpreter, at once generous and demanding. The contact between them was possible thanks to the work of the amateur groups and, especially, the creation of the Teatro Brasileiro de Comédia (TBC) in 1948.17 Headed by Franco Zampari and backed by the support received from the newspaper O Estado de São Paulo, the actors, directors, critics, impresarios, drama teachers and set designers linked to the TBC formed, according to Prado (1993: 95), "a close-knit and combative squad which in a few years had subverted the entire framework of Brazilian theatre, impressing on it new practices and new principles."18
Decio was the house critic of this successful entrepreneurial and artistic initiative; Cacilda, its first actress. She surpassed herself with each new role, and her truly dizzying rise mirrored the growing prestige of the TBC among the Paulista public. As she matured as an actress, Decio was also becoming renowned as a critic. With each new play and each new character interpreted by Cacilda, the eulogies that he lavished on her multiplied in his unsigned column, in the form of editorial notes (as was customary at the time), in O Estado de São Paulo. Coming from someone like himself - university trained, in tune with the international drama scene and dedicated to producing an exhaustive and comprehensive critical review of each show seen - these enthusiastic evaluations were a long way from the 'candied' praise typical of impressionist critics.

In her first years of acting, Cacilda had assembled a highly varied portfolio of work that simultaneously contained the best and the worst of the theatre routine of the period: old methods of making and conceiving the theatrical arts were mixed with more modern methods that had emerged in the 1940s. When she joined the TBC at the age of 27, she therefore brought with her the "before and after" that divided the history of Brazilian theatre. But as her cultural education was still very sparse, it was only at the Teatro Brasileiro de Comédia that her career really took off, the result of a set of very precise circumstances, including a mixture of conditioning factors of a social, institutional, artistic and biographical kind.

Playing the part of Queen Mary Stuart in Schiller's drama, directed by Ziembinski, Cacilda proved, Decio argued, how mistaken they had been "to accept any limits for her"; in the role of Pega-Fogo, a child prematurely aged through suffering, she showed that "her immense possibilities (were) even vaster and deeper than we had imagined." Reaching a period of full artistic maturity, she took "our theatre to heights seldom attained even by the best theatre of other countries." On stage, the actress was like "a vibrant bundle of nerves," dispensing with "everything that did not constitute material for her art, everything that [was] not nervous sensibility" (Prado, 1993: 262). Thrilled by Cacilda's rise, the result of a carefully dosed mixture of discipline, talent, hard work and love for the profession, Decio provides a visually vivid picture of the actress's strength.
The idea that her strength resided in a "vibrant bundle of nerves" would become one of the aphorisms used to describe her by directors and theatre critics alike, the latter, in the wake left by Decio's thought, making his words about her their own. For the Italian Ruggero Jaccobi (1995: 137), who arrived in Brazil in 1946 and directed her three times, as soon as Cacilda "began to pronounce the words, they ceased to be words: they were threads, the bundles of threads of this nervous system. There was, so to speak, a kind of electricity in Cacilda's words and gestures. She moved across the stage releasing a series of electric shocks." In the words of Ziembinski (1995), the director with whom she most frequently worked over her career, one of Cacilda's mottos whenever he proposed a new show to her was: "Let's work! It's going to be hellish work!" For her, Ziembinski continued, "hellish work was a source of joy, the need for extreme effort. In the heat of work, the fight to achieve new values, she felt herself reborn, at the same time that her fragile body was transformed into the body of a giant, a shining body" (142).

This body allowed her to move between playing very different types. Not only by virtue of the external devices that she used to imbue the performed characters with verisimilitude - the majestic garments that she wore when portraying a queen, or the bandages with which she wrapped her breasts beneath her shirt to make her depiction of a boy all the more credible - but above all by her capacity to convert the humiliation and deprivation experienced in her own childhood and adolescence into a powerful interpretative key. This was something the people closest to her knew how to recognize, being themselves completely immersed, like the actress, in the world of theatre, as actors, directors or critics. Sábato Magaldi, disconcerted by the fact that the two roles that he most liked from the actress's career were male (Pega-Fogo and Estragon, in Esperando Godot [Waiting for Godot]), sheds some light on the significance of this coincidence. In his view, it derived not from the fact that Cacilda "appeared masculine on stage." On the contrary, "she was very feminine in so many creations." Her personal fragility, Sábato postulates, "is what lent Pega-Fogo and Estragon a profoundly human quality. Helplessness, sadness, perplexity in the face of life, suffering, humiliation - these were the raw material that came from the roots of her childhood and stuck to her characters, making them so authentic" (Magaldi, 1995: 19). Male, these two roles are the striking expression of how, in the case of the great actresses, much of their notoriety is associated with the embodiment of the mechanisms of deception produced by theatrical conventions. Making the body the most important substrate, theatre allows these actresses to circumvent the implacable imperatives of beauty and the constraints imposed by ageing - something difficult to achieve in other domains, like cinema, classical ballet and fashion, where the body is also central.
Neither beautiful nor shapely, "marked" forever, in her own words, "by poverty" (Becker, 1995), Cacilda triumphed because she maximized her skill as an actress in a very particular context of renewal in Brazilian theatre. Backed by the accumulated experience of foreign directors, Cacilda was able to make up for the deficiencies in her training, working round her less favourable physical attributes, becoming familiar with and eventually mastering the drama techniques and conventions that made the TBC the model par excellence of Brazilian theatre until the mid-1950s.

The city on stage

"The women were the bosses in the theatre" (Della Costa, 1984). Contradicting the obvious, the words of the actress Maria Della Costa (1926-2015) express the mix of objective and subjective conditions that enabled actresses to assume leading roles on and off the stage. In the division of labour that governed the diverse modalities of drama performed during the period, the marks of gender were present in all, but with distinct inflections in each. While the work of acting was available to both sexes, the work of dramaturgy was a privilege or attribute afforded to men.19 Between these two poles were the directors and those women responsible for rehearsals, with the former clearly differentiated. The prestige attained by actresses, beyond the talent and acting skill of each individually, was due to their participation in the movement of implanting and sedimenting the aesthetic principles and work routines of the modern theatre, and to the transfer of social and cultural authority to the audience watching them - especially the bourgeois audience who frequented the Teatro Brasileiro de Comédia, the symbol of Paulista theatre at the turn of the 1940s and an obligatory reference point in the 1950s.

The close relationship between the theatre and the city, the basis for drama activity in any capital, acquired specific contours in São Paulo as a result of the influence of the University of São Paulo (USP) on the local cultural dynamic.20 The higher social profile that the university gave to those involved in intellectual activity, the presence of foreigners (professors and directors), the new spaces of sociability and professionalization, the alterations being produced at a rapid pace in the city's social structure and demographics - all this, combined, proved decisive in the creation of new modalities of intellectual work and in the consolidation of the modern theatre.

"Exploding in its number of inhabitants, [São Paulo] broke its 400-year-old shell and became internationalized" (Prado, 1998: 7). From 1920 to 1950, the population rose from 579,000 to 2,198,000. The result was a shared belief in the future, "the symmetrical substitution of lifestyles rather than the slow disappearance of a world whose death throes could be accompanied with lucidity" (Mello e Souza, 1980: 110). This was demonstrated by the most inventive Brazilian dramaturgy of the time and by the creation of various companies, in which the presence of the first actress continued to be central to the production and to the success of the modern theatrical ventures.
In the context of transformation through which Brazilian culture was then passing, signalled by the emergent Cinema Novo and by the intricate interweaving of theatre, radio and the beginnings of television, the professionalization of the modern drama scene was made possible thanks to the decisive contribution of companies formed by actresses, combined with the presence of foreign directors. Coming from various areas of Brazil's lower social classes, these actresses infused modes, dictions, corporeality, expressivity, humour and signals with a social energy that scintillated and reverberated on stage with the geographic and social mobility characteristic of the transformations then taking place in the country's metropolises (Pontes, 2010).

In this manifestation of new viewpoints onto Brazilian society, there was a place for the poor or modest social origin of various actresses to be converted into an expressive substrate for their theatrical activity, as we saw in the career of Cacilda Becker. The renown, the belated cultural education, the access to social circles otherwise unthinkable for someone from their social background, the acquisition of a series of material and symbolic goods and, above all, the consecration as the biggest actress of the period were essential to balancing the public image of the actress with her own self-image. But not to the point of obliterating the most painful and tumultuous feelings deriving from the deep experience of poverty, which contained, buried within it, a distinct form of the bodily hexis exhibited by the socially excluded.

EPILOGUE

Situated in distinct contexts, the trajectories of Chiquinha Gonzaga and Cacilda Becker interwove with the history of the cities in which they lived and performed. Rio de Janeiro at the end of the nineteenth century, at once the political and cultural centre of the young Republic, discovered in theatre the symbolic form most able to represent the delights and contradictions of Brazil's modernization. A kind of good-humoured chronicle of everyday Rio de Janeiro, the revues of the year shuffled the hierarchies existing in the theatrical arts, transversally affecting the population's sensibility and imagination. Dramatic, farcical, comic and invariably musical, the theatre was as porous as the city that it sought to present to the public every night.

Chiquinha Gonzaga made this theatre her profession. Always dressed soberly to avoid a pronounced femininity, she coordinated the aspects of her personal presentation with the male expectations of her social circle as though she wished to become indistinct from her colleagues. The erasure of the marks of gender took as its substrate a body whose generational distance separated her from most of her peers. It was also anchored in the effects of the fame achieved by Chiquinha over her career. Knowing the piano "inside and out" (Pinto, 2009: 42), she occupied a place in the division of musical labour associated with the image of someone whose technical mastery of her art is sufficient to conduct an orchestra alone.

Sustained by the text, grounded in the disciplined work of the cast and the overview of the director, the modern theatre in São Paulo translated into a [...]

NOTES

1 Without losing sight of the distinction between the 'first' actresses, in general the bosses of their companies, and the other performers, the historian Anne Martin-Fugier (2001) shows that, in France, over the course of the nineteenth century, an important change occurred in the social status and work conditions of these women. An eloquent example in this respect is given by the diverse treatment that they received on being buried. While in 1730 the Church refused to grant a Christian burial to the tragic actress Adrienne Lecouvreur, in 1923 Sarah Bernhardt was buried in the Panthéon, almost like a Head of State, such was the volume of people who went to bid farewell to her (more than 30,000 according to the newspapers of the time) and the personalities present at the funeral ceremony.

2 For an analysis of the social construction of female musical vocations, see the excellent work of Dalila Vasconcellos de Carvalho (2012).

3 Azevedo, 1899a.

4 The excerpt from the play in question refers to the moment when Dr. Rank, a friend of the Helmers, announces, with an easy air, that he will die within a matter of weeks from spinal tuberculosis.

5 According to the overview given in the editorial of Revista Theatral, in 1895 the city of Rio de Janeiro possessed nine theatres: "S. Pedro and Lyrico, vast and well-suited to grand opera," as well as the Phenix Dramatica, Eden-Theatro, Teatro Sant'Anna, Teatro Lucinda, Teatro Apollo, Teatro de Variedades and Recreio Dramático.

6 Data from the second census conducted in Brazil. Available at <http://seculoxx.ibge.gov.br/populacionais-sociais--politicas-e-culturais/busca-por-temas/educacao>.

7 For a more in-depth analysis of Arthur Azevedo's career in relation to the city of Rio de Janeiro, see Siciliano (2014).
8 Written as a commission, O Rio de Janeiro de 1877 was the first revue by Arthur Azevedo. In it we can perceive the structure that would consecrate the genre over the 1880s and 1890s: an abundance of characters with suggestive names - like 'Politics,' 'Rumour,' 'City Improvements' - combined with a fragmented comic narrative alluding to everyday life in the city.

9 In his view, "in drama or comedy, our artists generally have no idea of the characters or the feelings that they are playing. What drives away the spectator is not the play itself, but the way in which it is staged and acted" (Azevedo, 1894).

10 The transition from a social structure based on the 'precapitalist' mechanisms of the old regime to a properly capitalist structure, generating the psychocultural developments of the 'bourgeois era' in Brazil, dates from the 1930s, with the breaking of the oligarchical pact and the alliance between an incipient bourgeoisie and the State (Fernandes, 2005: 241).

11 The first of them was Jacinto Ribeiro do Amaral, whom she married in 1863, at the age of 16, and with whom she had three children: João Gualberto, Hilário and Maria. In 1870, already living with João Batista de Carvalho, she decided to move to the interior of the state. After the birth of Alice, Chiquinha returned to the capital, leaving behind her new partner and their recently born daughter.

12 In an article published on January 7th 1897 in the newspaper A Notícia, the playwright Arthur Azevedo counted 1,896 shows in the city of Rio de Janeiro, 59.13% of them corresponding to presentations of revues of the year, operettas, magic shows and zarzuelas.

13 Although regions like Avenida Rio Branco and Praça Tiradentes maintained "a central position in the panorama of mass entertainment in the federal capital in the 1920s, [they] were far from monopolizing it" (Gomes, 2004: 58). Indeed, the 1920s saw the multiplication of spaces dedicated to offering the public a wide variety of entertainment at different prices, dominated by the city centre but also extending to the districts of São Cristóvão, Botafogo, Méier, Jardim Botânico and Tijuca.

18 Decio de Almeida Prado's striking assertion should be read with a degree of caution. On one hand, because the traditional companies and genres, though challenged by the presence of the amateurs and the TBC, "remained active and successful at the box office" - in the words of the anonymous reviewers of our article, whom we thank for the careful reading and precise observations. On the other hand, because of the revision of this evaluation made by the recent literature on the history of Brazilian theatre. On this point, see Iná Camargo Costa (1998) and Tânia Brandão (2014).
2019-05-11T13:05:57.413Z
2017-08-01T00:00:00.000
{ "year": 2017, "sha1": "b656977d08247c1b9ae24094c8b0f05c8fb25e7a", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/sant/v7n2/2238-3875-sant-07-02-0491.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "b656977d08247c1b9ae24094c8b0f05c8fb25e7a", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Art" ] }
227172532
pes2o/s2orc
v3-fos-license
Cell-to-cell heterogeneity in Sox2 and Brachyury expression guides progenitor destiny by controlling their movements

Although cell-to-cell heterogeneity in gene and protein expression has been widely documented within a given cell population, little is known about its potential biological functions. We addressed this issue by studying posterior progenitors, an embryonic cell population that is central to vertebrate posterior axis formation. These progenitors are able to maintain themselves within the posterior region of the embryo or to exit this region to participate in the formation of the neural tube or paraxial mesoderm tissues. Posterior progenitors are known to co-express transcription factors related to the neural and mesodermal lineages, e.g. Sox2 and Brachyury (Bra), respectively. In this study, we find that expression levels of the Sox2 and Bra proteins display a high degree of variability among posterior progenitors of the quail embryo, highlighting the spatial heterogeneity of this cell population. By over-expression/down-regulation experiments and time-lapse imaging, we show that Sox2 and Bra are both involved in controlling progenitor motility, acting however in opposite ways: while Bra is necessary for progenitor motion, Sox2 tends to inhibit cell movement. Combining mathematical modeling and experimental approaches, we provide evidence that the spatial heterogeneity of posterior progenitors, with regard to their expression levels of Sox2 and Bra and thus to their motile properties, is fundamental to maintaining a pool of resident progenitors while others segregate to contribute to tissue formation. As a whole, our work reveals that heterogeneity among a population of progenitor cells is critical to ensure robust multi-tissue morphogenesis.

Introduction

Cells are the functional units of living organisms. During embryogenesis, they divide and specify into multiple cell types that organize spatially into tissues and organs. Specification events take place under the influence of the cell's own history and of environmental cues. Over the last years, access to new technologies has revealed that embryonic cells often display an unappreciated level of heterogeneity. For instance, gene expression analyses suggest that, within the same embryonic tissue, cells that were thought to be either equivalent or different are actually organized into a continuum of specification states (1,2). The impact of this new level of complexity on morphogenesis has not been extensively explored, owing to the difficulty of experimentally manipulating expression levels within targeted populations of cells in vivo.

The progenitor cells located at the posterior tip of the vertebrate embryo (called here posterior progenitors) constitute a great model to study how a population of stem cell-like cells becomes specified into distinct cell types. The use of fluorescent tracers in bird and mouse embryos revealed that these progenitors contribute to the formation of the presomitic mesoderm (PSM), the mesodermal tissue that gives rise to the muscles and vertebrae, and of the neural tube, the neuro-ectodermal tissue that gives rise to the central nervous system (3-6). These experiments have also shown that while some cells leave the progenitor zone, other progenitors remain resident in this area. Grafting experiments later confirmed these properties by showing that these resident progenitors are indeed capable of self-renewal while giving progeny in different tissues (7,8).
The posterior region therefore contains a population of different types of progenitors able to give progeny in the mesodermal or the neural lineages along the antero-posterior axis of the vertebrate embryo. Retrospective clonal analysis studies performed in the mouse embryo (9) confirmed the heterogeneity of this progenitor population. Indeed, these studies revealed the existence of single progenitors able to give rise to only neural or only mesodermal cells, but also pointed to the existence of a third type of progenitor able to generate both neural and mesodermal cells. These bi-potential progenitors, named neuro-mesodermal progenitors (NMPs), have later been shown to exist in zebrafish (10) and in bird embryos (48,50,51). In the early bird embryo (stage HH4-7), the future posterior progenitors are initially located in anterior epithelial structures: the epiblast and the primitive streak. At later stages (stage HH8 and onward), these progenitors are re-located caudally in the embryo, in a dense mesenchymal structure that prefigures the embryonic tailbud, where they give rise to their progeny in the neural tube and PSM (48, 11). To sustain the formation of the tissues that compose the body axis, the posterior progenitor population must strike an equilibrium between maintenance and specification/exit from the progenitor zone.

Two transcription factors, Sox2 (SRY sex determining region Y-box 2) and Bra (Brachyury), have been described for their roles in neural and mesodermal specification, respectively, during embryonic development (12,13). Sox2 is known to be expressed in the neural progenitors that form the neural tube, where it contributes to the maintenance of their undifferentiated state. Its involvement in neural specification has also been revealed by a study showing that ectopic expression of Sox2 in cells of the presomitic mesoderm is sufficient to reprogram these cells, which then adopt a neural identity (14). The Bra protein was initially identified for its essential function in the formation of the paraxial mesoderm during the posterior extension phase (12,15). Its crucial role in mesodermal specification has been demonstrated, in particular, by the phenotypic study of chimeric mouse embryos composed of both Bra mutant and wild-type cells, in which only wild-type cells are capable of generating mesodermal cells (16). More recent studies have shown that the Sox2 and Bra proteins are expressed in posterior progenitors of developing embryos, indicating that activation of their expression takes place in progenitor cells before these cells colonize the neural tube or presomitic mesoderm (17,18). In addition, these studies have shown that both proteins are co-expressed in progenitor cells, an observation consistent with the presence of bi-potential progenitors in this tissue. Work done in the mouse embryo and in in vitro systems derived from embryonic stem cells indicates that Bra and Sox2 influence the choice between the neural and mesodermal lineages through their antagonistic activities on the regulation of neural and mesodermal gene expression (19).

The purpose of this study is to further understand the relations between specification and tissue morphogenesis within the progenitor region of the developing vertebrate embryo. In particular, we want to investigate which cellular mechanisms can underlie the maintenance of enough progenitor cells while others contribute to the elongating paraxial mesoderm and neural tissues.
Analyzing relative levels of protein expression, we show that the specification factors Sox2 and Bra are expressed with a high degree of spatial heterogeneity in the progenitor region of the quail embryo. Over-expression and down-regulation experiments show that the ratio of Sox2/Bra expression is key to maintaining progenitors posteriorly and to controlling the exit of progenitor cells towards their destination tissues. Using time-lapse experiments, we show that progenitor cells display diffusive migration properties comparable to those of the PSM tissue. We developed a mathematical model to explore how heterogeneous expression of Bra and Sox2, combined with their opposite action on non-directional motility, can drive both progenitor maintenance and multi-tissue morphogenesis. By over-expression and down-regulation in vivo experiments, we further validate that Sox2 inhibits progenitor motility whereas Bra promotes it. Finally, using our modeling approach, we propose that spatial heterogeneity plays an important role in maintaining progenitor tissue shape during axial elongation.

Results

Levels of Sox2 and Bra proteins display high spatial cell-to-cell variability in the posterior zone

The transcription factors Sox2 and Bra are known to be co-expressed in cells of the progenitor zone (PZ) (17,18). As they specify from these progenitors, neural cells maintain Sox2 expression and downregulate Bra, while mesodermal cells downregulate Sox2 and maintain Bra. Although Sox2 and Bra are recognized to be key players in driving neural and mesodermal cell fates, the spatial and temporal dynamics of these events remain to be elucidated. As a first step to address this question, we carefully examined expression levels of the two proteins in the PZ of quail embryos at stage HH10-11. As expected, analyses of immunodetection experiments revealed co-expression of Sox2 and Bra in the nuclei of most, if not all, PZ cells (Fig. 1A-C) (n=8 embryos). Noticeably, we observed a high heterogeneity in the relative levels of the Sox2 and Bra proteins between neighboring cells. Indeed, in the progenitor zone, we found intermingled cells displaying high Sox2 (Sox2 high) and low Bra (Bra low) levels or, conversely, Bra high and Sox2 low levels, as well as cells in which both proteins appear to be at equivalent levels. This cellular heterogeneity was very apparent when compared to the adjacent nascent tissues, i.e. the neural tube and the PSM, where Sox2 and Bra protein levels were found to be very homogeneous between neighboring cells (Fig. 1D-F). We detected such cell-to-cell heterogeneity in progenitors as early as stage HH5-6, a stage corresponding to the initial activation of Sox2 and Bra co-expression in the quail embryo (Supplemental Fig. 1). We also observed heterogeneous levels of Sox2 and Bra proteins in PZ cells of the chicken embryo, indicating that this is not a feature specific to the quail (Supplemental Fig. 2). To infer how Sox2 and Bra protein levels go from being co-expressed in a heterogeneous manner in the PZ to being expressed homogeneously in the nascent tissues, we analyzed variations of their respective levels in a series of seven volumes (each containing around 100 cells) located along a posterior-to-anterior path (from the PZ to the maturating tissues) corresponding to putative PSM or neural tube trajectories (Figure 1G-G'). The data showed that the expression level of Sox2 increases (2.22-fold, n=7 embryos) while that of Bra decreases (3.81-fold) along the neural path (Figure 1G).
In contrast, along the paraxial mesoderm path, the Sox2 level decreases (2.12-fold, n=7 embryos) while the Bra level first increases in the posterior PSM (1.14-fold, position 1 to 2) and decreases later on (5.06-fold, position 2 to 7) (Figure 1G'). Next, to define whether the cellular heterogeneity found in the PZ depends more on the variability of one of the two transcription factors, we quantified protein levels per nucleus in cells populating the PZ. By plotting Sox2 and Bra levels in individual cells, we noticed a broader distribution for Sox2 levels (coefficient of variation of 41.8%) compared to Bra levels (coefficient of variation of 30.75%) (Fig. 1H), indicating that cell-to-cell heterogeneity in the PZ is preferentially driven by differences in Sox2 levels. To better quantify and visualize Sox2 and Bra heterogeneity, we calculated the Sox2-to-Bra ratio (Sox2/Bra) for each cell of the PZ and compared it to the neural tube and the PSM. Comparison of Sox2/Bra values showed high divergences between the three tissues and confirmed the high heterogeneity previously observed in PZ cells (Fig. 1I). It must however be noted that these quantitative data revealed a broad range of cell distribution, highlighting in particular the presence of cells in the PZ displaying Sox2/Bra values similar to those of mesodermal or neural cells. We next asked whether the cellular heterogeneity caused by differences in Sox2 and Bra levels is present in the whole volume of the PZ or displays regionalization within this tissue. To do so, we analyzed the spatial distribution of the same Sox2/Bra values on optical transverse sections performed at anterior, mid and posterior positions of the PZ (Figure 1J-J'''). This analysis confirmed that heterogeneous Sox2/Bra values are equally represented in the mid area of the PZ (Figure 1J''). Cells with a high ratio (Sox2 high Bra low) were found to be more represented in the most dorso-anterior part of the PZ (Figure 1J'), and cells with a low ratio (Bra high Sox2 low) were found to be more represented in the most posterior part of the PZ (Figure 1J'''). This particular antero-posterior distribution was further confirmed by tissue expression analysis (Supplemental Fig. 3). However, variations of Sox2/Bra values were also noticed in these areas, indicating that cell-to-cell heterogeneity is present throughout the PZ. Altogether, our data, highlighting significant variability in Sox2 and Bra protein levels within progenitors of the PZ, evidence an unexpected cell-to-cell heterogeneity of this cell population. Noticeably, despite an overall enrichment of Sox2 high cells in the dorsal-anterior part of the PZ and of Bra high cells in its most posterior part, no clear spatial regionalization of these cells was detected, indicating that the PZ is composed of a complex mixture of cells displaying variable Sox2/Bra levels. This variability is further lost as cells enter the neural tube or the PSM.
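As an illustration of this kind of per-nucleus quantification, a minimal Python sketch is given below. It is not the authors' analysis pipeline: the intensity arrays are synthetic placeholders, with lognormal spreads chosen so that the coefficients of variation land near the reported 41.8% and 30.75%.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic background-corrected mean nuclear intensities for ~100 PZ cells;
# in practice these would come from 3D nuclear segmentation of the Sox2 and
# Bra immunostainings.
sox2 = rng.lognormal(mean=3.0, sigma=0.4, size=100)
bra = rng.lognormal(mean=3.0, sigma=0.3, size=100)

def coefficient_of_variation(x):
    """Coefficient of variation in percent (sample SD over mean)."""
    return 100.0 * x.std(ddof=1) / x.mean()

print(f"CV Sox2: {coefficient_of_variation(sox2):.1f}%")  # broader spread
print(f"CV Bra:  {coefficient_of_variation(bra):.1f}%")

# Per-cell Sox2-to-Bra ratio, the quantity compared between PZ, neural tube
# and PSM cells.
ratio = sox2 / bra
print(f"Sox2/Bra: median {np.median(ratio):.2f}, "
      f"IQR {np.percentile(ratio, 25):.2f}-{np.percentile(ratio, 75):.2f}")
```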
Relative levels of Sox2 and Bra in PZ cells influence their future tissue distribution

The fact that the cell-to-cell heterogeneity caused by differences in Sox2 and Bra levels is observed in PZ cells but not in PSM and neural tube cells is suggestive of a role for these relative protein levels in the decision to leave or not the PZ and to locate in a specific tissue. To test this possibility, we developed functional experiments aiming at increasing or decreasing Sox2 and Bra levels in PZ cells. To do this, we performed targeted electroporation of progenitors in the anterior primitive streak/epiblast of stage HH5 embryos to transfect expression vectors or morpholinos, and further analyzed the subsequent distribution of targeted cells, focusing on the PZ, the PSM and the neural tube (Figure 2). As early as 7 hrs after electroporation, we could detect the expected modifications of Sox2 or Bra expression in PZ cells for both over-expression and down-regulation experiments (Supplemental Figures 4, 5). We observed a significant decrease in the Sox2/Bra levels either by overexpressing Bra or by downregulating Sox2, while this value increased when Sox2 was overexpressed or when Bra was downregulated (Figure 2A,F). Consistent with previous studies (19), we found that the cross-repressive activities of Sox2 and Bra contributed to amplifying such modifications of the ratio (Supplemental Figure 5). After transfection of the expression vector or morpholino, we let the embryos develop until stage HH10-11 and examined fluorescent cell distribution in the different tissues. For this, we measured the fluorescence intensity of the reporter protein (GFP) in the PZ, the PSM and the neural tube and calculated the percentage of fluorescence in each tissue. We obtained reproducible data using the control expression vector, with less than 20% of the fluorescent signal found in the PZ (16.78% ± 2.83), a little more than 20% in the PSM (22.64% ± 3.30) and about 60% in the neural tube (60.57% ± 4.39) (Figure 2B,E). We next found that overexpression of Bra leads to a marked reduction of the fluorescent signal in the PZ (1.17% ± 0.57) and to an increased signal in the PSM (33.04% ± 4.06) but has no effect on the neural tube signal (Figure 2C,E). Elevating Bra levels is thus sufficient to trigger cell exit from the PZ and to drive cells to join the PSM. However, it is not sufficient to impede the contribution of PZ cells to the forming neural tube. Similarly, we found that overexpression of Sox2 drives exit of cells from the PZ (1.16% ± 0.67), favoring their localization in the neural tube (75.40% ± 4.57) without significantly affecting the proportion of cells in the PSM (Figure 2D,E). The spatial distribution of the fluorescent signals obtained using control morpholinos appeared very similar to that observed using the control expression vector (18.93% ± 3.06, 22.68% ± 4.09 and 58.38% ± 3.63 for the PZ, the PSM and the neural tube, respectively). We found that downregulation of Bra leads to exit of cells from the PZ (4.16% ± 1.57) and favors cell localization in the neural tube (88.23% ± 2.04) at the expense of the PSM (7.59% ± 1.81) (Figure 2).

These data thus point out that the relative levels of the Sox2 and Bra proteins are a key determinant of the choice of PZ cells to stay in the PZ or to leave it and enter more mature tissues. Major changes in the Sox2-to-Bra ratio, tending either towards higher or lower values, are sufficient to trigger cell exit from the PZ. These experiments also point to the critical influence of the relative levels of Sox2 and Bra in controlling the final destination of cells exiting the PZ, with Sox2 high (Bra low) cells and Bra high (Sox2 low) cells preferentially integrating the neural tube and the PSM, respectively. Intriguingly, we also observed that the preferential colonization of one given tissue by cells is not always clearly associated with a depletion of cells in the other (see Discussion).
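The tissue-distribution readout used above reduces to summing reporter fluorescence inside each tissue region and normalizing by the grand total. A minimal sketch follows; the image and tissue masks are hypothetical stand-ins for segmented PZ, PSM and neural tube regions.

```python
import numpy as np

def tissue_distribution(gfp, masks):
    """Percentage of total GFP fluorescence falling within each tissue mask.

    gfp   : array of reporter fluorescence intensities (2D or 3D)
    masks : dict mapping tissue name -> boolean array of the same shape
    """
    totals = {name: float(gfp[mask].sum()) for name, mask in masks.items()}
    grand_total = sum(totals.values())
    return {name: 100.0 * t / grand_total for name, t in totals.items()}

# Hypothetical example with three rectangular regions standing in for tissues.
gfp = np.random.default_rng(1).random((60, 90))
masks = {name: np.zeros_like(gfp, dtype=bool) for name in ("PZ", "PSM", "NT")}
masks["PZ"][40:, :] = True     # posterior strip
masks["PSM"][:40, :30] = True  # lateral block
masks["NT"][:40, 30:] = True   # medial block

for tissue, pct in tissue_distribution(gfp, masks).items():
    print(f"{tissue}: {pct:.1f}% of total fluorescence")
```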
PZ cells are highly motile without strong directionality

To gain a better understanding of how progenitors either stay in or leave the PZ, we needed to study their behavior dynamically using live imaging. We electroporated stage HH5 quail embryos with a control vector encoding nuclear GFP and performed time-lapse imaging from stage HH8 to stage HH12. We focused our interest on the PZ but also on the PSM and the posterior neural tube, in order to compare migration properties between tissues (Figure 3A, Supplemental Movie 1). Because these three tissues undergo a global posteriorly directed movement due to embryonic elongation, we generated two types of cellular tracks: the raw movement, in which the last formed somite is set as a reference point, and the "corrected" movement, in which cellular movements are analyzed relative to the region of interest (Figure 3B). Tracking cell movements allowed for quantification of motility distribution, directionality of migration and time-averaged mean squared displacement (Figure 3C-E) (n = 7 embryos). First, we noticed that PZ cell raw motilities are higher than PSM or neural tube cell motilities (Figure 3C, top panel). PZ raw directionality is also more pronounced in the posterior direction (Figure 3D, upper panel). These results confirm that the PZ moves faster in the posterior direction than surrounding tissues, as previously measured using transgenic quail embryos (20). Analysis of local (corrected) motility reveals that PZ cells move globally as fast as PSM cells and significantly faster than neural tissue cells (Figure 3C, bottom panel). The distribution of corrected PZ cell motilities nevertheless differs from that of PSM cells in that it includes more slowly moving cells (the PZ corrected-motility violin plot in Figure 3C is wider for slow values than its PSM counterpart; Supplemental Figure 6). After tissue correction, the directionality of PSM cell motion was found to be mostly non-directional, as previously described (20), with a slight anterior tendency, which is expected for a posteriorly moving reference (Figure 3D, red plot in the lower panel). The distribution of corrected angles of PZ cell motilities is also globally non-directed, with the exception of a slight anterior tendency (to some extent stronger than in PSM cells), suggesting that our method is able to detect trajectories of cells exiting the region of interest (Figure 3D, yellow plot, lower panel). As PZ cell movement was found to be mostly non-directional, we wanted to better characterize its diffusive nature by plotting mean squared displacements (MSD), measured in each tissue over time, as previously done for PSM cells (20). This analysis showed that progenitor MSD is linear after tissue subtraction, as high as the PSM cell MSD and significantly higher than the neural tube cell MSD, thus demonstrating the diffusive nature of PZ cell movements (Figure 3E). Together, these data show that, in the reference frame of the progenitor region, PZ cell migration is diffusive, without strong directionality (except a slight anterior tendency), with an average motility comparable to that of PSM cells and significantly higher than that of neural tube cells.
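A minimal sketch of the two tracking quantities used here, the tissue-corrected trajectories and the time-averaged MSD, is given below. The array shapes are assumptions for illustration; the paper's analysis was done with ImageJ plugins and a Matlab routine:

```python
import numpy as np

def correct_tracks(tracks, reference):
    """Subtract the tissue motion (e.g. tailbud/ROI drift) from raw
    trajectories, mirroring the 'corrected' movement in Figure 3B.

    tracks    : array (n_cells, n_frames, 2) of raw x,y positions
    reference : array (n_frames, 2), position of the moving reference
    """
    return tracks - reference[None, :, :]

def time_averaged_msd(tracks, max_lag):
    """Time-averaged mean squared displacement over all cells and times."""
    msd = np.zeros(max_lag)
    for lag in range(1, max_lag + 1):
        disp = tracks[:, lag:, :] - tracks[:, :-lag, :]
        msd[lag - 1] = np.mean(np.sum(disp**2, axis=-1))
    return msd

# For diffusive motion the MSD grows linearly with lag time; in 2D the
# slope of a linear fit gives an effective diffusion coefficient (MSD = 4Dt).
```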
Modeling spatial cellular heterogeneity and tissue morphogenesis

To better understand how heterogeneity in Sox2/Bra levels can control the choice of progenitors to stay in or leave the PZ, we designed an agent-based mathematical model (Figure 4). The main hypothesis of this model is that motility in a given progenitor is directly driven by its Sox2/Bra value: Bra promotes non-directional motility whereas Sox2 inhibits it. We chose initial conditions in which the neural tube, the PSM and the axial progenitor region are already formed. This choice was motivated by the fact that our experiments focus on a developmental window that is past the formation of these tissues (HH8-HH12). At this period of development the main tissue deformations occur along the antero-posterior and medio-lateral axes (21), so we designed a 2D model (X,Y) evolving through time. We implemented cell numbers, proliferation, and tissue shape to be as close as possible to biological measurements (21) (Supplementary data). We set up PZ cells to express random and dynamic Sox2/Bra levels with a defined probability to switch into a Sox2-high (Bra-low) state (neural tube state) or a Bra-high (Sox2-low) state (PSM state) (Figure 4A). To take into account our biological observation that PZ cells are as motile as PSM cells, we set the threshold at which motility switches from high to low at a Sox2/Bra level of 1.6 (the level above which cells become committed to the neural fate). Finally, to account for the physical boundaries existing between tissues, we integrated a non-mixing property between cell types. We first verified that our mathematical model (Supplemental Movie 2) recapitulates the basic properties of the biological system: in particular, it recreates the spatial heterogeneity of relative Sox2/Bra levels in cells of the PZ (Figure 4B) and reproduces general trends in tissue motility and non-directionality (Figure 4C,D). Simulations of our model also showed that relative cell numbers (taking proliferation into account) evolve as expected, with a stable number of PZ cells and an increase in neural tube and PSM cells (Figure 4E). To check whether spatial heterogeneity of Sox2/Bra levels can self-organize in our model, we ran a simulation in which every progenitor starts with equivalent levels of Sox2 (50%) and Bra (50%). We observed that spatial heterogeneity emerged after a few time points and persisted throughout the simulation, suggesting that this feature is indeed able to self-organize independently of the initial Sox2/Bra levels (Supplemental Movie 3). We next explored the model's ability to reproduce the maintenance of progenitors and the elongation of posterior tissues during the elongation process. By looking at different time points in the simulation, we observed that the progenitor region stays posterior to the neural tube, and that the neural tube and paraxial mesoderm both extend posteriorly (Figure 4F, Supplemental Movie 2). These results show that our mathematical model is able to reproduce the main properties of our biological system. Interestingly, simulations also showed dynamic deformation of the progenitor region, which adopts asymmetric shapes and then returns to symmetric ones, highlighting self-corrective properties of the system.
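The core rule of this agent-based model, a dynamic per-cell ratio coupled to a motility switch at 1.6, can be sketched as follows. The ODE, noise level and speeds are placeholder choices, not the paper's exact parameters (which are given in its supplementary methods):

```python
import numpy as np

rng = np.random.default_rng(1)

N_PROG = 1100        # initial progenitor number, as in the model
THRESHOLD = 1.6      # Sox2/Bra value above which motility switches low
DT = 0.1

# Each progenitor carries a ratio R in (0, 2); intermediate values keep
# cells in the PZ, extremes commit them to neural tube (high) or PSM (low).
ratio = rng.uniform(0.3, 1.7, size=N_PROG)
pos = rng.uniform(0.0, 1.0, size=(N_PROG, 2))

def step(ratio, pos):
    # Noisy first-order relaxation of the ratio (placeholder dynamics)
    ratio = ratio + DT * (1.0 - ratio) + rng.normal(0.0, 0.15, ratio.shape)
    ratio = np.clip(ratio, 0.0, 2.0)
    # Motility rule: low/medium ratio -> high random motility, high -> low
    speed = np.where(ratio < THRESHOLD, 0.05, 0.005)
    angle = rng.uniform(0.0, 2.0 * np.pi, ratio.shape)
    pos = pos + speed[:, None] * np.stack([np.cos(angle), np.sin(angle)], axis=1)
    return ratio, pos

for _ in range(100):
    ratio, pos = step(ratio, pos)
```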
To challenge this model further and test whether it can recapitulate the experimental results we obtained by overexpression and downregulation of Sox2 and Bra function, we explored the consequences of numerically deregulating Sox2/Bra values on tissue and cell behavior. As a result, Bra-high values increase PZ cell motility in the model (Figure 4G) and lead to more PSM cells (Figure 4H). The results of our numerical simulations are therefore coherent with our experimental data showing that Sox2/Bra levels control the balance between the maintenance of progenitors in the PZ and the continuous distribution of cells into the neural tube and the PSM. Together, the results obtained with our model strongly suggest that progenitor behavior can be guided by creating Sox2/Bra-dependent heterogeneous cell motility.

Sox2 and Bra control the motility of PZ cells

Our mathematical model suggests that the control of cellular motility by Sox2/Bra levels is critical to influence progenitor behavior (staying in or leaving the PZ toward the neural tube or the PSM). To test this prediction, we needed to determine whether Sox2 and Bra are indeed involved in controlling progenitor motility in vivo. We therefore performed overexpression and downregulation experiments followed by time-lapse imaging in quail embryos (Figure 5A-F). We focused on monitoring the PZ to define how cells behave in this region, either staying in or leaving this tissue. As for control embryos (Figure 3), we first monitored raw cell motilities (Supplemental Figure 7) and subtracted the tissue motion to gain insight into local motility and directionality (Figure 5). We found that Bra-overexpressing PZ cells display higher motility without significant differences in directionality compared to control embryos. By contrast, when PZ cells overexpress Sox2, we detected a significant reduction of their motility compared to controls, accompanied by an anterior bias in angle distribution (Figure 5B,C,D, Supplemental Movie 6). Bra downregulation leads to a similar significant reduction of cell motility, as well as a change in directionality toward the anterior direction. Conversely, Sox2 downregulation did not result in a significant effect on average cell motility or directionality, even though a tendency towards a slight increase in motility was noticed (Figure 5B,E,F, Supplemental Movie 7). These data, showing that changing the respective levels of Sox2 and Bra is sufficient to modulate the motility/migration properties of PZ cells, highlight a key role for these transcription factors in controlling PZ cell movements, with Sox2 inhibiting and Bra promoting cell motility. In that sense, these results validate the regulation of motility by Sox2/Bra levels hypothesized in our mathematical model. When cells have high Sox2/Bra levels they migrate less and are left behind by the posteriorly moving PZ to be integrated into the neural tube. When cells have a low Sox2/Bra ratio they tend to migrate more, mostly in a diffusive manner, explaining how they leave the PZ to be integrated into the surrounding PSM.

Modeling the importance of spatial heterogeneity in morphogenesis

Our experimental and theoretical data point out that heterogeneity in Sox2 and Bra expression is key to controlling progenitor behaviors during axis elongation. However, we do not yet know the importance of the spatial randomization of this heterogeneity (the fact that we observe very different Sox2/Bra levels in neighboring cells without a clear pattern) in comparison to a heterogeneity that is spatially organized.
To assess this particular point, we created a second mathematical model in which Sox2 and Bra levels, instead of being randomly distributed, are patterned in two opposite gradients, as we observed in the dorsal part of the PZ (Figure 6A, Figure 1J, and Supplemental Figure 3). In this area, Sox2 displays an antero-posterior decreasing gradient while Bra displays an opposite antero-posterior increasing gradient. This new version of the model was set up to reproduce the same dynamics of cell specification (Figure 6B) and the same relationship between Sox2/Bra and motility as the previous model (Supplemental methods). We observed that the motilities of the PSM and PZ in the gradient model are globally comparable to those of the spatially heterogeneous model (Figure 6C, 4C) and mostly non-directional (Supplemental Figure 8). We observed maintenance of PZ cells caudally and elongation of the different tissues (Figure 6D, Supplemental Movie 8), suggesting that a gradient in Sox2 and Bra expression can also explain progenitor maintenance and distribution. Despite these similarities, we noticed several crucial differences between our heterogeneous model and our biologically inspired gradient model. Indeed, the speed of elongation is lower in the gradient simulation (0.8 a.u.) than in the spatially heterogeneous simulation (1.2 a.u.) (Figure 6E). To check whether, at the cellular level, resident progenitors are displaced differently in the posterior direction between the two models, we tracked cells at the center of the PZ and calculated the distances they travelled in the Y direction. This analysis showed that resident progenitors move more posteriorly in the heterogeneous model (mean distance of 3.37 a.u.) than in the gradient model (mean distance of 3.18 a.u.), suggesting that the heterogeneous model is more efficient at maintaining resident progenitors posteriorly (Figure 6F). We also noticed fewer transient deformations and less self-corrective behavior of the PZ in the gradient simulation compared to the heterogeneous simulation (Supplemental Movie 9). By analyzing the PZ shape on longer time scales (beginning to end of the simulation) in each model, we found that the shape of the PZ changes considerably in the gradient simulation: it became larger (medio-laterally) and shorter (antero-posteriorly), showing less conservation of proportions (41% for the gradient model vs 86% for the heterogeneous model) (Figure 6G). Finally, to test whether the changes we observed could be due to changes in the diffusivity of cellular migration, we plotted the MSD through time for each model. We found that the MSD of the spatially heterogeneous model is higher than that of the gradient model (Figure 6H), suggesting that spatial heterogeneity in Sox2 and Bra expression enhances the diffusive behavior of PZ cells. This higher diffusivity can therefore bring more fluidity to the tissue, allowing it to remodel more efficiently, to maintain its shape on longer time scales, and to allow more posterior displacement of resident progenitors. Together, the data obtained from our mathematical models argue that spatial cell-to-cell heterogeneity is able to create an appropriate balance between the two possible decisions of a PZ cell, i.e. staying in place or leaving to contribute to adjacent tissues. Moreover, they argue that spatial cell-to-cell heterogeneity is important to allow tissue fluidity and remodeling in order to maintain PZ shape during axial elongation.
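The two initialization schemes compared here differ only in how ratios are assigned to positions, which can be illustrated with a short sketch (the numerical ranges are placeholders, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 1000
y = rng.uniform(0.0, 1.0, n_cells)   # antero-posterior position (0 = anterior)

# Spatially heterogeneous scheme: ratio independent of position
ratio_random = rng.uniform(0.3, 1.7, n_cells)

# Gradient scheme: Sox2/Bra decreases antero-posteriorly, mimicking the
# opposing Sox2 (anterior-high) and Bra (posterior-high) gradients
ratio_gradient = 1.7 - 1.4 * y + rng.normal(0.0, 0.05, n_cells)

# Under a shared motility rule, the random scheme scatters slow
# (neural-biased, ratio > 1.6) cells everywhere, while the gradient
# scheme concentrates them anteriorly
for name, r in [("random", ratio_random), ("gradient", ratio_gradient)]:
    frac = np.mean(r[y < 0.2] > 1.6)
    print(f"{name}: fraction of slow cells in the anterior 20% = {frac:.2f}")
```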
Discussion

Our results reveal the existence of spatial heterogeneity in the expression levels of Sox2 and Bra proteins in the posterior progenitors of bird embryos. Experimental and theoretical approaches converge toward the conclusion that spatial heterogeneity of the Sox2/Bra ratio plays essential roles in regulating progenitor maintenance and allocation to the neural tube or PSM through the control of cellular motility. Altogether, our data lead us to propose the following working model. Progenitor cells expressing intermediate levels of Sox2 and Bra stay motile and remain resident within the posteriorly moving progenitor tissue. Cells expressing high levels of Sox2 reduce their motility and are therefore forced to leave the progenitor zone to integrate into the neural tube, while progenitors expressing high levels of Bra "diffuse" more actively than other progenitors and exit the PZ to integrate the PSM. This model sheds new light on how specification and morphogenesis are coupled during vertebrate embryo axis elongation and highlights the fact that heterogeneity can be a beneficial feature to ensure robustness in morphogenesis. The fact that we can detect different levels of Sox2 and Bra expression between progenitors indicates that axial progenitor cells are in different specification states: some progenitors are more engaged toward the mesodermal fate, some toward the neural fate, while others are still in between these states. These different specification states could explain why, in our gain- and loss-of-function experiments, preferential distribution of electroporated cells in the neural tube is not always paralleled by a decrease in participation in the PSM (or inversely) (Figure 2E,J). Indeed, a progenitor that is, for instance, already engaged toward a neural fate might no longer be competent to become mesoderm and might follow its path toward the neural tube even if experiencing a sudden rise in Bra level. Single-cell sequencing of the progenitor region of mouse embryos has revealed that different types of specification states co-exist within posterior progenitors (19). It is likely that the different states that we observe by visualizing different Sox2/Bra protein ratios are also defined by differential expression of mesodermal and neural genes. To test this hypothesis further, it would be interesting to analyze other neural and mesodermal genes and test whether they display heterogeneous patterns of expression in the progenitor region. Due to their technical limitations, single-cell studies do not reveal the exact locations of the different progenitor states found within the posterior region. The first heterogeneity that we observed is patterned spatially in a gradient along the antero-posterior axis (Sox2-high anteriorly, Bra-high posteriorly) in the dorsal part of the region. This graded expression has been described in the chicken embryo (22)(49) and is coherent with fate-mapping studies at earlier stages showing that the antero-posterior axis of the epiblast/streak gives rise to progeny along the medio-lateral axis (4,6). For instance, anterior cells expressing high levels of Sox2 can give rise to neural cells, and more posterior cells expressing high levels of Bra to PSM (and eventually to lateral mesoderm for cells located even more caudally). However, our analysis also reveals a spatial heterogeneity between neighboring cells in the progenitor region (dorsally and ventrally).
This finding is suggestive of a more complex picture in which position in the progenitor region does not systematically prefigure final tissue destination. In this case, the Sox2/Bra ratio would be determinant in assigning progenitor fate independently of the initial position. This scenario involves much more cell mixing and could help explain why prospective maps of this region have shown multi-tissue contribution (3)(4)(5)(6). In our functional experiments aimed at biasing PZ cells toward the neural state (pCIG-Sox2, Bra-Mo), we observed stronger effects on motility and tissue distribution than in experiments aimed at biasing PZ cells toward the mesodermal fate (pCIG-Bra, Sox2-Mo). These differences can be explained, and are reinforced, by our finding that the motilities of PZ cells are much more similar to PSM motilities than to those of the neural tube. Therefore, a change of progenitor behavior toward a neural tube behavior is much more likely to affect motility and progenitor distribution than a change toward a mesodermal state. The action of graded signaling pathways such as Wnt, FGF and RA has been described to positively regulate the expression of the Bra and Sox2 genes and to affect progenitor destiny (23)(24)(25)(26)(27)(28)(29)(30). It will be necessary and useful to test whether these pathways regulate Sox2 and Bra in posterior progenitors of quail embryos. However, no data allow us to say that the activity of these signaling pathways is sufficient to explain the spatial heterogeneity that we observed between neighboring cells in this region. Our data, as well as data from the literature, suggest the existence of a mutual repression mechanism between Sox2 and Bra in posterior progenitors (Supplemental Figure 5 and (19)). The expression of these transcription factors could therefore be controlled positively through signaling pathways and negatively by cross-inhibition activities. This synergy between signaling and cross-repressive activity could allow for a temporal dynamic responsible for the spatial heterogeneity we observe. In our mathematical model, we propose that the expression of Sox2 and Bra is spatially and temporally dynamic within progenitors. This hypothesis needs to be investigated with specific reporters and live imaging. However, if true, one could postulate that a progenitor that oscillates between a high-Bra/low-Sox2 state and a low-Bra/high-Sox2 state gives rise to progeny in the neural or mesodermal lineages depending on these ratios at the moment of cell division. Studies in mouse and recent studies in birds suggest that a single progenitor clone (true NMP) can give rise to progeny in both the neural tube and the paraxial mesoderm (9). Interestingly, the authors observed that the final positions of these progenies differ along the antero-posterior axis, suggesting a sequential production: progenitor cells give rise to mesoderm and then switch to neuroectoderm (or the opposite). An oscillation between cellular specification states of the NMP could therefore be a possible explanation for these observations. Recent studies have shown that during the course of axis elongation, axial progenitor cells undergo an EMT before reaching their full potential and giving rise to progeny in the neural tube and the paraxial mesoderm (23,31)(48). We observe that, if analyzed between stages HH5 and HH8, the progenitor region displays a low range of local movements in comparison to stages HH8 to HH11 (data not shown).
Therefore, it is likely that local motility is very low when progenitors are still epithelial and becomes high and non-directional when the progenitors become more mesenchymal and are relocated caudally. It is interesting, however, that even as the tissue becomes more mesenchymal at later stages, it keeps its global posteriorly directed tissue motion. Work in the bird embryo has shown that the posterior movement of the progenitor zone is the result of physical constraints exerted by the neighboring PSM and neural tube tissues (21,32). PSM expansion can indeed exert pressure on axial tissues and generate a general posterior movement of the progenitor zone. Our data indicate that the range of cellular motility of progenitors belonging to this moving region is key in determining their fate: high-Bra-expressing cells move actively and leave the region laterally, whereas high-Sox2 cells move less and are left behind in the neural tissue. In our experiments, overexpression of Sox2 and downregulation of Bra lead to an anterior bias in the direction of progenitor movements. However, if we consider the posteriorly moving progenitor region, this would correspond to an absence of movement for neural progenitors. Indeed, it has been proposed that the progressive depletion of progenitors committed toward the neural fate occurs without excessive cell mixing (33) and therefore may not require active migration. Although the role of Sox2 in progenitor cell migration has, to our knowledge, never been described, recent work has demonstrated that Sox2 is implicated in the transition from neural progenitor to neural tube during chick embryo secondary neurulation (22). Concerning Brachyury, it has previously been shown that this transcription factor has a role in cell migration: mouse cells carrying a mutation in the Brachyury gene have a lower migration speed than wild-type cells when isolated and cultured, explaining part of the mouse embryonic phenotype (34). The molecular mechanisms that act downstream of Sox2 and Bra to promote or inhibit cell migration remain to be discovered. From the results of our mathematical models, we can propose that both a spatially random and a patterned heterogeneity in Sox2 and Bra expression are able to maintain progenitors caudally and to guide their progeny into the neural tube and mesoderm. In our biological system we observe a superposition of spatially randomized and patterned distributions of Sox2/Bra levels. It is therefore tempting to suggest that both systems (spatially random and gradient) are at work in the embryo. Interestingly, little is known about the role of spatial heterogeneity in morphogenesis. We propose that this random pattern allows for more cell rearrangements/tissue fluidity in the progenitor zone. This fluid-like state, and the opposite solid-like state of the anterior PSM tissue, have been shown to be key for zebrafish embryo axis elongation (35,36). Furthermore, the fact that we observe more self-correction in our model also suggests that spatial heterogeneity can provide plasticity to the system. Interestingly, several studies have shown that this particular region of the embryo is able to regenerate after deletion of some of its parts (37,38). In that regard, spatial heterogeneity could be more easily re-organized in the remaining cells than gradients. Indeed, if gradients of Sox2 and Bra are controlled by secreted signals, one could imagine that the absence of tissue would be more detrimental to the diffusion of such signals and therefore to re-patterning.
Spatial heterogeneity in gene and protein expression is a common trait of living systems and has been found in many contexts, including cancer cells (39). The link between cellular spatial heterogeneity and the robustness of morphogenetic processes that we describe may therefore be relevant beyond the scope of developmental biology.

Materials and Methods

Quail embryos and cultures: Fertilized eggs of quail (Coturnix japonica), obtained from commercial sources, are incubated at 38°C at constant humidity and embryos are harvested at the desired stage of development. The early development of quail being comparable to that of chicken, embryonic stages are defined using quail (40) or chicken embryo development tables (41). Embryos are grown ex ovo using the EC (early chick) technique (42) for 6 to 20 hours at 39°C in a humid atmosphere.

Electroporation: We collected stage 4-6 quail embryos. The solution containing the morpholinos (1 mM) and empty pCIG (1-2 μg/μL) as a carrier, or the DNA solution containing the expression vectors pCIG, pCIG-Bra or pCIG-Sox2 (2-5 μg/μL), was microinjected between the vitelline membrane and the epiblast at the anterior region of the primitive streak (45). The electrodes were positioned on either side of the embryo and five pulses of 5.2 V, each of 50 ms duration, were delivered at intervals of 200 ms. The embryos were screened for fluorescence and morphology and kept in culture for up to 24 hours.

Image acquisition and processing: Image acquisition for immunodetection was performed using a Zeiss 710 laser confocal microscope (20x and 40x objectives). Quantification of Sox2 and Bra levels in 3D was performed with Fiji or with the spot function (DAPI staining) of Imaris. Expression was normalized to DAPI to take into account the loss of intensity due to depth; expression levels and ratios were calculated and plotted using Matlab. Protein expression after gain- and loss-of-function experiments was quantified 7 hours after electroporation by analyzing immunodetection signal levels within GFP-positive progenitors and normalizing to endogenous expression. Fluorescence distribution in tissues was acquired on a wide-field Axio-imager type 2 microscope (Colibri 8 multi-diode light source, 10x objective). The images of electroporated embryos were processed with the Zen software, which allows the assembly of the different parts of the mosaic ("Stitch" function). The images were then processed with the "Stack Focuser" plugin of the ImageJ software. The different tissues were delineated in ImageJ with the free-hand selection tool and the images were then binarized using the threshold tool. The total fluorescence intensity emitted by the cells transfected with the different constructs was measured and the sum of the positive pixels in the different tissues was calculated. The percentage distribution of fluorescence in the different tissues was then calculated.

Live imaging and cell tracking: Live imaging was performed using a Zeiss Axio-imager type 2 (10x objective) as described previously (31). Briefly, stage 7-8 electroporated embryos were cultured under the microscope at 38°C in a humid atmosphere. Two channels (GFP and brightfield), 3 fields of view and 10 Z levels were imaged every 6 minutes for each embryo (6 embryos per experiment). Images were stitched and in-focus pixels were selected using Stack Focuser (ImageJ). X,Y drift was corrected using MultiStackReg, adapted from TurboReg (ImageJ) (46).
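The DAPI normalization mentioned above can be illustrated with a minimal sketch; the idea is that depth-dependent attenuation affects all channels similarly, so dividing by DAPI yields an approximately depth-invariant value (a simplifying assumption for illustration):

```python
import numpy as np

def normalize_to_dapi(signal_per_nucleus, dapi_per_nucleus):
    """Normalize per-nucleus Sox2/Bra intensities to the DAPI channel.

    Dividing by DAPI compensates for the depth-dependent loss of
    intensity in confocal stacks: deeper nuclei appear dimmer in every
    channel, so the ratio is roughly depth-invariant.
    """
    signal = np.asarray(signal_per_nucleus, dtype=float)
    dapi = np.asarray(dapi_per_nucleus, dtype=float)
    return signal / dapi

# Toy illustration: 30% attenuation affects both channels equally,
# so the normalized value is unchanged
print(normalize_to_dapi([100.0, 70.0], [200.0, 140.0]))  # -> [0.5 0.5]
```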
Image segmentation was performed after background correction using the Background Subtractor plugin (from the MOSAIC suite in ImageJ) and cell tracking was done using the Particle Tracker 2D/3D plugin (ImageJ) (47). A reference point was defined for each frame at the last formed somite using manual tracking. Regions of interest were defined manually and their posterior movement was determined by manual tracking of the tailbud movement. Subtraction of the tissue movement was performed by subtracting the average motion of the cells in the region. Violin plots were generated in Prism 8 (GraphPad). MSD and angle distributions were calculated and plotted with a Matlab routine. Angle distributions were calculated from trajectories, weighted by velocities and plotted as wind-rose plots using Matlab.

Mathematical modeling: We initially distribute 1100 progenitor cells, 1200 neural cells, and 3200 PSM cells randomly in their respective areas. Each cell type is endowed with its own proliferation rate: 11.49 hours for progenitor cells, 10.83 hours for neural cells and 8.75 hours for PSM cells. Each cell I is characterized by its Sox2/Bra ratio R_I(t), between 0 and 1 (depicted as 0-2 in Figure 4A to match biological ratios), and by its position in 2D (x_I(t), y_I(t)); each of these variables is time-dependent. In the heterogeneous case, we initially attribute a random ratio between 0.15 and 0.85 to the progenitor cells. At each time step, these cells update their ratio through a first-order ODE using the function represented in Figure 5A (plus noise), and then update their position (x, y), depending on their ratio, by a biased/adapted random motion. Interaction properties between cells such as adhesion, maximum density and packing are implemented in the bias of the random motion, as detailed in the Supplementary Materials and Methods. In the simulations we represent a portion of the posterior body (1 unit = 150 micrometers), block the movement of cells in the most anterior region, which we consider a very dense area (somites, epithelium, neural tube), and block their passage to either side of the PSM, as we consider the lateral plate to be a solid structure.

Figure 1 legend. A-F: Sox2 (green) and Bra (red) expression analyzed at the cellular scale in the caudal part of a stage HH11 quail embryo, either in the progenitor region (A-C) or in the nascent neural tube and PSM (D-F). Overlay images are presented in C and F. Note that cell-to-cell heterogeneity in Sox2 and Bra levels is apparent in the progenitor region, with neighboring cells having higher Bra (red arrow), higher Sox2 (green arrow) or similar levels of both proteins (yellow arrow), which is not apparent in the nascent neural tube and PSM tissues. G, G': Levels of Sox2 and Bra in different embryonic locations corresponding to neural tube maturation (G) and paraxial mesoderm maturation (G'). Images on the left display the locations of the measurements (blue squares); charts on the right display the different levels measured. H: Distribution of normalized cell-to-cell expression of Sox2 and Bra in the progenitor region (n = 8 embryos). Coefficients of variation are higher for Sox2 (41.48%) than for Bra (30.75%). I: Cell distribution of Sox2/Bra levels in the progenitor zone (n = 9 embryos), the neural tube (n = 7 embryos) and the PSM (n = 8 embryos). J-J''': Representation of the Sox2-to-Bra ratio (green to red) in digital transverse sections (40 µm).

Model figure legend. A: Graphical representation of the mathematical function defining the Sox2/Bra ratio dynamics.
In each progenitor cell, the Sox2/Bra ratio oscillates randomly between 0.4 and 1.6; noise in the system ensures that some cells pass below 0.4 and specify into paraxial mesoderm (red) while some cells pass above 1.6 and become neural tube (green). Medium and low ratios (below 1.6) confer high motility; high ratios (above 1.6) confer low motility.
Should I Take Aspirin? (SITA): randomised controlled trial of a decision aid for cancer chemoprevention

Background: Australian guidelines recommend that people aged 50-70 years consider taking low-dose aspirin to reduce their risk of colorectal cancer (CRC).

Aim: To determine the effect of a consultation with a researcher before an appointment in general practice, using a decision aid presenting the benefits and harms of taking low-dose aspirin, compared with a general CRC prevention brochure, on patients' informed decision making and low-dose aspirin use.

Design and setting: Individually randomised controlled trial in six general practices in Victoria, Australia, from October 2020 to March 2021.

Method: Participants were recruited from a consecutive sample of patients aged 50-70 years attending a GP. The intervention was a consultation using a decision aid to discuss taking aspirin to reduce CRC risk, while control consultations discussed reducing CRC risk generally. Self-reported co-primary outcomes were the proportion of individuals making informed choices about taking aspirin at 1 month and low-dose aspirin uptake at 6 months, respectively. The intervention effect was estimated using a generalised linear model and reported with Bonferroni-adjusted 95% confidence intervals (CIs) and P-values.

Results: A total of 261 participants (86% of eligible patients) were randomised into trial arms (n = 129 intervention; n = 132 control). Of these participants, 17.7% (n = 20/113) in the intervention group and 7.6% (n = 9/118) in the control group reported making an informed choice about taking aspirin at 1 month, an estimated 9.1% (95% CI = 0.29 to 18.5) between-arm difference in proportions (odds ratio [OR] 2.47, 97.5% CI = 0.94 to 6.52, P = 0.074). The proportions of individuals who reported taking aspirin at 6 months were 10.2% (n = 12/118) in the intervention group versus 13.8% (n = 16/116) in the control group, an estimated between-arm difference of -4.0% (95% CI = -13.5 to 5.5; OR 0.68 [97.5% CI = 0.27 to 1.70, P = 0.692]).

Conclusion: The decision aid improved informed decision making but this did not translate into long-term regular use of aspirin to reduce CRC risk. In future research, decision aids should be delivered alongside various implementation strategies.

Introduction

In 2020, colorectal cancer (CRC) was the second most common cause of cancer deaths in Australia, and there were an estimated 1.9 million cases diagnosed globally.1,2 Meta-analyses of randomised controlled trials (RCTs) of low-dose aspirin have demonstrated reductions in the relative incidence and mortality of CRC of up to 25% and 33%, respectively.3
Meta-analyses of trials of aspirin for the primary prevention of cardiovascular disease (CVD) demonstrate a reduced risk of ischaemic stroke but an increased risk of non-fatal bleeding.4 The side effects of aspirin are well defined,5 but the likelihood of preventing death from cancer is 5-10 times greater than that of causing death from taking aspirin in this age group.6 These data informed Australian guidelines published in 2017, which recommend that clinicians consider prescribing low-dose aspirin for people aged between 50 and 70 years to prevent CRC.7

Decision aids are effective in general practice for communicating the benefits and risks of an intervention,8 particularly for preference-sensitive decisions. Decision aids can support informed choices about aspirin by individuals with Lynch syndrome (a genetic condition that increases cancer risk),9 and may also support this decision for the general population in primary care.

The Should I Take Aspirin? (SITA) trial is an efficacy trial of a consultation in general practice using a novel decision aid presenting the potential harms and benefits of low-dose aspirin for CRC and CVD prevention, measuring its effect on informed decision making and low-dose aspirin uptake compared with general information about CRC.

Method

Brief methods are presented here, summarising the published trial protocol.10

Participant inclusion and exclusion criteria

Participants were eligible if they were aged 50-70 years, literate in written English, and provided informed consent. Exclusion criteria included taking low-dose aspirin or an anticoagulant regularly, a previous diagnosis of CRC, a known genetic predisposition to CRC, or an extensive family history suggesting a genetic predisposition.11 An extensive family history includes ≥2 first- or second-degree relatives on the same side of the family with CRC, or ≥3 first- or second-degree relatives on the same side of the family with CRC or another Lynch syndrome-related cancer.11

Recruitment

The practices provided researchers with appointment lists of patients who were in the eligible age range and had an appointment booked the following day, to identify those who might be interested in taking part in the SITA trial. Once contacted, if they were interested and eligible, the researcher invited them to attend a consultation, either in a private consulting room at the general practice or via password-protected video-conferencing, before their planned GP consultation. Informed consent was obtained during the consultation, followed by baseline data collection and randomisation and, depending on the trial arm, delivery of either the intervention or the control protocol. Participants were informed that the trial was called 'The Bowel Cancer Prevention Study' and were not explicitly told that it was about aspirin. After this, patients had a consultation with their GP. Patients who refused or were ineligible were reassured that this would not be recorded in their medical records and that their clinical care would not be compromised.

Intervention

The intervention involved a consultation delivered by a trained research assistant, in which the decision aid was discussed before the participant's scheduled GP appointment. The decision aid is a sex-specific, tri-fold brochure that uses expected frequency trees to present the absolute changes in risk, in people taking daily low-dose aspirin, of the incidence of CRC, myocardial infarction, stroke, all-cause mortality, and gastrointestinal bleeding (see Supplementary Figures S1 and S2 for details).
The decision aid refers to the Cancer Council Australia guidelines, prompts participants to discuss their decision with their GP before commencing low-dose aspirin, and lists contraindications to aspirin use.12 In response to the COVID-19 pandemic, an alternative teletrial model was developed that involved a video version of the decision aid,10 which was shown to all intervention participants.13

Control

The control arm involved a consultation delivered by two research assistants before participants saw their GP, in which a 'Reduce your bowel cancer risk' tri-fold control brochure with an accompanying video was presented (see Supplementary Figure S3 for details). The control brochure and video focused on modifiable risk factors and CRC screening, with limited reference to low-dose aspirin.

Changes to trial method

There was a deviation from the published protocol. In the protocol, a short message service (SMS) was planned to be sent 2 weeks after randomisation to remind intervention participants to discuss taking aspirin with their GP. However, at the end of the trial, it was discovered that the messages had not been automatically dispatched as a result of a technical issue and no one had received an SMS.

How this fits in: Australian guidelines recommend that people aged 50-70 years take low-dose aspirin to help prevent colorectal cancer (CRC). When the guidelines were published in 2017, there was no formal plan to implement them in clinical practice. This randomised controlled trial tested the use of a decision aid in general practice to communicate the benefits and harms of aspirin compared with general information about ways to prevent CRC. Additional implementation strategies with greater engagement with GPs may be necessary to increase aspirin use for CRC prevention.

Study outcomes and measures

Outcomes were measured at baseline before randomisation and at 1 and 6 months after randomisation. The 1- and 6-month follow-up questionnaires sent to participants can be found in the SITA trial protocol paper, supplementary files I and J.10 The two co-primary outcomes were the difference between the study arms in the proportion of participants making an informed choice at 1 month and in the proportion who self-reported daily adherence to low-dose aspirin at 6 months.

Primary outcome: proportion of participants making an informed choice at 1 month. The multi-dimensional measure of informed choice (MMIC) was used to evaluate participants' informed decision making regarding their self-reported behaviour of taking low-dose aspirin.14 An informed choice was considered to be one where the individual has sufficient knowledge about taking aspirin to prevent CRC and CVD, and where their behaviour (taking aspirin regularly in the past month or not) is concordant with their attitude towards taking aspirin (positive or negative).14 Sufficient knowledge was defined using a total score (range 0 to 12), which was the sum of 11 statements about aspirin, for which the participant answered true, false, or unsure, plus one open-ended item (see Supplementary Box S1 for details). The threshold for sufficient knowledge was a score of 8.2, determined using the Angoff method.15 Attitude about taking aspirin consists of four items on a seven-point Likert scale.
The total attitude score ranges from 4 to 28, with higher scores reflecting a more negative attitude.14 The cut-off for a positive or negative attitude was set at the mid-point of the scale (positive attitude = 4-15; negative attitude = 16-28).14

Primary outcome: proportion of participants who self-reported daily adherence to low-dose aspirin at 6 months. Participants were asked whether they had taken aspirin for at least 5 days per week, consistently, since consenting (yes/no).

Secondary outcomes. Secondary outcomes included the differences between the study arms in: 1. mean decisional conflict scale score at 1 month;16 2. the proportion of participants who self-reported daily adherence to low-dose aspirin at 1 month; 3. the proportion of participants who had discussed aspirin with their GP between baseline and 6 months, collected in an electronic medical record audit at 6 months by a research assistant blinded to participant allocation; and 4. the proportion of participants who reported behavioural changes made to reduce their risk of CRC at 1 and 6 months, including dietary changes, quitting smoking, and screening for CRC, and whether they spoke to their GP about these changes.

Baseline measures. Participant demographics and CRC and CVD risk factors were collected at baseline. Family history was used to evaluate CRC risk, with close relatives diagnosed with CRC before age 55 years or multiple relatives diagnosed with CRC indicating an elevated CRC risk, while self-reported risk factors including diabetes, high cholesterol, current use of high blood pressure medication, family history of CVD, and history of cigarette smoking indicated increased CVD risk for the participant. Socioeconomic status was based on the Index of Relative Socio-economic Advantage and Disadvantage17 using the participant's postcode of residence. The Subjective Numeracy Scale (SNS)18 was used to determine individuals' preferences for numerical versus prose information. The SNS is an eight-item questionnaire in which each item is rated on a Likert scale; the total score is calculated and then averaged, with a higher score indicating a stronger preference for numerical information. Other data collected were participants' age, sex, country of birth, number of medications taken, education, and language spoken at home.

Randomisation and blinding

Participants were randomly allocated 1:1 to the intervention or control arm using a computer-generated allocation sequence produced by the trial statistician, stratified by general practice, sex, and mode of intervention delivery (face-to-face or teletrial), using permuted blocks of random sizes of two and four within each stratum. The GPs and the research assistants who delivered the intervention and control could not be blinded but were not involved in the collection of follow-up data or data analysis. Before consenting, GPs were shown the decision aid and were made aware that their patients might ask about taking low-dose aspirin to prevent CRC. GPs were advised against changing their usual clinical practice during the trial. Trial investigators were blinded to participant allocation. Participants were blinded and advised that they would be randomly assigned to one of two groups and that, in either, they would receive information about reducing their CRC risk.
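Putting the MMIC components described above together, the classification rule can be sketched as follows. The 0-12 knowledge range reflects the 11 true/false items plus one open-ended item as read from the text, an assumption rather than a verified scoring manual:

```python
def informed_choice(knowledge_score, attitude_score, takes_aspirin):
    """Classify a participant per the multi-dimensional measure of
    informed choice (MMIC) as operationalised in the SITA trial.

    knowledge_score : 0-12 (11 true/false items + 1 open-ended item)
    attitude_score  : 4-28, higher = more negative attitude
    takes_aspirin   : bool, regular use in the past month
    """
    sufficient = knowledge_score >= 8.2        # Angoff-derived threshold
    positive_attitude = attitude_score <= 15   # 4-15 positive, 16-28 negative
    concordant = positive_attitude == takes_aspirin
    return sufficient and concordant

# Knowledgeable, negative attitude, not taking aspirin -> informed (choice 2)
print(informed_choice(9.0, 20, False))   # True
# Insufficient knowledge is always an uninformed choice
print(informed_choice(6.0, 10, True))    # False
```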
Sample size

A total of 258 participants (129 per arm) were required to achieve 80% power, with a two-sided Bonferroni-adjusted 2.5% alpha level for the two co-primary outcomes, to estimate a minimum 20% between-arm difference in the proportion of participants regularly using low-dose aspirin at 6 months (39% versus 19%) and in the proportion making an informed choice about low-dose aspirin use at 1 month (54% versus 34%). This allowed for 15% attrition at 6 months.

Statistical analysis

The detailed statistical analysis plan (SAP) is available on the Australian New Zealand Clinical Trials Registry (ID: ACTRN12620001003965).19 All analyses were conducted using Stata (version 17). Descriptive statistics were used to compare baseline participant demographic characteristics between the two study arms. Analysis was intention to treat for the two co-primary outcomes and secondary outcomes 1-3, where all randomised participants were included in the analysis using a multiple imputation approach (see Supplementary Box S2 for details). Exceptions were participants who explicitly withdrew their data before data analysis. For binary outcomes, logistic regression, adjusted for general practice (metropolitan versus rural), brochure type based on sex (male versus female), and mode of trial delivery (face-to-face or teletrial), was used to estimate the odds ratio (OR) (relative measure). Adjusted estimates of the between-arm difference in proportions (absolute measure) were generated using the Stata margins command after fitting the logistic model.20 It was not possible to estimate the between-arm difference in proportions using the generalised linear model with the identity link function and binomial family, as originally planned, because of model convergence issues for several binary outcomes. The between-arm difference in means for the decisional conflict scale was estimated using linear regression adjusted for general practice, brochure type, and delivery mode. In addition, three sensitivity analyses were conducted: 1. adjusted for the pre-specified baseline variables general practice, sex, and mode of delivery using the same regression models; 2. the same as 1, additionally adjusted for age in years and numeracy using the SNS; and 3. participants with complete follow-up only. Estimates of the intervention effect were presented with Bonferroni-adjusted 95% confidence intervals (CIs) and P-values for two comparisons for the co-primary outcomes and with 95% CIs for all other secondary outcomes. The 97.5% CIs were used because there were two co-primary outcomes. Standard 95% CIs were used for the secondary outcomes and sensitivity analyses.

Flow of participants in the trial and loss to follow-up

Between October 2020 and March 2021, 264 participants consented (87.1% of 303 eligible patients) from six general practices and were randomly allocated to the two trial arms (Figure 1). Three participants allocated to the intervention were found to be taking anticoagulants, which for the purposes of this trial were considered contraindicated with low-dose aspirin, and were excluded from analyses. Survey response rates were high, at 85.6% and 89.2% at 1 and 6 months, respectively. Participant characteristics in the two arms were similar, apart from a family history of CVD or CRC (Table 1).
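A minimal sketch of the analysis strategy just described is given below in Python (the trial itself used Stata with multiple imputation, which is omitted here). The data are simulated stand-ins, and the average marginal effect is used as a rough analogue of Stata's margins-based risk difference:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 260                                   # roughly the trial's sample size
df = pd.DataFrame({
    "arm": rng.integers(0, 2, n),         # 1 = decision-aid consultation
    "metro": rng.integers(0, 2, n),       # metropolitan vs rural practice
    "male": rng.integers(0, 2, n),        # sex-specific brochure
    "teletrial": rng.integers(0, 2, n),   # delivery mode
})
# Simulate a binary outcome with a modest intervention effect
p = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * df["arm"])))
df["informed"] = rng.binomial(1, p)

model = smf.logit("informed ~ arm + metro + male + teletrial",
                  data=df).fit(disp=False)
print(f"OR for intervention = {np.exp(model.params['arm']):.2f}")
# Average marginal effect approximates the adjusted risk difference
print(model.get_margeff(at="overall").summary())
```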
Co-primary outcomes

Nearly 18% of participants in the intervention arm reported making an informed choice about taking low-dose aspirin compared with 7.6% in the control arm, an estimated increase of 9.1% (Bonferroni-adjusted 95% CI = 0.29 to 18.5; OR 2.47 [Bonferroni-adjusted 97.5% CI = 0.94 to 6.52; Bonferroni-adjusted P = 0.074]) (Table 2). There was no statistical evidence to support a difference between the intervention and control arms in the proportion of participants reporting daily use of low-dose aspirin at 6 months (10.2% versus 13.8%; between-arm difference of -4.0% [Bonferroni-adjusted 95% CI = -13.5 to 5.5]; OR 0.68 [Bonferroni-adjusted 97.5% CI = 0.27 to 1.70, Bonferroni-adjusted P = 0.346]). Similar results were observed in the sensitivity analyses. Participants with insufficient knowledge about taking aspirin, a negative attitude towards aspirin, and who were not taking low-dose aspirin at 1 month formed the most common group of uninformed choices (41.6% intervention; 60.2% control) (Table 3).

Secondary outcomes

There was no statistical evidence to support between-arm differences in mean decisional conflict (Table 2). In the medical records audit, a higher proportion of intervention participants (17.5%) were identified as having discussed taking aspirin with their GP compared with 9.0% of controls (between-arm difference 8.6% [95% CI = -0.39 to 17.7]; OR 2.09 [95% CI = 0.95 to 4.56, P = 0.066]). Similarly, there was strong evidence that a greater proportion of participants in the intervention arm (30%) reported discussing aspirin with their GP than did control arm participants (12.0% and 15.1% of participants at 1 and 6 months, respectively) (Table 4). There was no statistical evidence to support between-arm differences in the proportions of participants reporting behaviour change for the other modifiable risk factors included in the control brochure, except for self-reported discussion about screening for colorectal cancer at 1 month.

Summary

To the authors' knowledge, this is the first trial to assess the efficacy of a decision aid to support discussions about low-dose aspirin to prevent CRC in an average-risk general practice population.23,24 Meta-analyses of aspirin trials for CVD prevention and CRC informed the Australian CRC chemoprevention guidelines,7 but implementation plans were lacking. The researchers developed the first sex-specific decision aids for low-dose aspirin use as a potential route for clinical implementation of these guidelines.25 This RCT showed an increase in participants' mean knowledge scores and informed decisions about taking low-dose aspirin to prevent CRC at 1 month, with a higher proportion of participants discussing taking aspirin with their GP in the intervention arm, but with little impact on uptake of low-dose aspirin after 6 months. Most participants who made informed choices decided not to take low-dose aspirin, and the estimated between-arm difference of 9.1% in the proportion making an informed choice about taking low-dose aspirin at 1 month fell below the predetermined minimum threshold of 20%, which was considered clinically important by the trial investigators.
Strengths and limitations

Individuals were randomised because the risk of contamination was expected to be low based on a similar trial,26 and the intervention was delivered at an individual level, further reducing the risk of contamination because it targeted individuals rather than groups or communities, making it less likely for control group members to be inadvertently exposed to the intervention. Further, a larger sample size would have been required if the unit of randomisation had been the practice. To minimise the risk of contamination in the control arm, the trial's focus on aspirin was concealed from all participants. However, the intervention effect may have been attenuated by the multiple questionnaires about aspirin use sent to participants. GPs' involvement in the trial might have raised their awareness of the aspirin guidelines and led them to discuss this with their patients. Overall, 15% of control participants reported a discussion about aspirin with their GP and, even though more participants in the intervention arm (30%) reported discussing taking low-dose aspirin with their GP, the content of those discussions is not known, nor is it known how GPs' attitudes towards aspirin might have influenced patients' decisions. Fewer discussions about aspirin were recorded in participants' medical records than were self-reported by participants. This might be a result of social desirability bias influencing participants' self-reported responses, or of GPs not referring to this conversation in their records.

Recruitment and retention rates of trial participants were high, achieved with the use of a novel teletrial and an adapted protocol for online trial delivery during the COVID-19 pandemic.13 A limitation of the efficacy trial design was that it was not possible to determine whether the use of the decision aid by a practice nurse or GP, rather than in a standardised way by a researcher, would result in increased low-dose aspirin use.

Most participants did not demonstrate sufficient knowledge to make an informed decision based on the MMIC measure. A limitation of the MMIC is the need to define a binary cut-point on the knowledge and attitude scales. The Angoff method was used, in which a group of researchers and clinicians reached a consensus on the cut-point for sufficient knowledge. The cut-point for sufficient knowledge may have been set too high (higher than the midpoint of the knowledge score), resulting in a lower proportion of participants being classified as making informed choices. The MMIC was measured at 1 month to allow sufficient time to observe a behaviour change (taking aspirin). Intervention participants might have had lower knowledge scores at 1 month than if they had been surveyed immediately on receipt of the intervention, which may have attenuated the estimated intervention effect. Retained knowledge may be a more important measure of informed decision making than short-term recall.

Comparison with existing literature

To date, only one other decision aid about using aspirin for primary prevention of CRC has been evaluated; it was shown to be acceptable to GPs and pharmacists, but its effectiveness in improving low-dose aspirin uptake and patient-informed choice has not been tested in a trial.27 A systematic review of decision aids for complex healthcare decisions found that they increased knowledge, facilitated discussions between clinicians and patients, and reduced decisional conflict, but their effect on informed decision making was less consistent across trials.8
In the current RCT, the proportion of intervention participants reporting making an informed choice about taking aspirin increased by 9.1% compared with control participants at 1 month, but the overall proportion making informed choices was low. Most participants had insufficient knowledge, leading them not to make informed choices about taking aspirin.

Implications for research and practice

This RCT of a decision aid to implement aspirin guidelines for CRC prevention led to differences in participants' knowledge and informed choice, and prompted discussions between patients and GPs; however, there was no difference in aspirin uptake between the study arms. Since the Australian guidelines recommending low-dose aspirin for CRC prevention were published in 2017,7 the ASPREE trial results have cast doubt on the benefits of low-dose aspirin in primary prevention for many conditions in healthy Australians aged >70 years.28 Furthermore, the US Preventive Services Task Force has modified its recommendations about the use of aspirin for CRC prevention after including aspirin trials with short-term follow-up in its systematic review of the evidence.29 Although the ASPREE trial involved an older population than that covered by the Australian guidelines that recommend considering aspirin, the publicity and media coverage surrounding the ASPREE trial in Australia possibly created confusion for GPs and the general public at the time of this trial. The decision aid described in this RCT was designed to clarify the evidence about the relative benefits and harms of low-dose aspirin for primary prevention of CRC in people aged 50-70 years for both patients and GPs. To implement the guidelines, other implementation strategies with greater engagement of GPs may be necessary.

Figure 1. Participant flow diagram. a) Number of follow-up questionnaires not completed. b) Participants excluded after randomisation as researchers became aware that they were taking blood thinners, which were contraindicated with aspirin, so they were excluded from the trial. c) Follow-up questionnaires. qxs = questionnaires.

Table 1 (continued). Descriptive statistics of baseline characteristics for all participants and by study arm. Participants were asked whether they had more than one relative who had bowel cancer at any age, that is, a family history of bowel cancer that did not meet the exclusion criteria for the trial. IRSAD = Index of Relative Socio-economic Advantage and Disadvantage. SD = standard deviation. a) Unless otherwise stated.

Table 2. Co-primary outcomes and secondary outcomes by study arm for the SITA trial (N = 261). a) Estimated using logistic regression adjusted for general practice, sex, and mode of delivery; estimated using multiple imputation; Bonferroni-adjusted 95% confidence intervals and P-values reported for co-primary outcomes. b) Sensitivity analysis, the same as (a) except also adjusted for age in years and numeracy scale; estimated using multiple imputation. c) Sensitivity analysis, the same as (a) using only participants who completed follow-up. d) Unless otherwise stated. e) Estimated using linear regression adjusted for general practice, sex, and mode of delivery; estimated using multiple imputation. f) Sensitivity analysis, the same as (e) except also adjusted for age and numeracy scale; estimated using multiple imputation. g) Sensitivity analysis, the same as (e) using only participants who completed follow-up. Difference = difference in percentages between arms. OR = odds ratio. SD = standard deviation. SITA = Should I Take Aspirin?
Table 3. Informed and uninformed choices across the three domains of the multidimensional measure of informed choice, at 1 month post-randomisation. Tick marks (✓) indicate having sufficient knowledge, a positive attitude, and behaviour or a decision to take aspirin; crosses (✗) indicate having insufficient knowledge, a negative attitude, and behaviour or a decision not to take aspirin. Participants must have sufficient knowledge about aspirin for colorectal cancer prevention to make an informed choice. Additionally, they need to have an attitude concordant with their behaviour, that is, a positive attitude and a decision to take aspirin (choice 1), or a negative attitude and a decision not to take aspirin (choice 2). All other combinations of knowledge, attitude, and behaviour are considered uninformed choices (choices 3 to 8).
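Where the classification rule in Table 3 needs to be applied programmatically, it can be encoded directly. The following Python sketch is illustrative only — the function and field names are hypothetical conveniences, not from the trial — but it reproduces the rule that an informed choice requires sufficient knowledge plus attitude–behaviour concordance.

```python
from itertools import product

def classify_choice(sufficient_knowledge: bool,
                    positive_attitude: bool,
                    takes_aspirin: bool) -> str:
    """Classify a participant's choice per the MMIC rule in Table 3."""
    # Informed choice: sufficient knowledge AND attitude concordant with
    # behaviour (positive + taking, or negative + not taking).
    if sufficient_knowledge and (positive_attitude == takes_aspirin):
        return "informed"
    # Every other knowledge/attitude/behaviour combination is uninformed.
    return "uninformed"

# Enumerate the eight possible combinations (choices 1-8 in Table 3).
for knowledge, attitude, behaviour in product([True, False], repeat=3):
    print(knowledge, attitude, behaviour, "->",
          classify_choice(knowledge, attitude, behaviour))
```

Only the two concordant combinations with sufficient knowledge print "informed", matching choices 1 and 2 in the table.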
2024-03-27T06:18:30.528Z
2024-03-25T00:00:00.000
{ "year": 2024, "sha1": "c76cb85a3699cfd45a0779bd634279dd203ece82", "oa_license": "CCBY", "oa_url": "https://bjgp.org/content/bjgp/early/2024/03/24/BJGP.2023.0385.full.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d74733eb99a6419e2372ef3bb09b269b249e349c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119318235
pes2o/s2orc
v3-fos-license
On the Generalizations of Br\"{u}ck Conjecture
We obtain similar types of conclusions as that of Br\"{u}ck [1] for two differential polynomials, which in turn radically improve and generalize several existing results. Moreover, a number of examples have been exhibited to justify the necessity or sharpness of some conditions used in the paper. Finally, we pose an open problem for future research.

Introduction, Definitions and Results
Let f and g be two non-constant meromorphic functions in the open complex plane C. If for some a ∈ C ∪ {∞}, f and g have the same set of a-points with the same multiplicities, we say that f and g share the value a CM (counting multiplicities), and if we do not consider the multiplicities then f, g are said to share the value a IM (ignoring multiplicities). When a = ∞, the zeros of f − a mean the poles of f. Let m be a positive integer or infinity and a ∈ C ∪ {∞}. We denote by E_{m)}(a; f) the set of all a-points of f with multiplicities not exceeding m, where an a-point is counted according to its multiplicity. Also we denote by \overline{E}_{m)}(a; f) the set of distinct a-points of f(z) with multiplicities not greater than m. If for some a ∈ C ∪ {∞}, E_{m)}(a, f) = E_{m)}(a, g) (resp. \overline{E}_{m)}(a, f) = \overline{E}_{m)}(a, g)) holds for m = ∞, we say that f, g share the value a CM (resp. IM). It will be convenient to let E denote any set of positive real numbers of finite linear measure, not necessarily the same at each occurrence. For any non-constant meromorphic function f, we denote by S(r, f) any quantity satisfying S(r, f) = o(T(r, f)) (r → ∞, r ∈ E). A meromorphic function a (≢ ∞) is called a small function with respect to f provided that T(r, a) = S(r, f) as r → ∞, r ∈ E. If a = a(z) is a small function, we define that f and g share a IM or a CM according as f − a and g − a share 0 IM or 0 CM respectively. We use I to denote any set of infinite linear measure of 0 < r < ∞. Also, the hyper order ρ_2(f) of f(z) is defined by

ρ_2(f) = limsup_{r→∞} (log log T(r, f)) / (log r).

Nevanlinna's uniqueness theorem shows that two meromorphic functions f and g sharing five values IM are identical. Rubel and Yang [14] first showed for entire functions that in the special situation where g is the derivative of f, one usually needs sharing of only two values CM for their uniqueness. Two years later, Mues and Steinmetz [13] proved that actually in the above case one does not even need the multiplicities. They proved the following result:

Theorem A. [13] Let f be a non-constant entire function. If f and f′ share two distinct values a, b IM, then f′ ≡ f.

Subsequently, there were more generalizations with respect to higher derivatives as well. A natural question would be to investigate the relation between an entire function and its derivative counterpart for one CM shared value. In 1996, in this direction the following famous conjecture was proposed by Brück [1]:

Conjecture: Let f be a non-constant entire function such that the hyper order ρ_2(f) of f is not a positive integer or infinite. If f and f′ share a finite value a CM, then (f′ − a)/(f − a) = c for some non-zero constant c.

Brück himself proved the conjecture for a = 0. For a ≠ 0, Brück [1] showed that under the assumption N(r, 0; f′) = S(r, f) the conjecture was true without any growth condition when a = 1; that is, (f′ − 1)/(f − 1) is a nonzero constant. The following example shows that one cannot simply replace the value 1 by a small function a(z) (≢ 0, ∞). So in this case additional suppositions are required.
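Restated compactly, as a display-form paraphrase of the definitions and statement just given (not an additional hypothesis):

```latex
\[
  \rho_2(f) \;=\; \limsup_{r\to\infty}\,\frac{\log\log T(r,f)}{\log r}.
\]
\textbf{Conjecture (Br\"uck, 1996).}
Let $f$ be a non-constant entire function such that $\rho_2(f)$ is neither
a positive integer nor infinite. If $f$ and $f'$ share a finite value $a$
CM, then
\[
  \frac{f'-a}{f-a}=c
  \qquad\text{for some constant } c\in\mathbb{C}\setminus\{0\}.
\]
```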
However, for entire functions of finite order, Yang [15] removed the supposition N(r, 0; f′) = 0 and obtained the following result.

Theorem C. [15] Let f be a non-constant entire function of finite order and let a (≠ 0) be a finite constant.

Theorem C may be considered as a solution to the Brück conjecture. Next we consider the following examples, which show that in Theorem B one cannot simultaneously replace "CM" by "IM" and "entire function" by "meromorphic function". Let f(z) = 1 + tan z; clearly f(z) − 1 = tan z and f′(z) − 1 = tan^2 z, so f and f′ share 1 IM and N(r, 0; f′) = 0. In the second example, again N(r, 0; f′) = 0. So in both examples we see that the conclusion of Theorem B ceases to hold. From the above discussion it is natural to ask the following question.

Question 1.1. Can the conclusion of Theorem B be obtained for a non-constant meromorphic function sharing a small function IM together with its k-th derivative counterpart?

Zhang [17] extended Theorem B to meromorphic functions and also studied the CM value sharing of a meromorphic function with its k-th derivative. Meanwhile a new notion of scaling between CM and IM, known as weighted sharing ([5]), appeared in the uniqueness literature. In 2004, Lahiri-Sarkar [8] employed the weighted value sharing method to improve the results of Zhang [17]. In 2005, Zhang [18] further extended the results of Lahiri-Sarkar to a small function and proved the following result for IM sharing.

Theorem D. [18] Let f be a non-constant meromorphic function and k (≥ 1) be an integer. Also let a ≡ a(z) (≢ 0, ∞) be a meromorphic small function. Suppose that f − a and f^{(k)} − a share 0 IM. If

We now recall the following two theorems due to Liu and Yang [10] in the direction of IM sharing related to Theorem B.

Theorem E. [10] Let f be a non-constant meromorphic function. If f and f′ share 1 IM and if

Theorem F. [10] Let f be a non-constant meromorphic function and k be a positive integer. If f and f^{(k)} share 1 IM and .

In 2008, improving the result of Zhang [18], Zhang and Lü [19] further investigated the analogous problem of the Brück conjecture for the n-th power of a meromorphic function sharing a small function with its k-th derivative and obtained the following theorem.

Theorem G. [19] Let f be a non-constant meromorphic function and k (≥ 1) and n (≥ 1) be integers. Also let a ≡ a(z) (≢ 0, ∞) be a meromorphic small function. Suppose that f^n − a and f^{(k)} − a share 0 IM. If … for r ∈ I, where 0 < λ < 1, then (f^{(k)} − a)/(f^n − a) = c for some constant c ∈ C \ {0}.

At the end of [19] the following question was raised by Zhang and Lü [19]: What will happen if f^n and [f^{(k)}]^m share a small function? In order to answer the above question, Liu [9] obtained the following result.

Theorem H. [9] Let f be a non-constant meromorphic function and k (≥ 1), n (≥ 1) and m (≥ 2) be integers. Also let a ≡ a(z) (≢ 0, ∞) be a meromorphic small function. Suppose that f^n − a and (f^{(k)})^m − a share 0 IM. If

Next we recall the following definitions. The numbers d(P) = min{d(M_j) : 1 ≤ j ≤ t} and k (the highest order of the derivative of f in P[f]) are called respectively the lower degree and the order of P[f]. As (f^{(k)})^m is simply a special differential monomial in f, it will be interesting to investigate whether Theorems D-H can be extended to differential polynomials generated by f. In this direction, recently Li and Yang [11] improved Theorem D in the following manner.

Theorem I. [11] Let f be a non-constant meromorphic function and P[f] be a differential polynomial generated by f.
Also let a ≡ a(z) (≢ 0, ∞) be a small meromorphic function. Suppose … . So we see that Theorem I always holds for a monomial without any condition on its degree. But for a general differential polynomial one cannot eliminate the supposition (t − 1)d(P) ≤ … .

We also observe that in the subsequent research on Brück's conjecture and its generalizations, one setting among the sharing functions has been restricted to only various powers of f, not involving any other variants such as derivatives of f, whereas the generalizations have been made in the second setting. This observation motivates the following question.

Question 1.2. Can a Brück-type conclusion be obtained when two different differential polynomials share a small function IM, or even under relaxed sharing notions?

The main intention of the paper is to obtain a possible answer to the above question in such a way that it improves, unifies and generalizes all of Theorems D-H. The following theorem is the main result of the paper. Henceforth by b_j, j = 1, 2, …, t and c_i, i = 1, 2, …, l we denote small functions of f, and we also suppose that P[f] and Q[f] are two differential polynomials generated by f.

Theorem 1.1. Let f be a non-constant meromorphic function, m (≥ 1) be a positive integer or infinity and a ≡ a(z) (≢ 0, ∞) be a small meromorphic function. Suppose that P[f] and Q[f] are two differential polynomials generated by f such that Q[f] contains at least one derivative.

The following five examples show that (1.7) is not necessary when (i) and (ii) of Theorem 1.1 occur. Then (1.7) is not satisfied. Here we note that … and (e^z + 1)^4 share 1/z CM, and (1.7) is not satisfied. We now give the next five examples, the first two of which show that both the conditions stated in (ii) are essential in order to obtain conclusion (a) in Theorem 1.1 for homogeneous differential polynomials P[f], whereas the remaining three substantiate the same for non-homogeneous differential polynomials. Then clearly P[f] = e^{iz} + 2 and Q[f] = −e^{−iz}, and so they share 1 CM. Here (1.7) is satisfied.

Though we use the standard notations and definitions of value distribution theory available in [4], we explain some definitions and notations which are used in the paper.

(i) N(r, a; f |≥ p) (resp. \overline{N}(r, a; f |≥ p)) denotes the counting function (resp. reduced counting function) of those a-points of f whose multiplicities are not less than p.
(ii) N(r, a; f |≤ p) (resp. \overline{N}(r, a; f |≤ p)) denotes the counting function (resp. reduced counting function) of those a-points of f whose multiplicities are not greater than p.

… E_{2)}(r, a; g). We denote by \overline{N}_{f≥k+1}(r, a; f | g ≠ a) (resp. \overline{N}_{g≥k+1}(r, a; g | f ≠ a)) the reduced counting function of those a-points of f and g for which p ≥ k + 1 and q = 0 (resp. q ≥ k + 1 and p = 0).

Definition 1.5. [6] Let a, b ∈ C ∪ {∞}. We denote by N(r, a; f | g ≠ b) the counting function of those a-points of f, counted according to multiplicity, which are not the b-points of g.

Definition 1.6. [5] Let f, g share a value a IM. We denote by N_*(r, a; f, g) the reduced counting function of those a-points of f whose multiplicities differ from the multiplicities of the corresponding a-points of g.

Lemmas
In this section we present some lemmas which will be needed in the sequel. Let F, G be two non-constant meromorphic functions. Henceforth we shall denote by H the following function:

H = (F″/F′ − 2F′/(F − 1)) − (G″/G′ − 2G′/(G − 1)),

where d = max{n, m}. [2] Let f be a meromorphic function and P[f] be a differential polynomial. Then

Proof. Let z_0 be a pole of f of order r, such that b_j(z_0) ≠ 0, ∞ for 1 ≤ j ≤ t.
Then it would be a pole of P[f] of order at most r·d(P) + Γ_P − d(P). Since z_0 is a pole of f^{d(P)} of order r·d(P), it follows that z_0 would be a pole of P[f]/f^{d(P)} of order at most Γ_P − d(P). Next suppose z_1 is a zero of f of order s (> k), such that b_j(z_1) ≠ 0, ∞ for 1 ≤ j ≤ t. Clearly it would be a zero of M_j(f) of order s·n_{0j} + (s − 1)n_{1j} + … . If z_1 is a zero of f of order s ≤ k, such that b_j(z_1) ≠ 0, ∞ for 1 ≤ j ≤ t, then it would be a pole of P …

Proof. For a fixed value of r, let E_1 = {θ ∈ [0, 2π] : |f(re^{iθ})| ≤ 1} and E_2 be its complement. Since by definition Σ_{i=0}^{k} n_{ij} ≥ d(Q) for every j = 1, 2, …, l, it follows that on E_1 … . Also we note that … . Since on E_2, 1/|f(z)| < 1, we have … . So, using Lemmas 2.5, 2.6 and the first fundamental theorem, we get …

Proof of the theorem
Proof of Theorem 1.1. … (F, G) except the zeros and poles of a(z). Now we consider the following cases. Let z_0 be a simple zero of F − 1. Then by a simple calculation we see that z_0 is a zero of H and hence …,

where C, D are constants and C ≠ 0. From (3.5) it is clear that F and G share 1 CM. We first assume that D ≠ 0. Then by (3.5) we get … . If C/D = 1, we get from (3.5) …, i.e., … . When …, then by the first fundamental theorem, (3.6), (3.10), Lemmas 2.4, 2.5 and 2.6, we get that … . From (3.11) it follows that …, which is absurd. If P[f] is a differential polynomial, then we consider the following two subcases.

Concluding Remark and an Open Question
From the statement of Theorem 1.1 one can see that when (ii) happens one cannot obtain the conclusion of the Brück conjecture as a special case. We also see from (3.6) that if N(r, ∞; f) = S(r, f), then the conclusion of the Brück conjecture is satisfied for any two arbitrary differential polynomials P[f] and Q[f], where Q[f] contains at least one derivative. The problem arises for those classes of meromorphic functions whose poles are relatively few in number, such as entire functions; thus poles have a vital contribution in this perspective. We point out that the counterexamples (1.9)-(1.13), which demonstrate the indispensability of the conditions in (ii), have also been formed for entire functions. So the following question still remains open for further investigation.

Can a Brück-type conclusion be obtained for two arbitrary differential polynomials P[f] and Q[f] generated by the class of meromorphic functions containing relatively few poles, sharing a small function a ≡ a(z) (≢ 0, ∞) IM?
2016-08-08T18:28:26.000Z
2016-04-30T00:00:00.000
{ "year": 2016, "sha1": "de8a35de6da5c3fa7c404532e9f5e286f8e712c7", "oa_license": null, "oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201614139533054&method=download", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "de8a35de6da5c3fa7c404532e9f5e286f8e712c7", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
3262665
pes2o/s2orc
v3-fos-license
Adult-onset Chronic Recurrent Multifocal Osteomyelitis with High Intensity of Muscles Detected by Magnetic Resonance Imaging, Successfully Controlled with Tocilizumab
Chronic recurrent multifocal osteomyelitis (CRMO) is an autoinflammatory bone disorder that generally occurs in children and predominantly affects the long bones with marginal sclerosis. We herein report two cases of adult-onset CRMO involving the tibial diaphysis bilaterally, accompanied by polyarthritis. Magnetic resonance imaging (MRI) showed both tibial osteomyelitis and high intensity of the extensive lower leg muscles. Anti-interleukin-6 therapy with tocilizumab (TCZ) effectively controlled symptoms and inflammatory markers in both patients. The high intensity of the lower leg muscles detected by MRI also improved. These cases demonstrate that CRMO should be included in the differential diagnosis of adult patients with bone pain, inflammation, and high intensity of the muscles detected by MRI. TCZ may therefore be an effective therapy for muscle inflammation in CRMO.

Introduction
Chronic recurrent multifocal osteomyelitis (CRMO) is an autoinflammatory bone disorder characterized by chronic nonbacterial osteomyelitis, multifocal bone lesions, and multiple recurrences (1-3). Generally, CRMO is a pediatric disease, with a mean age at onset of 10 years old, and it is seen more frequently in girls (1, 3). The metaphyses of the long bones and clavicle are the most commonly affected sites, but the pelvis, vertebrae, and mandible are also frequently involved (1, 3, 4). Bone lesions with marginal sclerosis are one of the typical features of CRMO (5). Several diseases have been reported to accompany CRMO, including palmo-plantar pustulosis, psoriasis, Crohn's disease, acne, and Sweet syndrome (1). In addition, CRMO has also been categorized as a juvenile form of SAPHO (synovitis, acne, pustulosis, hyperostosis, and osteomyelitis) syndrome, implying that adult-onset CRMO should be included in the definition of SAPHO syndrome. The latter consists of several overlapping diseases, including pustulotic arthro-osteitis, sternocostoclavicular hyperostosis, and CRMO, and a high frequency of cutaneous manifestations (56-84%) (6). In the absence of skin lesions, the diagnosis of SAPHO syndrome, especially in patients with CRMO alone, is often difficult. Furthermore, because CRMO is mainly a pediatric disease, the diagnosis of adult-onset CRMO may be delayed. Indeed, there are only a few reports of adult patients with SAPHO syndrome in which the phenotype was CRMO without any skin manifestations. These cases were characterized by clavicular (7), vertebral, and sacroiliac (8) joint pain or involved the vertebrae and femoral neck (9). We herein report the cases of two adult patients with CRMO without any typical skin lesions. The diaphyses of the long bones were involved and polyarthritis was present as well. In both patients, high intensity of muscles detected by magnetic resonance imaging (MRI) complicated the diagnosis. These cases demonstrate the importance of including adult-onset CRMO in the differential diagnosis of patients presenting with bone pain, inflammation and high intensity of muscles detected by MRI.

Case 1
A 48-year-old man was admitted to our hospital because of right lower leg pain. From the age of 17, he had been experiencing episodes of right lower leg pain of one week's duration that recurred 2-3 times per year. These episodes were not accompanied by fever.
At age 38, he was admitted to a local hospital for swelling and redness of his right lower leg. His C-reactive protein (CRP) level was 3.56 mg/dL and his white blood cell (WBC) count 8,500/μL. A biopsy of the right tibia showed no malignant cells and non-bacterial osteomyelitis was therefore tentatively diagnosed. Although no treatment was started, his right lower leg pain gradually improved, but did not resolve entirely. At age 41, he was admitted to another hospital because of fever and polyarthralgia involving his shoulders bilaterally, his right elbow, and his right wrist. His body temperature was 38-39°C and his right elbow, right wrist, and left ankle were swollen. His CRP was 10 mg/dL; his rheumatoid factor (RF), anti-cyclic citrullinated peptide (CCP), and anti-nuclear antibody were all negative. His matrix metalloproteinase-3 (MMP-3) level was normal (104.0 ng/mL) as well. An MRI scan showed enhancement of the fascia of his anterior right lower leg, suggesting fasciitis. The bone marrow was of slightly heterogeneous intensity, but was not enhanced, consistent with his previous osteomyelitis. Reactive arthritis was considered, and he was started on prednisolone (PSL) at 5 mg/day and salazosulfapyridine (SSZ) therapy. Treatment reduced the polyarthritis, but the abnormal CRP level (1-3 mg/dL) and pain in his right lower leg persisted. After moving to the city where our hospital is located, the patient began treatment under our supervision. Since autoinflammatory disease was suspected, he was started on 0.5 mg colchicine/day, but it was not effective. He was therefore treated with methotrexate (MTX) and minodronate and his PSL dose was increased to 10 mg/day, but his right lower leg pain and elevated CRP continued. Loxoprofen was therefore prescribed for pain relief. At 48 years of age, he was admitted to our hospital. He did not have fever (36.6°C), but he complained of pain in his right lower leg and right upper arm. His CRP level was 4.14 mg/dL and his WBC count was 10,700/μL. His creatine kinase (CK) level was low (29 U/L) and he had no elevations in the levels of anti-myeloperoxidase and anti-proteinase 3 antineutrophil cytoplasmic antibodies. On X-rays, bilateral osteosclerosis of the tibias was observed (right>left) (Fig. 1). On the MRI scan of his lower legs, the tibial bone marrow and muscles were enhanced bilaterally (Fig. 2a, b). On 99mTc-hydroxymethylene diphosphonate (HMDP) bone scintigraphy there was uptake from the proximal metaphysis to the diaphysis of the tibias bilaterally and at the right humeral diaphysis (Fig. 3). His serum interleukin (IL)-6 and tumor necrosis factor (TNF)-α levels were elevated: 57.2 pg/mL (normal range: ≤4.0) and 6.5 pg/mL (normal range: 0.6-2.8), respectively; his serum IL-1β level was ≤10 pg/mL, and his serum IL-10 level was negative (<2 pg/mL). Chronic recurrent non-bacterial osteomyelitis was therefore suspected and he was started on tocilizumab (TCZ, 8 mg/kg) as an intravenous drip every 4 weeks. The CRP level was 5.04 mg/dL just before the second administration of TCZ, but it decreased to 0.31 mg/dL just before the third administration. Loxoprofen could be stopped because his symptoms of pain in his right lower leg and right upper arm improved. After the fourth administration of TCZ, the CRP levels stayed negative and the PSL dose was tapered. The high intensity of the lower leg muscles detected on MRI also diminished over eight months.
However, the higher linear enhanced intensity in the tibial diaphysis remained, and chronic or old osteomyelitis was considered (Fig. 2c, d).

Case 2
A 26-year-old man was admitted to our hospital because of bilateral lower leg pain and polyarthralgia. He had been taking levetiracetam and valproic acid because of epilepsy, which had been diagnosed when he was 13 years old. At 23 years of age, the patient developed polyarthritis of the third and fourth metacarpophalangeal (MP) joints bilaterally, the right fourth proximal interphalangeal (PIP) joint, and his elbows bilaterally. Pain in his right upper arm was also present. He did not have fever. His CRP level was 4.41 mg/dL, but his RF, anti-CCP and anti-nuclear antibody levels were negative. The MRI scan of his right hand revealed synovitis and bone marrow edema of the fourth and fifth MP joints (Fig. 4a), suggestive of rheumatoid arthritis (RA). He was started on MTX, which improved his arthritis slightly, but his CRP level remained elevated. At 24 years of age, he experienced bilateral lower leg pain. The MRI showed enhanced bone marrow of the right tibial diaphysis associated with cortical bone hypertrophy, suggesting chronic osteomyelitis (Fig. 4b). The muscles of his right lower leg (Fig. 4b) and the right triceps brachii muscle (Fig. 4c) were also enhanced, suggesting myositis. The X-ray and CT images showed hypertrophy of the cortical bone of the tibia bilaterally (Fig. 5). 67Ga scintigraphy showed slightly increased uptake in the middle of both lower legs. His CRP level was high (6.35 mg/dL), but his CK (87 IU/L), WBC count (6,730/μL), acetylcholinesterase (11.6 U/L), and soluble IL-2 receptor (279 U/mL) levels were all within the normal range. Anti-Jo-1 antibody was negative (<5.0 index). His electromyogram was normal and a muscle biopsy was not performed. Dermatomyositis, polymyositis, sarcoidosis, and lymphoproliferative syndrome were excluded. A subcutaneous nodule in the left upper arm was palpable and a biopsy was performed. Necrotizing vasculitis of a subfascial small artery was diagnosed pathologically (Fig. 6). Since rheumatoid vasculitis was suspected at that time, he was started on PSL (10 mg/day), with the dose later increased to 20 mg/day. He was also treated with MTX (12 mg/week). At 26 years of age, he was taking 13 mg PSL/day and 12 mg MTX/week, but his bilateral lower leg pain and polyarthralgia became worse. He was admitted to our hospital. His temperature was 36.5°C and his left knee was swollen. He complained of bilateral lower leg pain, which was exacerbated by pressure, but he had no muscle weakness. His CRP was 7.07 mg/dL and his WBC count 8,430/μL. Both his CK and MMP-3 levels were normal (22 IU/L and 96.7 ng/mL, respectively) and his RF and anti-CCP antibody levels remained negative. On MRI, the muscles of his lower legs were bilaterally enhanced and his bone marrow was of slightly higher than normal intensity (Fig. 7a, b). On X-rays, there was no evidence of the joint destruction or erosion typically associated with RA. His serum IL-6 and TNF-α levels were elevated: 74.4 pg/mL (normal range: ≤4.0) and 22.1 pg/mL (normal range: 0.6-2.8), respectively; his serum IL-1β level was 11 pg/mL (normal range: ≤10) and his serum IL-10 level was negative (<2 pg/mL). Autoinflammatory disease was suspected and he was started on 1 mg colchicine/day, but it was not effective.
Chronic recurrent nonbacterial osteomyelitis was suspected, and the patient was then treated with TCZ (162 mg subcutaneously biweekly). After the first administration of TCZ, the bilateral lower leg pain improved and the CRP level decreased to normal. The PSL dose could be tapered and his left knee swelling decreased after three months. The high intensity of the lower leg muscles detected on MRI also decreased after 10 months (Fig. 7c, d). The intensity of the bone marrow was still slightly higher than normal, which was suggestive of chronic or old osteomyelitis (Fig. 7c).

Discussion
Both of the patients presented herein had adult-onset nonbacterial CRMO of the diaphysis, high intensity of muscles detected by MRI, and polyarthritis. One patient also had cutaneous nodules of necrotizing vasculitis. TCZ was effective for controlling the disease in both cases. Generally, the bone pain of CRMO is multifocal, typically involving the femoral and tibial metaphyses, the pelvis, clavicles, and vertebrae (1, 4, 10-12); however, as in our patients, the diaphyses of the long bones may also be involved (11). Arthritis is associated with CRMO in 11-38% of cases (1, 13). In our patients, synovitis was detected on MRI, but no joint destruction, as is seen in patients with RA, was observed. CRMO is mainly seen in children, with only a few reports of patients with adult-onset CRMO (7, 8). SAPHO syndrome covers a broad spectrum of findings (6) and is diagnosed when a patient has as few as one of the four following nonspecific features: joint lesions accompanying severe acne; joint lesions accompanying palmoplantar pustulosis; osteohypertrophy of the extremities, spine, or sternocostoclavicular joints; and CRMO (9, 14). Since CRMO is one of the criteria for SAPHO syndrome, regardless of the presence or absence of skin lesions, our patients with CRMO could also have been diagnosed with SAPHO syndrome. However, skin manifestations are a major feature of SAPHO syndrome, and CRMO without skin lesions complicates its diagnosis. Thus, CRMO in adults should be distinguished diagnostically from SAPHO syndrome. Since bacterial osteomyelitis and malignancy are the most important differential diagnoses of CRMO, many patients with CRMO undergo bone biopsies. Jansson et al. proposed a clinical scoring system for nonbacterial osteitis to avoid unnecessary bone biopsies (5). Wipff et al. used this scoring system in patients with CRMO, considered to be the most severe form of nonbacterial osteitis (1). The clinical score is based on seven factors, each of which is assigned a score: normal blood cell count (score: 13), symmetric lesions (10), lesions with marginal sclerosis (10), normal body temperature (9), vertebral, clavicular, or sternal lesions (8), two or more radiologically proven lesions (7), and CRP ≥1 mg/dL (6). A score ≥39 (out of a maximum of 63) is considered to be diagnostic of nonbacterial osteitis. Both of our patients had scores of 55, corresponding to 6 out of 7 risk factors (all except vertebral, sternal, or clavicular lesions) (Table). Thus, both of our patients were diagnosed with CRMO.
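For readers who wish to apply the score, the arithmetic above can be encoded in a few lines. The following Python sketch is illustrative only — the factor labels are hypothetical shorthand — and simply sums the published weights and checks the ≥39 threshold.

```python
# Weights of the seven Jansson factors as summarised above (maximum = 63).
JANSSON_WEIGHTS = {
    "normal_blood_cell_count": 13,
    "symmetric_lesions": 10,
    "lesions_with_marginal_sclerosis": 10,
    "normal_body_temperature": 9,
    "vertebral_clavicular_or_sternal_lesions": 8,
    "radiologic_lesions_ge_2": 7,
    "crp_ge_1_mg_dl": 6,
}

def jansson_score(findings: dict) -> int:
    """Sum the weights of the factors present; >=39 suggests nonbacterial osteitis."""
    return sum(w for factor, w in JANSSON_WEIGHTS.items() if findings.get(factor))

# Both patients reported here had 6 of 7 factors (all except
# vertebral/clavicular/sternal lesions), giving a score of 55.
patient = {factor: True for factor in JANSSON_WEIGHTS}
patient["vertebral_clavicular_or_sternal_lesions"] = False
score = jansson_score(patient)
assert score == 55 and score >= 39
```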
The high intensity of muscles detected on MRI suggested the presence of myositis in both patients, but the CK levels were normal and muscle weakness was absent. There have been several reports of patients with CRMO and myositis. Whole-body MRI was performed in nine pediatric patients with CRMO, and in one patient myositis of the tibias and femurs was detected (15). A soft-tissue reaction along the femoral diaphysis was detected by MRI in a pediatric patient with CRMO (11). These reports suggest an association of either myositis or soft-tissue inflammation with osteomyelitis because of the close anatomic relationship of the foci of myositis and osteomyelitis in these cases. A patient with CRMO associated with interstitial myositis has also been reported (16). In an 11-year-old boy, CRMO associated with myositis of the quadriceps femoris and gastrocnemius muscles, detected on MRI and fluorodeoxyglucose (FDG) positron emission tomography/CT, was diagnosed, but the osteomyelitis did not involve his femurs and tibias. His CK level was normal and muscle weakness was not appreciable. A muscle biopsy revealed interstitial myositis. Similarly, in our patients, myositis was present not along, but extensively around, the areas of osteomyelitis, and the CK levels were normal. The electromyogram of patient 2 was normal, as were his muscle fibers. Interstitial myositis may be associated with systemic and local inflammation, but muscle biopsies were not performed in either patient because muscle weakness was absent and the CK levels were normal. Further studies are needed to clarify the mechanism of myositis in patients with CRMO. In patient 2, necrotizing vasculitis was detected following a biopsy of the subcutaneous nodule. There have been few reports of patients with necrotizing vasculitis and CRMO/SAPHO syndrome. In a patient with cutaneous necrotizing vasculitis and familial Mediterranean fever (FMF), one of the most frequent autoinflammatory diseases in adults, the subcutaneous nodules arising from the necrotizing vasculitis were successfully controlled with colchicine (17). In another patient with FMF and polyarteritis nodosa, interferon-alpha therapy was successful (18). Since, like FMF, CRMO is an autoinflammatory disease, necrotizing vasculitis can be considered a feature of CRMO, as was the case in our patient. The pathogenesis of CRMO is still not well understood. Familial cases of CRMO have been reported and the susceptibility gene has been pinpointed to chromosome 18 (18q21.3-18q22) (19), but this does not pathologically explain the non-familial cases. Recently, an imbalance between proinflammatory (TNF-α, IL-6) and anti-inflammatory (IL-10) cytokines was proposed to play a key role in CRMO (20). Lipopolysaccharide stimulation increases IL-10 expression by monocytes from patients with CRMO, but the level of elevation is significantly lower than that seen in normal controls (21), which may reflect an impaired recruitment of the transcription factor specificity protein (Sp)-1 to the IL-10 promoter and reduced H3 serine 10 (H3S10) phosphorylation (21). Thus, a reduced activation of mitogen-activated protein (MAP) kinase, which is upstream of Sp1 and H3S10 phosphorylation signaling, was suggested to be involved in CRMO (22). In both of our patients, the serum IL-6 and TNF-α levels were elevated while the serum IL-10 levels were negative; these findings are in line with those of a previous study (21). In children with CRMO, non-steroidal anti-inflammatory drugs (NSAIDs) are the first-line treatment and are effective in 73% of cases (1). Naproxen results in a symptom-free status within 6 months in 43% of treated patients (13). Pamidronate also dramatically improves symptoms in patients with CRMO (23). The authors of the latter study suggested that pamidronate changes the relative proportions of cytokines (23).
In addition to NSAIDs and pamidronate, patients with refractory CRMO have been treated with glucocorticoids, MTX, sulfasalazine, and anti-TNF therapy (1, 3, 13, 24). Wipff et al. performed a cluster analysis of a CRMO cohort; all of the patients with the severe phenotype were male and 97% of them had multifocal disease (1). These severe-phenotype patients also had the worst prognosis, with a remission rate of 22%, despite a higher rate of treatment with bisphosphonates and/or anti-TNF antibodies (33%) (1). Both of our patients were male and had multifocal lesions; their symptoms and inflammation had continued for many years, even after treatment with NSAIDs, PSL, and MTX. Thus, male patients with adult-onset CRMO characterized by multifocal lesions might develop more severe forms of this predominantly pediatric disease. Despite many reports in which autoinflammatory diseases were successfully treated using TCZ (25-27), ours is the first report of the successful use of TCZ for the treatment of CRMO. Our decision to use TCZ was based on the high serum IL-6 levels in our patients and previous reports of lower remission rates with anti-TNF antibodies in severe-phenotype patients. The reductions in both the symptoms and inflammatory parameters indicate that TCZ may be effective for treating severe forms of CRMO. The high intensity of the lower leg muscles detected by MRI significantly improved with TCZ therapy, but the high intensity of the bone marrow did not change in our patients. In Case 2, the bone marrow enhancement on MRI prior to TCZ therapy (Fig. 7a) had already improved with PSL and MTX therapy, as shown in Fig. 4b. In Case 1, the bone marrow showed a slightly heterogeneous intensity without any treatment at 41 years of age, although a biopsy of the right tibia had shown non-bacterial osteomyelitis at 38 years of age. MRI at 48 years of age showed a slightly higher intensity of bone marrow under PSL and MTX therapy, and TCZ did not improve the intensity. In our cases, adding TCZ to PSL and MTX did not significantly improve osteomyelitis according to the MRI findings. However, PSL could be tapered without an exacerbation of osteomyelitis, and TCZ was therefore thought to have successfully controlled osteomyelitis in our patients. In summary, we herein described the cases of two patients with adult-onset CRMO involving the diaphyses of the long bones and accompanied by high intensity of muscles on MRI and polyarthritis. Necrotizing vasculitis in one patient was also evident, based on a biopsy of a cutaneous nodule. In adult-onset CRMO, anti-IL-6 therapy should therefore be considered in patients with muscle inflammation not responding to NSAIDs, MTX and PSL.
2017-08-30T21:19:26.791Z
2017-08-10T00:00:00.000
{ "year": 2017, "sha1": "7ac5d98162d54fa952d49d8e20b7706c83171f56", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/internalmedicine/56/17/56_8473-16/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7ac5d98162d54fa952d49d8e20b7706c83171f56", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
208021393
pes2o/s2orc
v3-fos-license
Function Based Brain Modeling and Simulation of an Ischemic Region in Post-Stroke Patients using the Bidomain
BACKGROUND Several studies have shown that post-stroke patients develop divergent activity in the sensorimotor areas of the affected hemisphere of the brain compared to healthy people during motor tasks. Proper mathematical models will help us understand this activity and clarify the associated underlying mechanisms. New Method. This research describes an anatomically based brain computer model in post-stroke patients. We simulate an ischemic region for arm motion using the bidomain approach. Two scenarios are considered: a healthy subject and a post-stroke patient with motion impairment. Next, we limit the volume of propagation, considering only the sensorimotor area of the brain. Comparison with existing methods. In comparison to existing methods, we combine the use of the bidomain for modeling the propagation of the electrical activity across the brain volume with functional information to limit the volume of propagation and the position of the expected stimuli, given a specific task. In contrast, using the bidomain without limiting the functional volume propagates the electrical activity into unexpected areas. RESULTS To validate the simulation, we compare the activity with patient measurements using functional near-infrared spectroscopy during arm motion (n=5) against controls (n=3). The results are consistent with empirical measurements and previous research and show that there is a disparity in the position and number of spikes in post-stroke patients in contrast to healthy subjects. CONCLUSIONS These results hold promise for improving the understanding of brain deterioration in stroke patients and the re-arrangement of brain networks. Furthermore, this work demonstrates the use of function-based brain modeling.

Introduction
According to the World Health Organization (WHO), stroke is the second leading cause of death and the third leading cause of disability worldwide (World Health Organization, 2000-2012). Stroke may lead to motion impairment in adults, such as post-stroke hemiplegia or hemiparesis caused by the effects of focal lesions in the brain (Rathore et al., 2001). When a stroke occurs, a portion of the brain can be affected and impair motor functions, generally within one hemisphere (Bonita and Beaglehole, 1988). Most stroke survivors present motion impairment in their arms and there is high variability in the recovery process. Hence, neuromuscular modeling and simulation have vast potential to improve patient care by helping to elucidate the cause-effect relationships in subjects with neurological and musculo-skeletal impairments and establish effective rehabilitation treatments (Cramer, 2008). The early mathematical models of the brain modeled the spatiotemporal dynamics of the brain's electrical activity, simulating sources as dipoles following Poisson's equation (He et al., 2002; He, 2010). More recent models focus on the relationship between the neuron interactions governed by ion currents and stimuli at a microscopic scale and the brain as a whole at a macroscopic scale, considering reaction-diffusion systems (Somogyvári and Érdi, 2019). One of these models is the Virtual Brain. The Virtual Brain is a complex network characterized by functional and structural connections, and it is able to simulate the brain's activity at different scales (Sanz Leon et al., 2013). For example, one previous study by Falcon et al.
(2015) used the Virtual Brain to simulate brain activity in stroke patients to determine potential therapeutic interventions. The model in the Virtual Brain depends on structural connections, but after brain damage there may be altered conductivity of the ischemic region (Abboud et al., 1995), changing the connections. Thus, as an alternative, we propose using the bidomain model to simulate electrical activity in the brain. The bidomain model assumes that the electrical activity in excitable tissue is generated by the depolarization of the cell membrane between the intracellular and extracellular domains, where both domains share the same space (Sundnes et al., 2007; Henriquez, 1993). The bidomain approach was developed to describe electrical activity in cardiac tissue, but since then it has been adapted to other systems such as the brain (Szmurlo et al., 2006, 2007; Yin et al., 2013) and skeletal muscles (Pascual-Marqui, 1999; Weinstein et al., 1999). This model has the advantage of combining the overall electrical activity in an organ with microscopic nonlinear cell-model ion interactions, like the current-voltage dynamics given by cell models such as the FitzHugh-Nagumo (Izhikevich and FitzHugh, 2006) and Hodgkin-Huxley (Noble, 1962) models. In contrast with the connection-based representation of other models, activity diffusion is given by fiber directions. Each point in the mesh solves a node-wise cell model; the overall electrical activity is then computed through a diffusion tensor in a two-variable system of partial differential equations (PDEs). In addition, the simplified version of the bidomain model, the monodomain model (Potse et al., 2006), reduces the system from a two-variable PDE system to one (Sundnes et al., 2006). This simplification allows us to use the finite element method (FEM) with less CPU power to simulate a volume section of the brain (Dumont et al., 2019), including the effect of an ischemic region. Human motion, whether voluntary or involuntary, is produced by spatial and temporal patterns of muscle contractions, controlled by neural circuits in the brain and spinal cord. Upper motor neurons in the primary motor cortex control voluntary motion (Rizzolatti and Luppino, 2001). For example, in arm motion, the corresponding area of the contra-lateral hemisphere generates electrical signals to execute movement. These signals then propagate into the brain stem and spinal cord, and descend through the corticospinal tract in the motor pathways until reaching the corresponding lower motor unit (Purves et al., 2008). In post-stroke patients, the activation of the upper motor neurons in the primary motor cortex is changed, often preventing patients from moving their limbs (Shao et al., 2009). This study develops the process of simulating motion impairment in post-stroke patients by modeling a brain injury. We simulated a brain ischemic region by modifying the electrical conductivity of some mesh elements, similar to what is reported by Abboud et al. (1995), but keeping the fiber-direction organization. For validation, we compare the global simulated activity in the brain between 5 post-stroke patients and 3 healthy subjects executing a motor task, while measuring brain activity using functional near-infrared spectroscopy (fNIRS).
fNIRS is a specialized research imaging technique that uses near-infrared light to measure changes in the hemodynamic characteristics of the brain and allows registering concentration changes in deoxy-hemoglobin, oxy-hemoglobin, and total hemoglobin (HbT) (Huppert et al., 2006). For our analysis, we used the HbT signal, as it is considered to be proportional to cerebral blood volume and brain activation (Krieger et al., 2012; Jasdzewski et al., 2003).

Bidomain Formulation for Electrical Activity Propagation
The bidomain model is described by the following equations (Tung, 1978):

∇ · (M_i ∇v) + ∇ · (M_i ∇u_e) = β (C_m ∂v/∂t + I_ion(v, w)) − I_app,   (1)
∇ · (M_i ∇v) + ∇ · ((M_i + M_e) ∇u_e) = 0,   (2)

where v is the transmembrane potential, u_e is the extracellular potential, M_i is the intracellular conductivity tensor, and M_e is the extracellular conductivity tensor. M_i is a 3x3 diagonal matrix with diagonal values σ_l, σ_t, and σ_n. C_m is the transmembrane capacitance, β is the membrane surface-to-volume ratio, I_ion is the total ionic current, I_app is the external applied current source, and w represents the ionic model variables. For our simulations, we use the Fenton-Karma model with three variables (Fenton and Karma, 1998). If we assume a linear relationship between the intracellular and extracellular conductivity tensors, such as M_e = λM_i where λ is a constant scalar, we can reduce the bidomain model to the monodomain:

(λ/(1+λ)) ∇ · (M_i ∇v) = β (C_m ∂v/∂t + I_ion) − I_app,   (3)

with the boundary condition:

n · (M_i ∇v) = 0 on the boundary,   (4)

and the following equation to get the extracellular potential:

∇ · ((M_i + M_e) ∇u_e) = −∇ · (M_i ∇v).   (5)

We then scale the equations with … in order to simplify the notation.

Modeling the Brain and Stimuli
The bidomain and monodomain models depend on the fiber directions for approximating the propagation of electrical activity. To create the fiber directions in the brain, we consider the white matter and draw its fiber directions tangent to the surface, taking into account its anisotropy (the gray matter anisotropy is negligible; Hallez et al., 2007); see the fiber directions in Fig. 1. The execution of voluntary movements begins in the motor cortex, which is the outermost layer. Hence, we focus our stimuli there, which is analogous to I_app in Eq. (3). The position of the stimuli comes from the selection of the points given by hand motion and execution of motion in the LinkRBrain database (Fig. 2). LinkRBrain contains the information of thousands of peaks and coordinates from blood-oxygen-level-dependent signals, obtained by functional magnetic resonance imaging, extracted from several published neuroimaging studies (Mesmoudi et al., 2015). The stimulus is shown as the central point of the maroon volume in Fig. 3.

Simulating Functional Section
The bidomain and monodomain models are useful for analyzing tissues whose cells are coupled via gap junctions, treating them as a syncytium. A classical example of this is the heart (Sundnes et al., 2007). In contrast, in the brain, neural signals propagate from one part of the brain to another through discrete axons while neighboring axons and neurons remain at rest. In the case of motor movement, only a fraction of the brain is activated in comparison to the whole volume. Thus, to improve the simulation, we reduce the simulated volume to the sensorimotor cortex (Jack et al., 1994), and to the left-hand movement area using LinkRBrain (Mesmoudi et al., 2015) (Fig. 4).

Ischemic Region
Ischemic tissue has very low conductivity (Clay and Ferree, 2002).
Therefore, to model a focal stroke lesion, we modify the properties of certain elements of the brain mesh. In practice, we reduce the conductivity tensor values at several elements in the mesh as shown in Fig. 3, where the green volume represents the ischemic region. A similar approach is used by Swanson et al. (2002) to simulate a glioma.

Numerical Example
To test our setup, we reduce the brain system by considering I_ion and I_app as a source function f(x, t), with σ_l = σ_t = σ_n = 1 and … . Thus, the system reduces to the heat equation with a source:

∂v/∂t = ∇ · (∇v) + f(x, t),   (6)

where g(x, t) and h(x, t) are the Dirichlet and Neumann boundary conditions. To test our system we use two spheres with a mesh of 17,056 elements (Fig. 5). We divide the boundary, with the inner sphere ∂B_1 carrying Dirichlet conditions and the outer sphere ∂B_2 Neumann conditions. The analytical solution is defined for t_0 < t < t_max. The error is given by the norm || · ||_2 of the difference between the analytical solution and the calculated values at t_max. We run the system from t_0 = 1 to t_max = 2.5 with Δt = 0.01. The comparison of the calculated and the analytical isovalue solutions at t_max is shown in Fig. 6; the error at t_max is 0.00536.

Fig. 15. Comparison between the measurements in the control subject (top) and the post-stroke patients for the different tasks. Column 1: assisted movement; column 2: movement of both arms; column 3: self-assisted movement of the arm.

Non-Functional Brain Simulation Comparison
Given the same parameters, stimuli, and fiber directions (Fig. 1), we simulate the electrical activity in the brain for the healthy subject and the post-stroke patient with a focal lesion. In this case we consider the whole brain volume as a syncytium and, as expected, the electrical activity propagates beyond the expected functional areas; see Fig. 7.

Functional Brain Simulation
To model the brain's electrical propagation more realistically, we restrict propagation to a functional volume based on biological behaviour. Thus, we simulate the electrical activity but limit the volume of propagation, as explained in Section 2.3, to only the sensorimotor cortex; see Figs. 8 and 9.

Comparison with Measurements
In order to verify the above simulations, we measured brain activity in 5 post-stroke patients and 3 healthy controls with fNIRS. The representative position of the fNIRS leads is shown in Fig. 10. The patients were asked to perform 3 movement tasks: first, to raise both arms (BA)
For our simulation, we simulate only in one hemisphere Fig. 18. The simulated healthy brain presents 1 peak, and the ischemic simulation 2 peaks. Discussion The correlation between the measured hemodynamic response and simulated membrane voltage is given by a spatial map that correlates the excitatory membrane activity and the HbT signal, such as the one presented by Ma et al. (2009), which relates the balloon model of the hemodynamic response and the electrophysiological model. As expected, there is a divergence in post-stroke patient simulation and measurements with controls. The simulated results are consistent with measurements from the literature and extensive studies. For example, Mihara and Miyai (2016) noted that patients with a post-stroke condition and chronic degenerative ataxia showed higher prefrontal activation in a steady gait study. A similar association was observed by Takeda et al. (2014), who found patients with moderate hemiparesis present increased activation during affected-hand grasping or higher intra-hemispheric activity in the brain Baldassarre et al. (2016). From the visual inspection of Figs. 14-17, it is clear that the activity is concentrated for the control subject, whereas the activity expands for post-stroke patients. This is consistent with the findings of Falcon et al. (2015), who identified re-arrangement in the brain networks after a stroke. As a comparison, the HbT signal is consistent with the measurements from the dataset of Buccino et al. (2016), where a healthy subject is asked to move right and left arms and is measured with a 34 channels fNIRS; see show there is one source for the simulated healthy subject, whereas there are two for the simulated post-stroke ones. This relationship is similar to the results between the control and the patients in Figs. 14, 15. A numerical explanation obtained from simulations is that the energy builds up around the focal lesion, which in turn makes close elements to depolarize in different directions. Therefore, generating more peaks in total, even if the only change between the two systems is the conductivity decrease in some mesh elements. This relationship is more clear when comparing the number of peaks between control 1 and patient 2 Fig. 20, with the simulation Fig. 21. In both cases an extra peak appears. Conclusion In this research we propose a computational model to simulate the effects of focal brain lesions in post-stroke patients for hand motion. As mentioned before, this is an abstraction of the inner-workings of the brain, given we do not consider the subnetworks in the brain that are involved in motion and as a result, it is a reduced system that allows us to perform the simulation. This method uses a numerical simulation with FEM in an open loop control model, which simulates the brain activity with and without a focal lesion. The simulations and comparisons with real measured data show that this model could be a suitable approach to modeling human motion and blockages in the brain originated from strokes or chronic diseases. In addition, it is a step forward to build a complex electrophysiological simulator in order to state functions of brain activity and thus, develop new therapies. Implementation Several applications were created in the C# programming language. The mesh used in all the brain examples consisted of 6534 nodes. Visualization of results was implemented with the Gmsh format Geuzaine and Remacle (2009). Table 3 contains the model values. 
In addition, for I_ion we used the FitzHugh-Nagumo cell model with rates and constants from the CellML repository (Lloyd et al., 2008). Code is available at https://github.com/steppenwolf0/brainSim.

Table 3. Variable values for the simulations.
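To make the pipeline above concrete, the following minimal sketch shows an explicit-Euler monodomain step with FitzHugh-Nagumo kinetics and a reduced-conductivity "ischemic" region on a 1-D grid. It is written in Python rather than the authors' C#, uses finite differences rather than FEM, and all parameter values are placeholders (not the values from Table 3), so it should be read as an assumption-laden illustration of the modelling idea, not as the published implementation.

```python
import numpy as np

nx, dx, dt = 200, 0.1, 0.004
a, eps, gamma = 0.13, 0.02, 0.5        # placeholder FitzHugh-Nagumo constants
sigma = np.full(nx, 1.0)               # conductivity per node
sigma[90:110] = 0.01                   # low-conductivity "ischemic" elements

v = np.zeros(nx)                       # transmembrane potential
w = np.zeros(nx)                       # recovery variable
v[:10] = 1.0                           # initial stimulus at one end

for _ in range(5000):
    # 1-D diffusion with spatially varying conductivity, no-flux boundaries
    flux = 0.5 * (sigma[:-1] + sigma[1:]) * np.diff(v) / dx
    lap = np.diff(flux, prepend=0.0, append=0.0) / dx
    # FitzHugh-Nagumo kinetics (one common parameterisation)
    v += dt * (lap + v * (v - a) * (1.0 - v) - w)
    w += dt * eps * (v - gamma * w)

print("peak potential inside ischemic zone:", v[90:110].max())
```

Reducing sigma inside the marked slice mirrors the paper's approach of lowering the conductivity tensor values of mesh elements in the lesion; with sigma near zero the wave stalls at the region instead of passing through it.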
2019-11-15T17:13:37.826Z
2019-11-15T00:00:00.000
{ "year": 2020, "sha1": "d9b1f9a53eafb55baa59d60d1a28dc725fd9bcd5", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.jneumeth.2019.108464", "oa_status": "HYBRID", "pdf_src": "Elsevier", "pdf_hash": "d9b1f9a53eafb55baa59d60d1a28dc725fd9bcd5", "s2fieldsofstudy": [ "Biology", "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
223714860
pes2o/s2orc
v3-fos-license
A revision of the Early Jurassic ichthyosaur Hauffiopteryx (Reptilia: Ichthyosauria), and description of a new species from southwestern Germany
Hauffiopteryx typicus is an Early Jurassic ichthyosaur species from Europe, for which geographically partitioned morphological variation between specimens from England and Germany has been described. We provide a complete anatomical description of the German material to address this taxonomical issue. We also identify and describe a new species of Hauffiopteryx from the southwest German Basin, Hauffiopteryx altera sp. nov., differing from H. typicus in the morphology of the arrangement of cranial elements surrounding the external nares. A phylogenetic analysis recovers the German and English material referred to H. typicus as sister taxa, suggesting that these are indeed conspecific. H. typicus forms a monophyletic group with H. altera and a specimen from the Pliensbachian of Switzerland previously referred to Leptonectes tenuirostris but consistent with H. typicus. We conclude that Hauffiopteryx represents a valid genus, defined by a set of synapomorphies from both the skull and postcranium. Parsimony analysis recovers Hauffiopteryx as sister taxon to Stenopterygius + Ophthalmosauridae.

Erin E. Maxwell. Staatliches Museum für Naturkunde, Rosenstein 1, 70191 Stuttgart, Germany. erin.maxwell@smns-bw.de
Dirley Cortés. Redpath Museum, McGill University, 859 Sherbrooke St. W., Montreal QC H3A 0C4, Canada; Smithsonian Tropical Research Institute, Balboa-Ancón 0843–03092, Panamá, Panamá. dirley.cortes@mail.mcgill.ca

INTRODUCTION
The Posidonienschiefer Formation (Posidonia Shale) of southwestern Germany is considered one of the classic Mesozoic marine fossil Lagerstätten, and has yielded thousands of exceptionally preserved fish and reptile remains (Urlichs et al., 1994). The fossil marine reptiles, in particular, have garnered a great deal of research attention. The most abundant and best preserved of these are the ichthyosaurs, represented in museum collections by hundreds of specimens, including examples with fossilized soft tissues, gastric contents, and embryos preserved inside the body cavity (Hauff, 1921; Böttcher, 1989, 1990). The first ichthyosaur formally named from the Posidonia Shale was Temnodontosaurus trigonodon, named over 175 years ago (Theodori, 1843), and the most recently erected species still regarded as valid were named almost 90 years ago (Stenopterygius uniter von Huene, 1931 and Hauffiopteryx typicus [von Huene, 1931]). Ongoing research has emphasized questions pertaining to paleobiology, including intraspecific variation (e.g., Maxwell, 2012a), ontogeny (e.g., Dick and Maxwell, 2015), diet (e.g., Dick et al., 2016), and prevalence of skeletal pathologies (e.g., Pardo-Pérez et al., 2019). The discovery of new species following so many years of intensive research effort was thought to be unlikely. The monotypic genus Hauffiopteryx (Maisch, 2008) is the most recently named and least distinctive ichthyosaur genus from the Posidonienschiefer Formation. The genus has a convoluted history. Hauffiopteryx typicus (von Huene, 1931) was initially referred to Stenopterygius as S. hauffianus / S. hauffianus forma typica due to its small size and similarities in forelimb morphology (von Huene, 1931); the unusual morphology of the pelvic girdle was interpreted as an abnormality (McGowan, 1978).
Maisch (2008) recognized Hauffiopteryx as distinct from Stenopterygius based on the small, round temporal fenestra, extensive gastralia, and medially unfused pelvic girdle. He considered Hauffiopteryx to be nested within Thunnosauria, potentially forming a sister group to Stenopterygius + Ophthalmosauridae. This hypothesis was supported by subsequent cladistic analyses (Fischer et al., 2016; Moon, 2017). However, Hauffiopteryx has also been recovered in a more basal position, as sister group to Thunnosauria (Ji et al., 2016). Maisch (2008) initially identified seven specimens referable to Hauffiopteryx typicus: one from the Toarcian of the UK, five from southwestern Germany, and one from Luxembourg. We were unable to examine the latter skull personally, but consider it most likely referable to the more abundant genus Stenopterygius based on the apparent exclusion of the prefrontal by the lacrimal from the posterior edge of the external nares (Godefroit, 1994: pl. 3a). Two additional specimens from the Toarcian of the UK were later referred (Caine and Benton, 2011). The specific referral of BRLSI M 1399, part of the original referred material cited by Maisch (2008), was later questioned following CT-scanning and detailed description, as it failed to form a monophyletic group with H. typicus in phylogenetic analysis (Marek et al., 2015). This raised questions of provincialism and endemism among Early Jurassic ichthyosaurs that can only be resolved by restudy of the German material. In addition to the material from the Toarcian of Germany and the UK, a small skull from the Pliensbachian of Switzerland (NMO 26575) referred to Leptonectes tenuirostris was described as varying from the typical morphology observed in Hettangian-Sinemurian specimens of L. tenuirostris in a series of features, such as small size, a shorter rostrum, participation of the prefrontal in the external narial opening, and notching of the forefin elements (Maisch and Reisdorf, 2006). These characters are now known to differentiate Hauffiopteryx and Leptonectes, thus meriting reevaluation of the taxonomic affinities of NMO 26575. Problematically, L. tenuirostris was partially scored based on NMO 26575 in the analysis of Moon (2017), and possibly other analyses as well. Here, we provide a complete anatomical description of Hauffiopteryx from the Southwest German Basin. We re-examined the five specimens cited by Maisch (2008), and included five additional specimens from the collections of the SMNS, GPIT, and the Werkforum Dotternhausen. This study led to the recognition of a second species of Hauffiopteryx from southwestern Germany, Hauffiopteryx altera sp. nov. We also reconsider the affinities of NMO 26575 from the Early Jurassic of Switzerland, described as Leptonectes tenuirostris by Maisch and Reisdorf (2006), which shares many morphological similarities with H. typicus.

DESCRIPTION
The description is based on GPIT 1491/4, GPIT/RE/12905, MHH 9, SMNS 51552, and SMNS 80226; selected measurements can be found in Appendix 1. Variation in the shape of the posterior premaxilla and anterior lacrimal in GPIT/RE/12905 is attributed to differences in preservation and intraspecific variation between this and the other skulls; these variants are clearly noted in the description.

Premaxilla. The premaxilla is gracile and dentigerous, decreasing rapidly in depth anteriorly (Figure 2A-2C).
Posteriorly, a subnarial process extends ventral to the anterior portion of the external narial opening. A robust, extensive supranarial process is absent; however, in GPIT/RE/12905 a slender process is present (Figure 3A). It is shorter than the subnarial process, and extends less than one quarter of the length of the external narial opening. Although apparently absent in GPIT 1491/4, MHH 9, and SMNS 51552, the quality of preservation of this region in these specimens does not preclude the erosion of such a delicate bony structure through either taphonomic processes or preparation. The premaxillary fossa is deepest at its midpoint, becoming rapidly shallower posteriorly. The premaxillary rostrum is short in all diagnostic material, with prenarial segment ratios of ~0.49 (GPIT/RE/12905), 0.48 (MHH 9), ~0.42 (SMNS 80226), and ~0.37 (GPIT 1491/4). The rostral tips (both premaxillary and mandibular) of SMNS 51552, often cited as an example of Hauffiopteryx with a long rostrum (Maisch, 2008), are of problematic authenticity; as preserved, the prenarial segment ratio is 0.46.

Abbreviations used in the skull figures: an, angular; bo, basioccipital; de, dentary; en, external narial opening; ex, exoccipital; f, frontal; hc, hyoid corpus; hy, hyoid element; j, jugal; la, lacrimal; lj, lower jaw; mx, maxilla; n, nasal; op, opisthotic; pa, parietal; pal, palatine; pf, prefrontal; pl, palate; pm, premaxilla; po, postorbital; pof, postfrontal; pt, pterygoid; Q, quadrate; qj, quadratojugal; s, stapes; sa, surangular; scr, sclerotic ring; sp, splenial; sq, squamosal; st, supratemporal; utf, supratemporal fenestra.

Maxilla. The maxilla in H. typicus is short, and does not extend more than a few millimeters anterior to the external narial opening in lateral view (Figures 2A-2C, 3A). The dentigerous portion continues posterior to the external narial opening. The maxilla forms the ventral edge of the external narial opening in most specimens, but is excluded in GPIT/RE/12905 by contact between the premaxilla and lacrimal. An ascending process posterior to the narial opening is absent. The orbital process of the maxilla is moderately elongated, but does not reach the orbital mid-point and is shorter than the suborbital process of the lacrimal. Anteriorly, the maxilla is dorsally overlapped by the premaxilla. Posteriorly, it contacts the jugal and lacrimal. Nasal. The nasals are exposed dorsally over approximately two thirds of the antorbital rostrum (Figure 2A, 2C). The presence of an internasal foramen could not be evaluated in the extremely compressed material, but an internasal depression was certainly present medial to the external nares. The nasal forms the dorsal edge of the external narial opening and contacts the prefrontal at the posterodorsal edge of the external narial opening. On the dorsal skull roof, the nasals contact the prefrontal laterally and the frontal posteriorly. A small, superficial contact between the nasal and the postfrontal, overlapping the prefrontal, may occur posterolaterally (Figure 2C). Lacrimal. The lacrimal is small, with a very short subnarial process in most skulls (Figure 2A-2C). It usually participates in only the posteroventral corner of the external narial opening.
However, in GPIT/RE/12905 the subnarial process of the lacrimal is slender but elongate, extending over more than half the total length of the external narial opening to contact the subnarial process of the premaxilla (Figure 3A); we consider this to be intraspecific variation, as documented in other Early Jurassic ichthyosaur species (see discussion). The suborbital process of the lacrimal articulates ventrally with the jugal and is relatively elongate. The suborbital process is separated from the lateral portion of the element by a ridge, forming part of the circumorbital area. The ascending process contacts the prefrontal via an interdigitating suture, and does not contact the nasal. Jugal. The jugal is a relatively slender, deeply bowed element. It is overlapped by the postorbital posterolaterally, excluding the ventral postorbital from the posterior edge of the orbit (Figures 2A-2C, 3A). The dorsal process of the posterior jugal is narrow and mediolaterally compressed, lacking pronounced anteroposterior expansion. Anteriorly, the jugal narrows, and slots between the lacrimal and maxilla. It does not reach the external narial opening. Prefrontal. The prefrontal forms a large narial process, and participates extensively in the posterior portion of the external narial opening (Figure 2A-2C). It forms approximately 50% of the dorsal orbital margin. The prefrontal extends medially to contact the frontal, but may be overlapped by processes of the postfrontal and the nasal. Frontal. The frontals are relatively narrow and slightly convex, tapering anteriorly. They form the external anterior and lateral margins of the parietal foramen (Figure 2A, 2C). Parietal. The parietal forms approximately half of the anterior edge of the supratemporal fenestra (Figure 2A, 2C). The medial process of the supratemporal excludes the parietal entirely from the posteromedial edge of the fenestra, best seen in GPIT/RE/12905. The interparietal suture is straight. A weak ridge separates the posterior lamina from the dorsal surface of the parietal. A weakly developed ridge lateral to the parietal foramen runs anterolaterally to posteromedially. Postfrontal. The postfrontal forms the anterolateral margin of the supratemporal fenestra. Postorbital. The postorbital is a narrow semi-lunate element (Figure 2A-2D). It contacts the squamosal posteriorly, and the supratemporal dorsally, with a very small anterodorsal contact with the postfrontal. Ventrally, the postorbital is excluded from the orbital margin in all specimens by the ascending process of the jugal. Supratemporal. The supratemporal is large, forming a posterior lamella with radiating ridges, giving it a scalloped appearance in external view (Figure 2A, 2C). These ridges correspond to the anteromedial and anterolateral processes of the anterior ramus, as well as to small medial and lateral processes of the medial ramus. A well-developed ventral ramus is present. It is not significantly longer than the medial or lateral processes. Squamosal. The squamosal is well-preserved only in GPIT/RE/12905 (Figure 3). It is exposed on the posterolateral skull and is roughly triangular in shape, with a descending process posterior to the postorbital. Dorsally it contacts the supratemporal. Quadratojugal. The ventral portion of the quadratojugal is preserved in the lectotype (Figure 2A). In external view, it appears to be excluded from contact with the jugal by the postorbital, but this could be due to taphonomic displacement. The ventral edge is thickened for articulation with the quadrate.
Quadrate. The posterolateral portion of the quadrate is exposed in both MHH 9 and in GPIT 1491/4 (Figure 2A, 2C). The quadrate is laterally concave. The ventral articular end is thickened, and a small occipital lamella is present. Braincase Basioccipital. The basioccipital is not well-preserved in the lectotype. It is best-exposed in SMNS 80226 and GPIT/RE/12905 in ventral view (Figure 2D-2E). It has an extensive extracondylar area (ECA), with the portion ventral to the condyle being wider than the lateral ECA. The ventral extracondylar area is anteromedially concave, with small lateral tubers. The condyle itself bears a notochordal pit and is clearly offset from the extracondylar area. Opisthotic. The right opisthotic is preserved in SMNS 80226 (Figure 2D-2E), in what is interpreted as anterior view. The medial edge bears a notch interpreted as the vagus foramen. The paroccipital process is short. The anterior surface of the opisthotic ventral to the paroccipital process is concave, clearly setting off the process. Stapes. An element thought to be the left stapes is preserved in SMNS 80226 (Figure 2D). The lateral shaft is slender, and the quadrate articulation is not expanded. The ventral surface of the shaft is more concave than the dorsal surface. The medial head is robust. Exoccipitals. Exoccipitals are preserved in SMNS 51552 as well as in the lectotype (Figure 2A-2B). These are short, squat elements forming the lateral edges of the foramen magnum. The medial edge is concave. The number of foramina cannot be assessed with confidence. Prootic, supraoccipital, parabasisphenoid. None of these elements could be identified with confidence in the specimens determined to be diagnostic at the specific level. Palate The palate of H. typicus is best-exposed in SMNS 51552 (Figure 2B; see also Maisch, 1998a: fig. 3). Pterygoid. The pterygoid is preserved in ventral view. The quadrate ramus is preserved posteriorly and is offset from the palatal ramus by a pronounced constriction. The lateral process of the quadrate ramus is more slender than the medial process. The palatal ramus bears a robust postpalatine process. Palatine. The palatine is present in SMNS 51552; however, due to poor preparation little can be said about either its anatomy or position. Vomer. The vomer could not be identified in the specimens determined to be diagnostic at the specific level. Mandible A small overbite is consistently present (Figure 2A, 2C-2D, 2G); this does not appear to become more pronounced in larger specimens. The lower jaw is bowed ventral to the orbit. The dentaries are fine and pointed anteriorly, and make up approximately 45% of the length of the mandibular symphysis (Figure 2D, 2G). In lateral view, the dentary fossa is well-developed and deep, but disappears towards the anteriormost tip of the mandible (Figure 2A-2B). Aulacodont tooth implantation is seen throughout the dentary, with the alveolar groove narrowing significantly towards the anterior tip (Figure 2C). The splenials form the medial surface of the mandible and a substantial portion of the mandibular symphysis. They are most robust at the posteriormost symphysis, thinning rapidly anteriorly in ventral view (Figure 2B, 2D). Posteriorly, they also attenuate in ventral view, but more slowly. At their posterior ends, the splenials are concealed in ventral view in articulated specimens by the medial curvature of the angular, which forms a shelf. The posterior lateral lower jaw is made up of the surangular and angular.
The lateral exposure of the former is much greater than that of the latter. The angular forms the ventral edge of the lower jaw, extending from the retroarticular process posteriorly to approximately the level of the external narial opening. The surangular extends further anteriorly than the angular. Ventral to the orbit, the surangular forms a well-developed surangular fossa. A strong ridge and ventral depression are developed on the lateral surangular contribution to the retroarticular process, presumably indicating a point of muscle attachment. Prearticular. The prearticular is preserved in SMNS 80226, forming a fine splint of bone in ventral view posterior to and medially overlapped by the splenial. Articular. The articular is not well-exposed in any of the diagnostic material. Dentition. The teeth are small and acutely pointed, with the crowns typically showing some lingual curvature. The enamel is smooth (Figure 2F, 2G). Crown height decreases notably towards the anterior tips of the jaws: in MHH 9, for instance, mid-premaxillary tooth crowns are approximately 4.5 mm in height, whereas the anteriormost dentary crowns are approximately 2 mm in height. The tooth roots show strong apicobasal ridges indicative of plicidentine; large quantities of cellular cementum appear to be absent. Sclerotic ring. The sclerotic ring in GPIT 1491/4 preserves approximately 16-17 ossicles (poor preservation makes an exact count impossible); 17 ossicles are also preserved in SMNS 51552 and GPIT/RE/12905. In all three specimens, the sclerotic ring appears to fill the orbit, although taphonomic breakage and distortion are moderate to severe in all skulls (least so in GPIT/RE/12905). Hyoid apparatus. Paired ceratobranchials are ossified but often compressed, making it difficult to assess morphology. Shape differs widely between specimens, with the ceratobranchials being laterally concave in SMNS 51552 (Figure 2B) and laterally convex in SMNS 80226 (Figure 2D). An ossified hyoid corpus is present in SMNS 51552 (Figure 2B). Postcranium Vertebral column. 45-46 presacral vertebrae, 81 preflexural vertebrae, ~3 apical vertebrae, and >55 postflexural vertebrae are preserved in the lectotype, becoming very small towards the end of the caudal fin (Figure 1A). There is some uncertainty in the presacral count because the last vertebra with unfused di- and parapophyses is posterior to the position of the pelvic girdle. The atlas and axis are preserved in SMNS 80226 and are exposed in lateral view (Figure 4B). They are completely fused, with a suture persisting on the lateroventral surface. The remaining presacral centra are round in articular view and are deeply amphicoelous. Cervical centra are small, and dimensions increase posteriorly. Separate di- and parapophyses are present in all dorsal vertebrae. Apophyses begin to shift to a mid-centrum position anterior to the tail bend. Apophyses are absent in the apical centra and all postflexural vertebrae. The vertebral column is gently curved along its length, with the points of inflection located in the anterior dorsal region and mid-preflexural region. In the lectotype, the axial neural spine is much broader than more posterior neural spines at its base, but tapers dorsally (Figure 4A). A substantial facet for the atlantal neural spine is preserved on its anterolateral surface. The atlantal spine itself is not preserved.
The anterior dorsal neural spines are rectangular, but become broader and rounded in the posterior two-thirds of the vertebral column (Figures 1A, 4D). Posterior to the pelvic girdle, neural spines become T-shaped, and lose articulation with each other. "Extra-neural processes" (sensu McGowan, 1992), as previously described in Stenopterygius, are present ventral to the posterior end of the dorsal fin and also on several more anterior vertebrae. Although positioned dorsal to the ossified neural spines, these lack any visible connection with the underlying skeletal tissues. Ribs and gastralia. The cervical ribs are somewhat bicapitate, but the tuberculum and capitulum are less deeply separated than in the dorsal ribs (e.g., Figure 4A, 4C). Cervical and dorsal ribs are deeply grooved on the anterior and posterior surfaces. Caudal ribs are small, holocephalous, and paddle-like. Posterior to the apical vertebrae, ribs are absent - the presence of extensive soft-tissue preservation suggests that this is not an artefact. Chevrons are absent throughout the caudal skeleton. Gastralia are relatively robust. They are long and thin, tapering at either end. There are two gastralial elements per side. In SMNS 80226, gastralia from the right and left sides are fused, forming a single dorso-ventrally flattened and slightly anteroposteriorly expanded boomerang-shaped bone along the midline (Figure 4C). However, this specimen also shows signs of callus development on many gastralia (Pardo-Pérez et al., 2019); it is unclear whether the formation of a median element may have been caused by trauma rather than representing normal morphology. Body outline. The dorsal and caudal fins are preserved in the lectotype (Figure 1A). The dorsal fin is triangular in outline, located in the posterior half of the dorsal region and terminating anterior to the pelvic girdle. The caudal fin is semi-lunate in outline, and is symmetrical, with a very high aspect ratio and a strongly concave posterior edge. However, the degree to which preparation may have enhanced this shape is uncertain. The vertebral column extends into the distalmost tip of the ventral lobe.

Abbreviations used in the pectoral girdle and forelimb figures (SMNS 51552; scale bars in cm in parts A, C): 2-4, distal carpals; cl, clavicle; co (r/l), right and left coracoids; H, humerus; i, intermedium; ic, interclavicle; pi, pisiform; R, radius; re, radiale; sc, scapula; U, ulna; ue, ulnare; v, metacarpal V.

Pectoral girdle and forelimb. The interclavicle is T-shaped, with the transverse bar being longer than the posterior stem. The posterior edge of the transverse bar is medially broader than at its lateral ends (Figure 5A). The clavicles are wing-like, tapering laterally, and medially broad (Figure 5A, 5C). The coracoids are large, anteroposteriorly longer than proximodistally wide (Figure 5A, 5C). A small anterior notch is present. The scapular facet is much smaller than the glenoid contribution. A medial facet for the scapula appears to be absent. The scapula itself is divided into a medial blade and a lateral shaft (Figure 5A, 5B). The medial blade is anteriorly expanded, but a well-developed acromion process is absent. The coracoid facet is smaller than the glenoid contribution. The humerus is relatively elongate, with the distal end greatly expanded relative to the proximal end (Figure 5). A well-developed humeral head is present only in the lectotype (Figure 5B); in all of the referred specimens the proximal humerus is rather flattened.
The lectotype is the only specimen in which the humerus is exposed in dorsal view; unfortunately, it is also the most strongly compressed specimen. The dorsal process is small, if present. The deltopectoral crest is robust, located close to the anterior edge and extending over more than half of the total length of the humerus (Figure 5A, 5C). The distal humerus bears a flattened, anteriorly directed facet on its leading edge ("tubercle" of Moon, 2017). Two distal facets are present, for articulation with the radius and ulna. The ulnar facet is larger than the radial facet. All referred specimens show a similar forelimb configuration, in which the radius, radiale, distal carpal 2, and metacarpal II bear notches on their anterior surfaces (Figure 5B, 5C). In most specimens, the first phalanx of digit II is also notched anteriorly. The radius articulates posteriorly with the ulna, and distally with the radiale and intermedium. The ulna articulates distally with the intermedium and ulnare. A small pisiform posterior to the ulnare is present only in the lectotype (Figure 5B). The paddle is four digits wide; in the lectotype, a series of very small accessory ossicles lies posterior to digit V. The posterior two digits (IV and V) are the longest, with ~10 phalanges. Proximal limb elements are polygonal and tightly packed, whereas more distal elements are rounded and widely spaced. Based on the preserved soft tissue outline, the flipper was elongate and tapering, with digit II positioned close to the anterior edge. Pelvic girdle and hind limb. The ilium is not well-preserved in any of the referred specimens. In SMNS 80226, the element inferred to be the ilium is straight, shorter than the ischium and pubis (Figure 6A). The pubis is slender, being straight medially and curving posterolaterally to fuse with the ischium. Its medial end appears to be medially flattened. The ischium is the largest element in the pelvic girdle. It is thickened laterally and is triangular in cross-section, with a flat anterior surface. The medial end is also slightly anteroposteriorly expanded, and is gently rounded. The femur is wider distally than proximally. The proximal end is strongly convex. The dorsal process is pronounced and more rounded (Figure 6B-6D). The ventral process is narrower, and extends further distally than the dorsal process (Figure 6A). There are two distal facets, for articulation with the tibia and fibula. The tibial facet is smaller than the fibular facet. The hind fins are short and rounded (Figure 6B-6E). There are three elements in the zeugopodial row: the tibia, fibula, and a large pisiform that appears to give rise to a postaxial digit (in the sense that all elements in the digit are ossified and decrease in size from proximal to distal). The tibia is smaller than the fibula. The tibia and the next three distal elements are notched anteriorly. The astragalus is positioned between the distal tibia and fibula. There are four digits in the metatarsal row, assuming that the posterior digit discussed above can be homologized with the 'true' digits. As discussed by Maisch (2008), the digits converge distally and so are of equal length. There are only approximately 4-5 phalanges per digit. Hauffiopteryx altera sp. nov. Holotype. Holotype and only referred specimen FWD-129; a skull and partial postcranium preserved in a calcareous concretion (Figures 7-8; see Table 1 for selected measurements).
The bones on the right side of the skull remain in articulation; those on the left side of the skull have suffered some taphonomic displacement. Etymology. The specific epithet is derived from the Latin altera, which means different from/other, and refers to the anatomical divergence from the type species, H. typicus. Geographical distribution. Dotternhausen-Dormettingen, Baden-Württemberg, Germany. Stratigraphic distribution. Posidonienschiefer Formation, lower Toarcian, serpentinum Zone, exaratum Subzone (SWGB: bed εII 4/5). Diagnosis. Hauffiopteryx altera differs from H. typicus in: deepest lateral exposure of the maxilla located posterior rather than ventral to the external narial opening; broad, triangular lacrimal excluded from the external narial opening by the descending process of the prefrontal; nasals extending further posteriorly on dorsal skull roof; dorsal exposure of prefrontal substantially greater than that of postfrontal. Skull Premaxilla. The premaxilla has been broken and deformed anteriorly (Figure 7). Posteriorly, it is narrow and straight with a deep premaxillary fossa. A supranarial process is absent. The posterior tip of the premaxilla overlaps the maxilla posteriorly, extending well past the midpoint of the external narial opening. Maxilla. The maxilla has a small contact with the ventral edge of the external narial opening, and extends less than one narial length anterior to the nares (Figure 7C-7D). It is deeper posteriorly than anteriorly. Its deepest point in lateral view contacts the anteroventral tip of the prefrontal at the posterior edge of the narial opening (Figure 7C). The ventral edge of the maxilla is long and straight and posteriorly extends only to the anterior edge of the orbit on the right-hand side, being overlapped dorsally by the jugal; on the left, the suborbital process is exposed in lateral view and is very long, extending to the orbital midpoint.

FIGURE 8. Hauffiopteryx altera, sp. nov., holotype FWD-129. A, left dorsal temporal region, illustrating the palmate morphology of the supratemporal; B, premaxillary and dentary teeth illustrating the thin, smooth enamel; scale bar is in mm; C, anterior neural arches, note the size and shape differentiation between the neural spines of the atlas, axis, and more posterior vertebrae; D, proximal right forelimb in dorsal view; E, pectoral girdle in external view. Abbreviations: cl, clavicle; co (r)/co (l), right and left coracoids; H, humerus; i, intermedium; ic, interclavicle; ns1, neural spine (atlas); ns2, neural spine (axis); ns3, neural spine (C3); pa, parietal; pof, postfrontal; R, radius; re, radiale; sc, scapula; st, supratemporal; st.a, medial process of the anterior ramus of the supratemporal; st.m, medial ramus of the supratemporal; U, ulna; ue, ulnare; utf, supratemporal fenestra.

Nasal and external narial opening. The external narial opening is 25 mm long (right-hand side). It is simple in shape, with a small posterodorsal notch (Figure 7C). In lateral view, the dorsal surface of the nasal has a weak dorsal inflection anterior to the orbit. The ventral edge of the nasal is deflected ventrally at the level of the external narial opening, and the nasal forms a small ventrolateral shelf dorsal to the posterior narial opening. The nasals are relatively narrow anteriorly, but become broader at the level of the orbit, and are relatively limited in posterior extent. In dorsal view, the right and left nasals diverge posteriorly, creating a V-shaped contact with the frontals.
An internasal depression is present, and an internasal foramen is possibly present (Figure 7G-7H); the dorsal surface of each nasal is strongly convex. The nasal contacts the frontal posteriorly and prefrontal laterally. Lacrimal. The lacrimal is excluded from the external narial opening in lateral view by the prefrontal (Figure 7C-7D). The lacrimal is a blocky element, narrowing anteriorly with a scalene triangle shape in lateral view; it contacts the prefrontal dorsally and the jugal ventrally. The suborbital process is short. The lacrimal does not contact the maxilla but converges with the jugal and the prefrontal anterior to the orbit and posterior to the narial opening. The lacrimal is anteroposteriorly longest at the level of the sclerotic aperture. The anterior tip of the lacrimal appears to be laterally deflected relative to the posterior part of the element, and the portion making up the circumorbital area is more strongly curved than the ventral edge of the element. Jugal. The jugal is deeply bowed ventral to the orbit. It tapers anteriorly between the lacrimal and the maxilla, extending past the anterior edge of the circumorbital area. Posteriorly, the jugal is overlapped by the postorbital. The posterior ramus of the jugal is offset approximately 120º from the ventral one, being wider and shorter with a flat external surface. The dorsal margin is evenly curved, whereas the ventral one forms a small projection at the posteroventral orbit (Figure 7C-7D). Prefrontal. The prefrontal is a large element forming the anterodorsal orbital rim, with an anterior process reaching the external narial opening and forming its entire posterior edge. The prefrontal contacts the anterior tip of the jugal (Figure 7C-7D). The body of the prefrontal is massive, being as broad in dorsal view as the nasal (Figure 7G-7H). In lateral view, the anterior portion forming the orbital rim tapers ventrally and is considerably thinner than the posterior portion. Posterodorsally, the prefrontal contacts the postfrontal, parietal, and frontal (medially). Frontal. The frontals are small and elongate, and are dorsally convex. They enclose the parietal foramen medially (Figure 7G-7H). The frontal is overlapped by the nasal (anteriorly), prefrontal (laterally), and parietal (posteriorly). Parietal. The parietal has an irregular external surface (Figures 7G-7H, 8A). There is a depression between the medial supratemporal fenestra and interparietal suture, possibly corresponding to the area lateral to the sagittal eminence. The parietal shelf is large and oriented dorsally. The parietal ridge is weakly developed. Posterolaterally, the parietal has a small process, which contacts the supratemporal posteriorly. The parietal contacts the prefrontal (anteriorly), postfrontal (laterally), and frontal (anterolaterally). Postfrontal. The postfrontal is anteroposteriorly elongate and has an anteromedial depression in dorsal view (Figures 7G-7H, 8A). Its posteromedial edge encloses the supratemporal fenestra and is overlapped by the supratemporal. The postfrontal becomes narrower anteriorly until it forms a point that overlaps the prefrontal anteriorly. In lateral view, the postfrontal is slightly arched and forms the dorsolateral edge of the posterior skull. The postfrontal extends as far anteriorly as the anterior parietal but is narrower than the latter, and extends anteriorly to the orbital midpoint (Figure 7C). Postorbital. The postorbital forms the posterior edge of the orbit (Figure 7C).
A small dorsal lamella appears to be present. In lateral view, the postorbital is wider than the jugal, but still narrow in relation to its height. The ventral postorbital overlaps the jugal, and is excluded from the posteroventral edge of the orbit by the latter. The dorsal postorbital is overlapped by the postfrontal. The dorsal and ventral ends of the postorbital are tapered, forming a semilunate shape. Supratemporal. The supratemporal is posteriorly broad and slightly convex dorsally (Figures 7G-7H, 8A). A long anterior ramus excludes the postfrontal from the lateral edge of the supratemporal fenestra. The supratemporal is anteroposteriorly shorter than mediolaterally broad. A descending ramus is present but not clearly exposed. The supratemporal bears four ridges radiating from the posterolateral corner (Figure 8A); we use the term 'palmate' to refer to this morphology. Quadratojugal. The quadratojugal is small, triangular, and articulates along its entire anterior edge with the postorbital (Figure 7C). Orbit. The sclerotic ring consists of 16 plates. It is 88 mm long and 79 mm high; the sclerotic aperture is 38.5 mm high and 35.5 mm long and is elliptical in shape. The orbit is 96 mm long and 85 mm high and is teardrop-shaped in lateral view when the circumorbital area is also considered. The orbit is bordered by the prefrontal, postfrontal, postorbital, jugal, and lacrimal (Figure 7C-7D). Quadrate and Palate There is an element covered by the postorbital in right lateral view, which may be the quadrate based on position and shape. The palate is not exposed. Mandible Dentary. The dentary is a robust element that tapers anteriorly (Figure 7C-7D). The anterior tip is missing. The posterior end forms a thin point, articulating ventrally with the surangular at the level of the anterior orbit. The dentary becomes progressively deeper ventral to the external narial opening until it makes up the entire lateral surface of the lower jaw. The dentary is slightly convex in lateral view along the entire occlusal edge. The dentary fossa is present along its lateral surface. Angular. The angular is short and does not extend as far anteriorly as the surangular in lateral view (Figure 7D). In ventral view, the posterior end of the angular is slightly expanded with rounded edges, while the anterior end tapers to a sharp point, and disappears from ventral view at approximately the level of the maxilla, just posterior to the symphysis (Figure 7E-7F). The ventral edge of the angular is not strongly curved in lateral view. However, the lower jaw appears slightly bowed ventral to the orbit (Figure 7C-7D). Splenial. Posteriorly, the splenial contacts the medial surface of the angular, becoming broader in ventral view as the angular thins (Figure 7E-7F). The splenial is thin and convex in ventral view. The posterior end is elongated and pointed. The anterior end is not preserved. The broadest point of the splenial in ventral view occurs at the posterior end of the mandibular symphysis. Surangular. The surangular is an elongate and large element that forms a significant part of the posterior end of the lower jaw (Figure 7C-7D). It is strongly curved ventral to the orbit. The posterior end is broad and ovate in lateral view, with a sinusoidal dorsal margin, formed in part by a small coronoid process. The anterior contact with the dentary is elongate, almost one-third of the length of the surangular in lateral view. The surangular fossa is deep.
Anteriorly, the surangular is excluded in ventral view by the splenial and the dentary at approximately the same point as the mandibular symphysis (Figure 7E-7F). Prearticular. The prearticular is only preserved on the right side. It is laterally compressed and elongate, positioned medial to the angular and lateral to the posteriormost splenial. Dentition. The teeth are small and acutely pointed, with smooth, thin enamel lacking macroscopic ornamentation (Figure 8B). There is no pronounced constriction between the crown and root. The roots are not well-enough preserved to assess the presence or absence of apicobasal ridges; however, the root is smooth immediately basal to the crown. Crown height is 4 mm at the midpoint of the dentary. The teeth extend as far posteriorly as the anterior orbital margin. Hyoid apparatus. The left ceratobranchial is preserved (Figure 7E-7F). An element in the position of the right hyoid is of questionable identity. Postcranial Axial Skeleton The atlas-axis complex is not visible. The third cervical centrum (C3) is the first vertebral centrum exposed on the right side. The right and left sides of the atlantal neural spine are unfused, and the atlantal neural spine is much smaller and shorter than the axial neural spine (Figure 8C). The axial neural spine is twice as wide as that of C3 and more posterior dorsal neural spines, with an anterolateral facet for articulation with the atlantal spine (Figure 7D). Postzygapophyses appear to be paired until at least C7. Anterior dorsal centra bear paired apophyses. Vertebrae are taphonomically compressed. The anteriormost dorsal ribs are bicapitate, but the tuberculum and capitulum are less clearly separated than in more posterior ribs (Figure 7B). Ribs bear an anterodorsal groove; this groove does not extend to the distal half of the rib. Some partial gastralia are preserved immediately posterior to the forelimb (Figure 7A). Pectoral Girdle Clavicle. The clavicle is wing-like, broadening medially and concave posteriorly with a thin dorsal process. It is twisted ventrally from the middle of the axis in such a way that the lateral ramus is offset approximately 120º from the medial one (Figure 8E). The medial ramus is more robust, broad, and convex than the lateral ramus. Whereas the medial ramus is slightly curved, the lateral one is flattened on both its dorsal and ventral surfaces. Scapula. The anterior blade is concave in external view. An anterior expansion is present (Figure 8E). The coracoid facet is smaller than the glenoid facet. The posterior shaft of the scapula is dorsally straight and mediolaterally compressed. The posterior shaft is curved ventrally and slightly flared at its distal end. The ventral edge is proximodistally shorter than the dorsal edge. Coracoid. The coracoid is saddle-shaped in external view, with the glenoid and scapular facets thickened and projecting ventrally. The coracoid is more or less equidimensional and rounded, and is almost as long as the median stem of the interclavicle (Figure 8E). An anterior notch is present but the posterior notch is reduced or absent. The glenoid facet is slightly rugose and much larger than the scapular facet. The intercoracoid facet is extremely reduced. Interclavicle. The interclavicle is 'T'-shaped. The posterior stem is approximately as long as the transverse bar. The transverse bar is relatively wide and convex in external view (Figure 8E). In anterior view, the transverse bar is broad and tapers laterally towards the tips. Forefin Humerus.
The distal end of the humerus is wider than the proximal end, but not significantly so (Table 1; Figure 8D), and a mid-shaft constriction is present. The proximal anterior surface of the humerus is flattened. The dorsal process lies towards the anterior edge and is relatively small, extending less than half the length of the humeral shaft. In anterior view, it can be seen that the deltopectoral crest extends further distally than the dorsal process. The ulnar facet is longer than the radial facet. The anterior distal edge is slightly squared off, forming a prominent leading edge facet. The ulnar facet is straighter in dorsal view, whereas the radial facet is more concave. Distal limb elements. The ulna and radius are in broad contact. The radius is notched anteriorly, and bears distal articular facets for the radiale and intermedium; the facet for the former is longer than that of the latter. The ulna is pentagonal in outline, with distal facets for the intermedium and ulnare. The pisiform is absent in FWD-129. The intermedium is pentagonal in outline, and forms a large contact with distal carpal 3. The ulnare is very small and proximodistally short, smaller than the intermedium. Hauffiopteryx sp. (Figure 9) SMNS 81367, SMNS 80225, SMNS 81962, and SMNS 81965 are referable to Hauffiopteryx sp., but the diagnostic portions of the skull are either not exposed or are damaged (Figure 9). SMNS 81367 (Figure 9A) is a small individual with the skull preserved in dorsolateral and the postcranium preserved in ventrolateral views. The specimen is clearly referable to Hauffiopteryx based on the small, round supratemporal fenestrae, the palmate morphology of the supratemporal, and the location of the parietal foramen anterior to the anterior edge of the supratemporal fenestra. Extensive reconstruction in the narial region prevents an accurate assessment of morphology and thus specific referral. This specimen was figured as H. typicus by Pardo-Pérez et al. (2019). SMNS 80225 (Figure 9B) is a poorly preserved specimen prepared in ventrolateral view. The skull has been partially reconstructed. This specimen is referred to Hauffiopteryx sp. based on the number and posterior extent of the gastralia. Fragmentary pelvic elements suggest the absence of a robust ischiopubis, also consistent with Hauffiopteryx. This specimen was considered to be referable to H. typicus by Maisch (2008). SMNS 81962 (Figure 9C) is preserved in ventral view. The anterior rostrum has been reconstructed. This specimen is referred to Hauffiopteryx based on the reniform outline of the quadrate in lateral view, the palmate supratemporal, and the hind limb with digits converging distally. SMNS 81965 (Figure 9D) is also preserved in ventral view; the rostrum has been reconstructed. This specimen is considered to be referable to Hauffiopteryx based on the extensive development of gastralia, and the medially unfused ischium and pubis. An ossified hyoid corpus is present, a feature thus far only observed in Hauffiopteryx among ichthyosaurs (Motani et al., 2013). SMNS 81965 was considered to be referable to H. typicus by Maisch (2008). PHYLOGENETIC ANALYSIS In order to test the relationship between material referred to Hauffiopteryx typicus from the UK and Germany, Hauffiopteryx altera sp. nov., NMO 26575, and Leptonectes tenuirostris, we scored these as five separate OTUs in a modified version of the Moon (2017) Ichthyosauria matrix.
The Hauffiopteryx material from England was scored from the literature (Caine and Benton, 2011; Marek et al., 2015); NMO 26575 was scored based on personal observation (EM) and Maisch and Reisdorf (2006). In addition, we added the taxa Protoichthyosaurus prostaxalis and P. applebyi. Additional character and scoring changes are outlined in Appendices 2-3; the nexus file is provided in Appendix 4. We analyzed the data using maximum parsimony implemented in TNT v. 1.5 (Goloboff and Catalano, 2016) with a New Technology search (100 iterations of the ratchet algorithm, minimum length found 100 times) (Figure 10). Following parsimony analysis, the resulting MPTs were analyzed using the Iter-PCR script (Pol and Escapa, 2009), and 12 taxa causing instability were pruned from the strict consensus tree (a minimal sketch of this pruning-and-consensus step is given below). Among neoichthyosaurian taxa, this resulted in Malawania anachronus, Temnodontosaurus eurycephalus, Pervushovisaurus bannovkensis, P. campylodon, and Simbirskiasaurus birjukovi being pruned (see Appendix 2 for the complete list). A Bayesian analysis (Figure 11) was executed in MrBayes. Both Bayesian and parsimony analyses support referral of NMO 26575 and FWD-129 to the genus Hauffiopteryx, distinct from all three species of Leptonectes (Figures 10-11). Moreover, material referred to Hauffiopteryx typicus from Germany and England formed a monophyletic group. We consider scoring differences between these H. typicus OTUs to be caused by differences in taphonomy (three-dimensional preservation vs. strong lateral compression), variation in the quality of preservation and preparation, and ontogeny, with the British material thought to represent juvenile individuals. Hauffiopteryx formed a sister group to Stenopterygius + Ophthalmosauridae in parsimony analysis (Figure 10); its position relative to Stenopterygius spp. and Leptonectes spp. was unresolved in Bayesian analysis (Figure 11). Comparison of Hauffiopteryx to Other Early Jurassic Ichthyosaurs Hauffiopteryx differs from all coeval Toarcian ichthyosaur genera in the extensive participation of the prefrontal in the external narial opening. The prefrontal is excluded in Temnodontosaurus trigonodon, Stenopterygius (Maisch, 1998a), Suevoleviathan (Maisch, 2001), and Eurhinosaurus (SMNS 18648), as well as in Leptonectes (McGowan and Motani, 2003). The nasals in Hauffiopteryx are more extensively exposed dorsally anterior to the external narial opening than the premaxillae, unlike in Stenopterygius (e.g., Maxwell, 2012b). As in Eurhinosaurus (e.g., SMNS 18648), the upper temporal fenestrae are small and circular, the supratemporal is broad and palmate in morphology, and the parietal shelf is horizontally oriented. As in Temnodontosaurus trigonodon (Fraas, 1913), the splenial plays a major role in the mandibular symphysis. The teeth of Hauffiopteryx are slender and conical, with smooth enamel, unlike in Suevoleviathan, in which the enamel is rugose in texture (Maxwell, 2018). In the postcranium, the gastralia in Hauffiopteryx extend posteriorly at least as far as the thirty-fifth vertebra. This is unique among Toarcian genera: gastralia are unknown in Temnodontosaurus trigonodon and Suevoleviathan; in Eurhinosaurus and Stenopterygius gastralia are present in the anterior to mid-dorsal region (Buchholtz, 2001).
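As flagged above, the pruning-and-consensus step applied to the MPTs can be sketched in a few lines. The sketch below is illustrative only and is not the workflow actually used in this study (TNT and the Iter-PCR script handle this natively); it uses the DendroPy library, and the file name and the pruned taxon list are hypothetical placeholders.

# Minimal sketch: prune unstable ("wildcard") taxa from a set of
# most-parsimonious trees, then compute a strict consensus.
# The file name and taxon labels are hypothetical placeholders.
import dendropy

# Load the MPTs exported by the parsimony search (e.g., from TNT).
mpts = dendropy.TreeList.get(path="mpts.nex", schema="nexus")

# Taxa flagged as unstable, e.g., by an Iter-PCR-style procedure.
unstable = ["Malawania anachronus", "Temnodontosaurus eurycephalus"]
for tree in mpts:
    tree.prune_taxa_with_labels(unstable)

# Strict consensus: retain only clades present in every input tree.
consensus = mpts.consensus(min_freq=1.0)
print(consensus.as_string(schema="newick"))

Setting min_freq to 1.0 yields the strict consensus; relaxing it to 0.5 would instead yield a majority-rule consensus.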
The rib tuberculum and capitulum in the dorsal region are widely separated in Hauffiopteryx, as in Stenopterygius but unlike in Temnodontosaurus trigonodon, Eurhinosaurus, and Suevoleviathan. The anterior scapula is expanded and a foramen between the humerus, radius, and ulna is absent, unlike in Temnodontosaurus trigonodon (e.g., SMNS 17560). The elements of the anterior digit in both the fore- and hind limbs bear notches, and the proximal limb elements are angular and tightly interlocking, unlike in Suevoleviathan (Maisch, 1998b). The ischium and the pubis are thin and styloidal, unlike in Temnodontosaurus, Suevoleviathan, and Eurhinosaurus, but are widely separated medially, unlike in Stenopterygius in which they are fused both medially and laterally. Parsimony analysis resolved the position of Hauffiopteryx as sister to Stenopterygius + Ophthalmosauridae (Figure 10). Although poor resolution outside this group increases ambiguity in character reconstruction, this clade is supported by a narrow, tapering anterior jugal, the basioccipital contributing to the floor of the foramen magnum in posterior view, a supraoccipital with parallel exoccipital processes, a caudal region approximately equal in length to the presacral vertebral column, one complete postaxial digit supported by an ulnare with two well-developed distal articular facets, polygonal metacarpals and phalanges, and lateral fusion of the ischiopubis. The strongly grooved ribs with broadly separated tuberculum and capitulum in the mid-dorsal region are also different from all leptonectids. In the skull, greater similarities to leptonectids are present - especially in the dermatocranium. These include the thin rostrum with elongated external narial opening, the small upper temporal fenestra, the enlarged, palmate supratemporal, the horizontally oriented parietal shelf, and the slight overbite. However, unexpected similarities with Temnodontosaurus (e.g., SMNS 50000) are also identified in the skull: the anteroposterior orientation and large size of the extracondylar area (ECA) of the basioccipital, rather than the ECA being dorsoventrally oriented, such as that of Eurhinosaurus (SMNS 18648) and to a lesser extent Leptonectes tenuirostris and Wahlisaurus (Lomax and Massare, 2012; Lomax, 2016), and the extensive participation of the splenial in the mandibular symphysis (see Fraas, 1913). These differences support an independent derivation of a longirostrine morphotype in Leptonectidae and Hauffiopteryx, as argued by Moon (2017), although unlike Moon we recover T. azerguensis as a leptonectid in the Bayesian analysis (Figure 11). Maisch (2008) was of the opinion that Hauffiopteryx typicus was characterized by certain unusual ontogenetic changes, namely: 1) strong positive allometry of the antorbital rostrum relative to the skull with increasing body size, and 2) strong positive allometry of the skull relative to presacral length. Unfortunately, the rostrum appears to have been reconstructed/modified in most of the available specimens of H. typicus, with the exception of GPIT 1491/4, GPIT/RE/12905, SMNS 80226, MHH 9, and BRLSI M1399 (see Table 2 for raw measurements; a sketch of the slope estimation follows).
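To make the slope estimates reported next concrete, a minimal sketch of an allometric (log-log) regression follows; the measurement values below are hypothetical and do not come from Table 2. Fitting log(y) against log(x) by least squares yields a slope b such that b > 1 indicates positive allometry of y relative to x, b < 1 negative allometry, and b = 1 isometry.

# Minimal sketch of an allometric (log-log) regression.
# Paired measurements (mm) from several individuals; the values
# below are hypothetical, not the Table 2 data.
import numpy as np

lower_jaw = np.array([250.0, 310.0, 420.0, 560.0, 640.0])
rostrum = np.array([95.0, 125.0, 180.0, 255.0, 300.0])

# Fit log10(rostrum) = b * log10(lower_jaw) + a by least squares.
b, a = np.polyfit(np.log10(lower_jaw), np.log10(rostrum), 1)

print(f"allometric slope b = {b:.2f}")  # b > 1: positive allometry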
Based on log-transformed data, the antorbital rostrum appears to have grown with weak positive allometry relative to the lower jaw (slope = 1.15), and the lower jaw grew with strong negative allometry relative to the presacral vertebral column, the humerus, and, to a lesser extent, the femur (as reported for, e.g., Stenopterygius quadriscissus and Ichthyosaurus communis: McGowan, 1973). Thus, at present there is some support for the view that the antorbital rostrum of Hauffiopteryx typicus showed a different allometric growth pattern than Ichthyosaurus and Stenopterygius, although there is enough variation present that this pattern is not clearly apparent in the ratios of individual specimens. Ontogenetic Status of FWD-129 and Generic Referral of Hauffiopteryx altera FWD-129 shows multiple features consistent with Hauffiopteryx from both the Posidonienschiefer Formation as well as from coeval localities in England, but differs from Stenopterygius, the most abundant and widespread ichthyosaur genus during the Toarcian. Characters supporting the referral of FWD-129 to Hauffiopteryx include participation of the prefrontal in the posterior external narial opening, small, round upper temporal fenestrae situated posterior to the pineal foramen, short nasals that do not contact the parietals, and a forefin with tightly interlocking, polygonal phalanges with a notch on the elements of the leading edge digit (Maisch, 2008; Caine and Benton, 2011). This combination of features is consistent with Hauffiopteryx among all ichthyosaurian genera, and this referral is also supported by both parsimony and Bayesian phylogenetic analysis (Figures 10-11). However, FWD-129 differs from H. typicus in some aspects of cranial anatomy. Maisch (2008) considered individuals of Hauffiopteryx typicus of less than 2.5 m total length to be juveniles. Thus, using Maisch's criterion, GPIT 1491/4, with a total length of 193 cm (Table 2), would be a juvenile; however, we suspect that the specimen is osteologically mature based on the convex shape of the proximal humerus and closure of the sutures between phalanges (Johnson, 1977). Caine and Benton (2011) estimated total length in BRLSI M1399 at ~150 cm (skull length = 322 mm; humerus = 47 mm [from photo: Caine and Benton, 2011: text-fig. 9]); this specimen has been generally interpreted as a juvenile (Caine and Benton, 2011; Marek et al., 2015). FWD-129 was initially considered to be a juvenile of Eurhinosaurus longirostris, due to its small size, large orbits, and small temporal fenestra (Jäger, 2005). FWD-129 is approximately equal in size to BRLSI M1399, with a lower jaw at least 280 mm in length and a humerus 46 mm in length (Table 2). The proximal articular surface of the humerus is flat, and the sutures between the limb elements remain open, also strongly supporting a juvenile attribution for this specimen. Relative proportions of the skull to the postcranium (McGowan, 1973), proportions within the skull (especially pertaining to the orbit/sclerotic ring: Fernández et al., 2005), relative tooth size (Dick and Maxwell, 2015), and notching of the anterior limb elements (Maxwell et al., 2014) are known to be ontogenetically influenced in the closely related genus Stenopterygius. However, the characteristics separating FWD-129 from specimens referred to H. typicus, Stenopterygius, and Eurhinosaurus are not expected to vary ontogenetically.
This is especially apparent given that FWD-129 is of a similar ontogenetic stage to BRLSI M1399 based on both absolute size and limb ossification, but shows numerous differences in cranial morphology. Comparison of Hauffiopteryx altera with Hauffiopteryx typicus The lectotype of H. typicus (GPIT 1491/4) from Holzmaden is severely crushed, whereas FWD-129 is a lightly compressed three-dimensional skeleton. Nevertheless, it is possible to identify ways in which the two skulls differ. In H. typicus, the orbit is very large and round, creating a steep inflection in the nasal in lateral view. In FWD-129, the orbit is more elongate, and in lateral view the nasals are less steeply inflected anterior to the orbit than in H. typicus. SMNS 51552 and MHH 9 (Figure 2B, 2C) are much more similar in shape to the lectotype of H. typicus (GPIT 1491/4; Figure 2A) than to FWD-129 (Figure 7C). Unlike in GPIT 1491/4, MHH 9, and SMNS 51552, the lacrimal of FWD-129 does not separate the prefrontal from the jugal, and is excluded from the external narial opening. The lacrimal of FWD-129 is more robust than in specimens referred to H. typicus. These differences result in a relatively larger distance between the anterior edge of the orbit and the narial opening in FWD-129 than in H. typicus. The maxilla is overlapped dorsally by the lacrimal over almost one third of its posterodorsal surface, and the maxilla forms the ventral edge of much of the external nares in the lectotype, MHH 9, and SMNS 51552. In FWD-129, it is the jugal rather than the lacrimal that overlaps the maxilla posteriorly, and the maxilla is almost entirely excluded from the external narial opening in lateral view. BRLSI M1399 has a shallower, more elongated maxilla than FWD-129, but the maxilla of FWD-129 has a greater contact with the ventral surface of the premaxilla than that of BRLSI M1399, with the premaxilla extending well past the midpoint of the external nares in the former specimen. The maxilla of FWD-129 is slightly deeper posteriorly, whereas in the English specimen the maxilla is rather similar in depth along its entire length. The maxilla of FWD-129 contacts the lacrimal and prefrontal posterodorsally, whereas in BRLSI M1399 the maxilla contacts only the lacrimal. The lacrimal of BRLSI M1399 participates posteriorly in the external narial opening (Marek et al., 2015: fig. 1; Caine and Benton, 2011: fig. 9); however, in FWD-129 it is excluded from the external narial opening by the prefrontal. In dorsal view, the prefrontal in FWD-129 is anteroventrally narrower than that of BRLSI M1399. Also, the narial ramus is more anteroposteriorly oblique in relation to the oval orbit. The jugal extends much further anteriorly in FWD-129 than in BRLSI M1399 (Caine and Benton, 2011: fig. 12; Marek et al., 2015: fig. 1). The orbit is also rounder in BRLSI M1399. These features are very similar to the set of characters differentiating FWD-129 from H. typicus, consistent with our interpretation of BRLSI M1399 as referable to H. typicus. Marek et al. (2015) cited the absence of root striations, the absence of the narial process of the maxilla, the presence of the temporal process of the frontal, wing-like basipterygoid processes, the absence of a ventral notch in the extracondylar area of the basioccipital, and the rod-like ischium/ischiopubis as differing between Hauffiopteryx and BRLSI M1399.
However, some of these conclusions are the result of character scoring errors: BRLSI M1399 has a ventral notch in the extracondylar area of the basioccipital (Marek et al., 2015: fig. 10a), and lacks participation of the frontal in the supratemporal fenestra (i.e., processus temporalis of the frontal absent, Marek et al., 2015: fig. 4a). The only character listed that truly differs between H. typicus and BRLSI M1399 is the absence of root striations in the latter; however, Marek et al. (2015) state that the teeth are quite poorly preserved, which may affect character scoring. For that reason, we consider BRLSI M1399 to be referable to H. typicus. Hauffiopteryx typicus from England and Germany formed a monophyletic group in our analyses, to the exclusion of NMO 26575 and Hauffiopteryx altera sp. nov. (Figures 10-11). Comparison of Hauffiopteryx typicus to NMO 26575 Maisch and Reisdorf (2006) referred a small ichthyosaur from the late Pliensbachian of Switzerland to the much older Rhaetian-Sinemurian species Leptonectes tenuirostris (Figure 12). However, this specimen shares greater similarity with the stratigraphically less far-removed genus Hauffiopteryx than with Leptonectes tenuirostris based on its small size, cranial proportions, significant participation of the prefrontal in the posterodorsal edge of the external nares, and angular, closely packed limb elements with multiple notches along the anterior digit of the forelimb. NMO 26575 is distinct from Hauffiopteryx altera in that the jugal is quite short, and the prefrontal does not exclude the lacrimal from the posterior edge of the external nares. NMO 26575 is more similar to H. typicus in this and most other respects, and we consider it to be referable to H. typicus, resulting in a range extension of at least 2 million years for both the genus and the species. One of the key differences between most skulls referred to H. typicus and NMO 26575 is the proximity between the anterior lacrimal and the subnarial process of the premaxilla in the latter, excluding the maxilla from participation in the external narial opening. However, the morphology observed in GPIT/RE/12905, in which the anterior lacrimal contacts the subnarial process of the premaxilla to exclude the maxilla from the narial opening, suggests that the anterior process of the lacrimal is variable in length in H. typicus and that NMO 26575 is within the range of variation of this species. Intraspecific variation in this part of the skull has already been noted in other Early Jurassic ichthyosaurs, e.g., in Protoichthyosaurus prostaxalis and Stenopterygius triscissus (compare Godefroit, 1994: fig. 19; Caine and Benton, 2011: text-fig. 3). Palaeoecology The palaeoecology of the morphologically similar genus Stenopterygius is relatively well understood, and includes dietary information for individuals over a wide range of body sizes (Dick et al., 2016), complete ontogenetic series, and embryonic material (Böttcher, 1990). In contrast, Hauffiopteryx is much less abundant. The material listed in the current contribution represents the complete sample of Hauffiopteryx known from publicly accessible collections. Preserved individuals fall into a much narrower size range than Stenopterygius (Table 2 vs. Maxwell, 2012b: supplementary table 1) and embryos are unknown, limiting our understanding of ontogenetic variation in this genus. No recognizable prey items are preserved as gastric contents.
Thus, how Hauffiopteryx may functionally differ from Stenopterygius remains extremely speculative. Although referral of NMO 26575 to Hauffiopteryx typicus extends the stratigraphic range of the genus and species into the upper Pliensbachian, all finds from the Posidonienschiefer Formation are concentrated in a very narrow stratigraphic range, from the uppermost tenuicostatum Zone to the lowermost serpentinum Zone. This time period corresponds to the interval from the onset to the peak of the early Toarcian Oceanic Anoxic Event, an event that profoundly influenced the composition and abundance of both vertebrate and invertebrate faunas in the southwest German Basin (e.g., Hauff, 1921; Maxwell and Vincent, 2016). It is thus possible that Hauffiopteryx had a particularly specialized diet or hunting strategy, and that its sudden appearance in - and subsequent disappearance from - the basin was related to changes in prey abundance or habitat (e.g., bottom-water anoxia forcing fish and belemnites into a narrow zone near the sea surface: Ullmann et al., 2014). CONCLUSIONS We found no evidence that the English and German material referred to the genus Hauffiopteryx represents separate species; however, we do find evidence that two species of Hauffiopteryx were present in the Early Jurassic of the Southwest German Basin: H. typicus and H. altera sp. nov. These two taxa are differentiated primarily based on characters pertaining to the lacrimal, prefrontal, and jugal. In addition, we refer a specimen previously considered to be Leptonectes tenuirostris to Hauffiopteryx typicus, extending the range of the genus into the Pliensbachian. This result is supported by phylogenetic analysis, which recovers Hauffiopteryx as a sister group to Stenopterygius + Ophthalmosauridae. Hauffiopteryx represents a valid genus, defined by a suite of synapomorphies involving both the skull and postcranium. However, the functional and paleoecological significance of the characters differentiating Hauffiopteryx from the superficially similar genus Stenopterygius is unclear. ACKNOWLEDGEMENTS We acknowledge an NSERC Discovery Grant to H. Larsson (McGill University, Canada) for funding and support, and thank J. Pardo-Pérez (SMNS, U. Magallanes, Chile) and C. Gascó Martín (SMNS) for discussion and logistical support. The suggestions of A. Wolniewicz, N. Zverkov, and an anonymous reviewer improved the manuscript. APPENDIX 2. Discussion of character and taxon scoring changes from past analyses. Our phylogenetic analysis was based on the matrix developed by Moon (2017), with subsequent modifications. In addition to rescoring Hauffiopteryx, we made the following revisions to characters and scoring: Character 50: The position of the parietal foramen is more complex than initially captured in this character. We divided state (1) into two separate states. Rescoring affected only Ichthyosaurus. Character 50 Parietals - location of the parietal foramen: (0) contact between the left and right parietals present anteriorly, frontal excluded from the parietal foramen; (1) on the frontoparietal suture; (2) entirely surrounded by the frontals (modified from Motani 1999c, character 19). Character 89: The transverse flange of the pterygoid (character 88) is absent in most ichthyosaurs, which are scored as "poorly developed" in the matrix of Moon (2017: character 88). We therefore rescore character 89 as '-' in these cases. Characters 165, 166: In the analysis of Moon (2017), characters 165-166 are duplicated as currently organized (i.e., a sacrum is present by definition only if differentiated sacral ribs are present).
Moreover, no taxa in the sample have a sacrum as defined by character 165 (ribs sutured to ilium). For that reason, we restructure characters 165-166 as follows:

Character 165: Sacrum (0) present (at least one morphologically differentiated sacral rib can be observed), (1) absent (no truly sacral ribs are present) (Motani, 1999 character 104 [part]).

Character 166: Sacral ribs (0) two with distal expansion, (1) one with distal expansion, (2) present and morphologically distinct [see character 165] but no distal expansion present. This character is treated as ordered.

Character 210: As scored by Moon (2017), this character is uninformative. This character was rescored based on pers. observ. of specimens at the SMNS and from the NHMUK digital collections interface, and such a notch separating the radial and ulnar facets of the distal humerus was observed in T. trigonodon, T. platyodon, and S. integer. Because such a notch may exist even in the presence of a humerus-intermedium contact, character 210 was scored for all taxa.

Character 221: A small interosseous foramen is present between the radius and ulna in T. trigonodon, T. platyodon (see McGowan, 1979), and Suevoleviathan. This character was rescored accordingly.

Character 240: In basal ichthyopterygians, metacarpal V is much larger than distal carpal IV, whereas in more derived forms these elements are subequal or metacarpal V may be lost entirely (Maisch and Matzke 2000b, character 100). In the character list of Moon (2017), the character states were incorrectly transcribed. We have edited this error; the character-taxon matrix was not affected.

Mixosaurus kuhnschneyderi: Rescored for character 118.

Guizhouichthyosaurus wolonggangense: The following characters were rescored based on reinterpretation of forelimb homologies in the original description, in which digit I is absent (and therefore metacarpal V is present): 217, 219, 229, 233, 236, 237, 240, 243, 247; these were also rescored based on the description of the type specimen.

Temnodontosaurus trigonodon: Rescored for character 219, since a postaxial manual accessory digit is absent.

Temnodontosaurus crassimanus: Rescored for characters 47-48, 54, 73, 204, 207, 209, 214, 219, 226, 230, 240, 243 based on Melmore (1930). Additionally, we assume that Melmore (1930)

Stenopterygius uniter: Characters 5, 9, 20, 152, and 241 rescored based on pers. observ.; character 82 rescored based on pers. observ.

Stenopterygius aaleniensis: Scoring of character 241 changed to '?' as the metacarpal row is not preserved in this taxon; character 5 (subnarial process) also rescored based on pers. observ.

Chacaicosaurus cayi: Characters 233, 247-248 rescored based on Fernández (1994).

Ophthalmosaurus icenicus: Character 41 rescored as present, based on the separation of the postfrontal from the parietal in dorsal view by a process of the frontal.

Caypullisaurus bonapartei: Scoring for characters 247-248 changed, as manual metacarpal V is present in this taxon (Fernández, 2001).

Arthropterygius chrisorum: Character 177 changed to '?' as the caudal region is not well-enough preserved to evaluate this character.

Paraophthalmosaurus kabanovi: Scoring for character 247 changed to '?' because the limb is incompletely preserved in this specimen.

Muiscasaurus catheti: Rescored as '?' for characters 11-12, since the maxilla-lacrimal suture is unclear.

Sveltonectes insolitus: Rescored for character 227.

APPENDIX 3. Character list, copied directly from Moon (2017).
Text in red corresponds to changes to the original character list that were introduced in a previous study; text in blue indicates characters that were modified for the current study (characters 50, 165-166 only). See Appendix 2 for discussion and references. Characters 100, 107, 153, 166, and 219 are treated as ordered.

Skull

Character 1 Snout - extremely slender premaxillary segment, < ¼ the maximum lateral width of the posterior of the skull: (0) absent (1) present (Motani 1999c, character 34). This is modified from Motani's (1999c) character 34 by defining a specific boundary of 'slender'. Motani's original character separates several taxa, especially his 'Eurhinosauria'. To maintain this distinction, this character is defined as the width of the snout where the premaxillae exclude the nasals from dorsal view, i.e., the nasals' anteriormost dorsal exposure, compared to the maximum width of the post-narial skull.

Character 2 Premaxilla - supranarial process: (0) present (1) (1) present, relatively broad (Druckenmiller and Maxwell 2010, character 15).

Character 7 Maxilla - reduction: (0) absent, ≥ ½ of the length of the snout (1) present, < ½ the length of the snout (modified from Sander et al. 2011, character 106). The reduction of the maxilla is defined more specifically than Sander et al.'s (2011) character 106. While this is defined relative to the length of the snout as a whole, the dorsoventral exposure and contribution should also be considered (see character 9).

Character 8 Maxilla - bearing teeth: (0) present (1) absent. See character 7.
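The note that character 166 "is treated as ordered" has a concrete computational meaning in parsimony analysis. The following minimal Python sketch, using hypothetical taxon names and scores rather than values from the actual matrix, illustrates the difference in step cost between ordered and unordered multistate characters and how a rescoring is applied.

```python
# Illustrative sketch: parsimony step costs for ordered vs. unordered
# multistate characters in a character-taxon matrix. Taxon names and
# scores are hypothetical placeholders, not values from the matrix.

def step_cost(state_a: int, state_b: int, ordered: bool) -> int:
    """Steps implied by a change between two states of one character."""
    if state_a == state_b:
        return 0
    # Ordered (additive) characters charge one step per intervening state;
    # unordered characters charge a single step for any change.
    return abs(state_a - state_b) if ordered else 1

# Character 166 (sacral ribs) is treated as ordered, so a change from
# state 0 (two ribs with distal expansion) to state 2 (distinct ribs
# without expansion) costs two steps instead of one.
assert step_cost(0, 2, ordered=True) == 2
assert step_cost(0, 2, ordered=False) == 1

# Rescoring a taxon overwrites its cell; '?' marks missing data.
matrix = {"Taxon_A": {165: "0", 166: "0"}, "Taxon_B": {165: "1", 166: "?"}}
matrix["Taxon_A"][166] = "2"
print(matrix)
```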
Research and Innovation in Organ Donation: Recommendations From an International Consensus Forum

Background. This report provides recommendations from the Research and Innovation domain as part of the International Donation and Transplantation Legislative and Policy Forum (hereafter the Forum) to provide expert guidance on the structure of an ideal organ and tissue donation and transplantation system. The recommendations focus on deceased donation research and are intended for clinicians, investigators, decision-makers, and patient, family, and donor (PFD) partners involved in the field. Methods. We identified topics impacting donation research through consensus using nominal group technique. Members performed narrative reviews and synthesized current knowledge on each topic, which included academic articles, policy documents, and gray literature. Using the nominal group technique, committee members discussed significant findings, which provided evidence for our recommendations. The Forum's scientific committee then vetted recommendations. Results. We developed 16 recommendations in 3 key areas to provide stakeholders guidance in developing a robust deceased donor research framework. These include PFD and public involvement in research; donor, surrogate, and recipient consent within a research ethics framework; and data management. We highlight the importance of PFD and public partner involvement in research, we define the minimum ethical requirements for the protection of donors and recipients of both target and nontarget organ recipients, and we recommend the creation of a centrally administered donor research oversight committee, a single specialist institutional review board, and a research oversight body to facilitate coordination and ethical oversight of organ donor intervention research. Conclusions. Our recommendations provide a roadmap for developing and implementing an ethical deceased donation research framework that continually builds public trust. Although these recommendations can be applied to jurisdictions developing or reforming their organ and tissue donation and transplantation system, stakeholders are encouraged to collaborate and respond to their specific jurisdictional needs related to organ and tissue shortages.

Rapid advances achieved through basic and clinical research have made solid organ transplantation the treatment of choice for many end-stage organ diseases. Historically, organ transplantation research has focused on improving organ recipients' transplantation processes and posttransplant health outcomes. More recently, attention has turned toward exploring donation processes that may improve the quality and quantity of transplantable organs. [1][2][3][4][5] Research in these fields has ranged from increasing the number of organs recovered from each donor through donor management research to understanding the family experience during the surrogate consent to the donation process. Although deceased donor research is a novel field of investigation with great promise, it poses unique ethical, legal, regulatory, and logistical challenges that have slowed its development. Given these challenges, this report proposes action-guiding recommendations for a successful donor research framework. Furthermore, this report stems from work done in the context of the International Donation and Transplantation Legislative and Policy Forum (the Forum) that aimed to create an expert consensus description of what an ideal OTDT system should include (Weiss et al). 6
Therefore, this report should be understood in the context of the Forum's broader mandate, and as 1 of 7 domains providing expert guidance and recommendations.

SCOPE AND UNDERLYING PRINCIPLES

Although this report recognizes the importance of research in tissue and living organ donation, we focus our recommendations on deceased organ donation processes due to the novelty of this emerging field. This report's recommendations align with key principles identified by the United States National Academies of Sciences, Engineering, and Medicine as essential ethical underpinnings for any framework enabling donation research: autonomy, beneficence, fairness, and trustworthiness. 7 The Academies' report represents the most authoritative and comprehensive inquiry into the ethics of interventional donor research to date, and the Research and Innovation Committee (RIC) drew on its findings during deliberation. For this report, the term donation process research encompasses research addressing donation pathways and practices that may or may not already be part of routine clinical practice. Aims of donation process research include improving organ viability, enhancing the likelihood of successful donation, and offering insights to improve deceased donation processes and outcomes. Research in this field may include, but is not restricted to, research on donor identification, consent practices, donor management, interventions performed on deceased donors in situ or organs ex situ, antemortem interventions performed on living donors in the context of controlled donation after circulatory determination of death (cDCDD), and interventions performed on an organ or donor for research even when organs are ultimately not transplanted.

MATERIALS AND METHODS

This report's recommendations stem from the work of the RIC. The RIC represents 1 of 7 domains in the context of the International Donation and Transplantation Legislative and Policy Forum (Weiss et al), 6 which provides expert guidance for legislators, regulators, and other system stakeholders when creating or reforming OTDT legislation and policy to improve system performance. This report's recommendations focus on deceased donation research and are intended for clinicians, investigators, decision-makers, and patient, family, and donor (PFD) partners involved in the field. Although the National Academies focused narrowly on research with neurologically deceased donors, its ethical framework is consistent with the World Health Organization's Guiding principles on human cell, tissue, and organ donation, 1 and its report on interventional donor research, although specific to a US jurisdictional context, is the most authoritative to date. In developing recommendations, the RIC employed the nominal group technique (NGT) to identify and prioritize topics and produce recommendations by consensus. 8 NGT is a structured approach facilitating problem identification, solution generation, and decision-making, 9 and has been used extensively in healthcare contexts to identify priorities, support guideline development, 10,11 and explore perspectives among health professionals, caregivers, and the lay public. 12,13 To ensure efficient uptake of NGT methods, the RIC lead (Dieudé) received individual training from a firm specializing in healthcare guideline development (STA HealthCare Communications).
STA advised on NGT's implementation to enhance balanced group discussion, promote efficient identification of challenges, produce a prioritized list of topics, and develop a Group Process Agenda. The RIC lead chaired 4 virtual group meetings to generate and prioritize topics meriting recommendations. The 12 RIC members brought overlapping expertise in deceased donation research, as the committee includes organ donation researchers (n = 6), pediatric intensivists (n = 3), donation physicians (n = 2), transplant surgeons (n = 2), patient partners and patient engagement leads (n = 2), bioethics experts (n = 2), a lawyer (n = 1), and a tissue expert (n = 1). Members and affiliations are listed in Appendix I, SDC, http://links.lww.com/TXD/A500. We employed a narrative review approach to identify the current literature concerning each topic. Members were assigned a topic based on their expertise and used academic databases (PubMed and Google Scholar) and snowballing to identify literature on their respective topics. Members also used their expertise to identify authoritative and internationally relevant sources for their respective topics, including policy documents and gray literature. Results from the narrative reviews informed committee members and led to an evidence-based rationale for each recommendation. NGT concepts were applied during 9 virtual committee meetings over 5 mo to reach a consensus on recommendations. Significantly, PFD partners were involved in all RIC discussions, and their input was explicitly solicited through all stages of the Forum. Draft recommendations were completed and submitted to the Forum's scientific committee for review before presentation at the Forum's hybrid in-person/virtual event in Montreal, Canada, on October 14-15, 2021. The RIC lead presented all recommendations to the Forum and solicited feedback from attendees representing all stakeholder groups. This feedback was discussed among members at 2 follow-up meetings and incorporated into the recommendations outlined below. For more information, refer to the Legislative and Policy Methods paper (Weiss et al). 6

RESULTS

This report provides 16 recommendations pertaining to 3 key areas integral to a successful donor research framework: (1) patient, donor, family, and public involvement in research; (2) consent and enabling research ethics frameworks; and (3) data management.

PATIENT AND PUBLIC ENGAGEMENT/INVOLVEMENT IN RESEARCH

Recommendation 1: We recommend patient, family, donor, and public engagement/involvement in research based on the principles of inclusiveness, support, mutual respect, and cobuilding.

Building a community between PFD partners and researchers in the healthcare system is foundational to meaningful and impactful patient engagement and partnerships. 14,15 Given the increasing recognition in the literature regarding the importance of involving patients and the public in health research, [16][17][18] donation process research would benefit from integrated PFD partners in research teams. In recent years, research has outlined the tangible value of patient engagement. This includes, but is not limited to, improved patient outcomes, improved system outcomes, meaningful improvements throughout the research process (ie, study design, recruitment materials, stakeholder buy-in, data collection tools, privacy protection and data security plan, and knowledge dissemination), increased relevance to patient needs, and greater cost-efficiency.
[16][17][18] Despite these positive indicators, many barriers to meaningful patient engagement in health research remain. 15,[19][20][21] There is also limited agreement on how and when to engage patients, how best to incorporate patients' expertise throughout the research process, and how to harmonize disease- or jurisdiction-specific guidelines. 16,18,22,23 Although there are currently few published reports documenting the value of PFD involvement in deceased donation research, there are several reasons to believe it would be valuable in this domain. For instance, a lack of donors is the leading reason for organ supply shortfalls, and family veto is a significant reason for the loss of potential donors. Understanding family attitudes and motivations during consent discussions is critical, and PFD input into the design and execution of those studies would likely improve their quality and generalizability. In a recent Canadian study, clinical researchers integrated a PFD partner into their research team. 24 The PFD partner, a mother of a deceased organ donor, brought her experience and insights to the team, allowing the research team to improve its study design, recruitment materials, stakeholder buy-in, and data collection tools. Although there are concerns with engaging patients across the research spectrum, especially in basic science research, 25 this study indicates that, although challenging, patient engagement in deceased organ donation research is feasible and improves the quality of research. Our domain recognizes the importance of patient engagement and involvement in organ donation research. The concepts of engagement and involvement refer to active and meaningful collaboration of organ donors, recipients, their families, and the public in the governance, priority setting, initiation and conduct of research, and knowledge translation. 26,27 When integrating a PFD partner into the research team, we recommend project leads consider the following strategies:

• Dedicating sufficient funding in a project's budget to remunerate PFD partners for their roles and shared expertise. Patient engagement should be considered a fundamental part of research and built into long-term strategic planning.
• Developing a tailored approach to matching patient expertise with the specific goals of each stage of the research process.
• Establishing a patient engagement plan outlining the project's core values and clearly defining, at the outset of the research process, the scope of patient engagement, time commitments, and roles.
• Developing an evaluation framework with sufficient metrics to measure near-, intermediate-, and long-term outcomes of engaging patients across health research activities.

CONSENT AND ETHICAL FRAMEWORK

Donation process research poses unique ethical and logistical challenges that must be addressed before research is undertaken. For example, donation process research involves an unusually large number of stakeholders. These include organ donors, families of the deceased, organ recipients, patients on transplant waitlists, donation and transplantation professionals, and organ donation organizations. Given that donation research may impact the distribution of scarce healthcare resources, organ allocation, and public trust, society at large may also be affected.
Difficulties in identifying these stakeholders in advance of research getting underway, combined with the geographical dispersion of those who may be impacted by a study, pose considerable challenges regarding research ethics committee oversight and consent for research. Surrogate consent is almost always required for donor-based studies because donors are typically unconscious or already deceased according to neurological criteria at the time of consent. Surrogate consent, however, is complicated given the emotional distress of surrogates burdened by the patient's illness, injury, or death and the many difficult decisions that must be made in a short period. [28][29][30] Further challenges arise concerning the identification of research participants (whose status may change throughout the course of the research), 31 risk assessment, and, in the context of cDCDD, protections for vulnerable participants. Donor research has an unfamiliar structure that upends the familiar architecture of prototypical trials, wherein the patient-participant is usually both the unit of intervention and the unit of outcome assessment. 7 Donor intervention research challenges ethical frameworks because interventions are often performed on the donor (or their organs), whereas outcomes are often assessed in recipients. 7 This complicates the harm-benefit analysis, the determination of appropriate risk thresholds, and the identification of research participants. 31 This framework is further challenged when the donor is deceased. Despite the hurdles to donation research, several well-designed trials assessing the efficacy of interventions performed on neurologically deceased donors have demonstrated that these studies can be feasible and impactful regarding transplant outcomes and organ utilization, 32 and guidance on the ethical conduct of interventional research has emerged from multiple sources. 7,28,29,33,34 Existing guidance provides valuable points of reference for investigators undertaking trials of deceased donor interventions. Their dissemination has clarified requirements for the ethical conduct of interventional research in several jurisdictions. Despite their value, these recommendations focus narrowly on deceased donor intervention research and say little about other forms of donation process research that may be seen in the future (eg, trials of antemortem interventions in cDCDD). Moreover, these guidelines are geared toward national legislative and regulatory contexts and are not straightforwardly applicable in other jurisdictions. Hence, below we present recommendations broadly applicable across jurisdictions. Given local regulatory frameworks, readers should note that not all recommendations will be feasible or applicable to all jurisdictions.

Donor Consent

Recommendation 2: When research is conducted on donors or their organs following the determination of death, we recommend that researchers and research ethics committees ensure that deceased donors, despite not being research participants, are treated in a manner that demonstrates respect for the dignity of the donor and their next of kin and maintains public trust in deceased donation systems.

Identifying research participants is essential to applying appropriate research protections. 31 However, the unique features of deceased donor intervention research complicate the identification of research participants.
Neurologically deceased donors do not meet the criteria for research participant status in some jurisdictions and internationally accepted research ethics frameworks because they are not living persons. 35 Although the status of deceased donors has been the subject of some controversy, 36-38 a growing number of commentators argue that deceased donors are not research participants. Research participants are subject to risks, burdens, and physical harm, and hence require standard protections, including risk minimization, favorable risk-benefit ratios, and research ethics committee oversight. However, the same cannot necessarily be said of deceased donors, from whom vital organs can be legally and ethically removed without concern for inflicting welfare harm upon a person. Consequently, depending on regulatory requirements, research protections commonly afforded to living donor participants are not necessarily owed to deceased donors. The distinction between deceased and living persons is therefore relevant from a legal and ethical perspective. Although our committee was in broad agreement on this point, qualitative research demonstrates discomfort, disagreement, and confusion among healthcare professionals regarding whether research protections should be afforded to deceased donors. 39,40 Respect for the deceased body and the deceased donor's prior expressed wishes requires an acknowledgment that although deceased donors need not necessarily be considered research participants, this does not imply they are owed no protection at all. As an example, an intervention studied in the context of an RCT that prevents organ recovery due to a complication may be considered contrary to the wishes of the previously living donor, justifying the need for protection. Importantly, however, these protections need not be identical to the traditional research protections afforded living participants. Our committee recommends that deceased donors be treated by researchers and research ethics committees in a manner that maintains public trust and respect for the dignity of the deceased and their surrogate decision-makers. Respect for the dignity and decisional autonomy of the deceased person requires that the use of their organs be consistent with their wishes. This can be achieved by ensuring that the previously living person, upon whose body or organs the interventions will be performed, or the surrogate decision-maker, has consented to postmortem donor research. A caveat to this recommendation stems from a recognition that different ethical and regulatory considerations may apply in specific cultural or jurisdictional contexts. For example, the decision to consent to donor research and postmortem intervention may not be made by 1 individual in jurisdictions with a greater emphasis on shared medical decision-making. Moreover, ensuring that the authorization-consent process for research is consistent with the legal and regulatory framework for organ donation in each jurisdiction is critical. Classification of who is a participant, and who is afforded rights, varies by the laws of specific jurisdictions. For example, the recommendation that deceased donors should not be considered research participants may not apply in jurisdictions that do not recognize neurological death (eg, China) 41 or in those that stipulate that deceased persons involved in research are research participants.
In jurisdictions wherein neurological death is not legally recognized, or in which no distinction is made between the living and the dead, clinically deceased donors may be considered research participants, and research participant protections should be afforded in those instances.

Recommendation 3: With the exception of deidentified retrospective research, we recommend first-person* or surrogate** authorization or consent be required for deceased donation research to proceed.

Respect for persons demands that the use of deceased donors' organs and tissue be consistent with the previously living person's wishes. Although organ donation, transplantation, and organ donation research have overlapping goals, they are distinct activities. Even when first-person or surrogate consent to organ donation is in place, it cannot be inferred from generic consent to donation that the donor would have consented to donation research. Enrolling a donor in prospective donation research without knowing their wishes regarding donation research runs the risk of instrumentalizing the donor or interfering with fulfilling their interests in what becomes of their bodies after their death. To ensure respect for persons, we recommend seeking specific authorization or consent for research. For this recommendation, first-person consent refers to authorization provided by the organ donor while alive and recorded in a registry. Surrogate consent refers to authorization given by a person with legal standing to make medical decisions on behalf of the patient or deceased donor within the relevant jurisdiction. We have chosen to use the term consent even for postmortem interventions, understanding that the level and type of information related to risks and benefits needed for postmortem interventions, research or otherwise, is broadly accepted to be lower than informed consent for interventions on living individuals. 39 Some jurisdictions refer to "authorization" instead of consent regarding postmortem interventions, although that practice is not universal. 7 Despite this recommendation, recent trends suggest that many deceased donor intervention studies are increasingly employing waived consent models, which waive the requirement for informed consent. 42 The most frequent justification for waived consent is that deceased donors are not research participants, and therefore, informed research consent is not required. 42 Although this may be true, it is nonetheless advisable to obtain first-person or surrogate consent for research to maintain public trust and demonstrate respect for the decisional autonomy of the previously living person and their surrogates. Our committee identified waived consent as an area requiring further research and jurisdiction-specific analysis. In the meantime, we recommend careful consideration of the specific circumstances in which consent may be waived. All requests for a waiver of consent should be scrutinized by the appropriate research ethics body to ensure the study meets the conditions for a waiver of consent in accordance with local legal and regulatory frameworks and internationally accepted ethical guidelines. Interventions determined to be of minimal risk by the appropriate research ethics committee may be acceptable when the research may not be practicably carried out without the waiver. 35

Recommendation 4: We recommend that in most cases research consent be discussed at the same time as organ donation and by the same individuals who approach surrogates for consent to organ donation.
These individuals should have the requisite training and information to discuss research projects and the resources to contact research teams for clarification and formal consent if necessary.

Beneficence demands that researchers take steps to minimize burdens on those who may be affected by the conduct of research. In the context of deceased donation, surrogates are often distressed and confronted with 2 difficult discussions in a short period: whether to withdraw life-sustaining measures and whether to donate the patient's organs. 30 Consenting to donation research may add to the decisional burden by adding another difficult decision. Given the stresses surrogates experience, streamlining donation research's consent and authorization process would help to minimize the surrogate's decisional burden. Therefore, we recommend that research consent and authorization be discussed simultaneously with organ donation and by the same individuals who approach surrogates for consent to organ donation. This recommendation does not necessarily imply that the donation staff would complete the formal consent for research discussions, although that could be possible in some instances. Instead, this initial discussion of research could be structured more as an exploration of the family's interest in learning about one or more potential research projects. If the family states a desire to be informed of these possibilities, a separate research coordinator could become involved to discuss specific studies. A division of responsibilities is consistent with studies that have examined the family experience of consenting to research on behalf of the patient. These studies emphasize the importance of training and the required skills of the individuals seeking and taking consent and their practices, including the ability to disclose information about the research to patients/families in lay terms. 30,43,44 In providing this recommendation, the committee recognizes this is ideal for furthering donation research. However, the logistics of training and updating policy are complex and require continuous assessment and quality improvement. Although some ODOs have the capacity to train their staff in discussing complicated research programs, others may opt for simpler explanations and referrals to a research team.

Recommendation 5: We recommend that patient or surrogate consent or refusal to participate in research be recorded in the consent to organ donation documents.

Including consent for research in organ donation authorization documents is consistent with the principles of respect for persons (by ensuring surrogates are given the option to fulfill the expressed or inferred wishes of the donor), beneficence (by reducing the administrative burden on surrogates), and trustworthiness (by ensuring research is not carried out without proper authorization, when required). The withdrawal policy of the consent to interventional research should be clearly described to patients or surrogates. We acknowledge that with this recommendation, the current regulatory requirements for the conduct of clinical research in each jurisdiction need to be considered, including the management of relevant research documents.

Recommendation 6: We recommend jurisdictions consider expanding intent to donate registries to include authorization or consent to research.

Expanding intent to donate registries to include consent to research is consistent with the principles of respect for persons and trustworthiness.
This approach could reduce the decisional burden on surrogate decision-makers in the context of a patient's (often sudden) illness or injury. The withdrawal policy of the consent to interventional research should be described. The option of expanding the intent to donate registry should be approached with caution owing to several salient unknowns common to consent for donation in general. Take, for example, the extent and degree of information disclosure that are desired or required, the extent of the donor's understanding of the research interventions, the effects on donor registration rates, and whether the choice should be binary or detailed. Recommendations 5 and 6 seek to provide high-level considerations and guidance on how consent for interventional research can be approached. The intent is to allow policymakers in variable jurisdictional circumstances to interpret this guidance as appropriate given specific cultural, regulatory, and infrastructural contexts, as we recognize that a blanket consent to research may be suitable in some contexts and unsuitable in others.

Recipient Consent

Recommendation 7: We recommend that the minimum ethical requirements for the protection of both target and nontarget organ recipients include: (1) oversight from the appropriate research ethics body and (2) recipient consent to receive a research organ or a nontarget organ that may have been affected by a research intervention.

Research interventions performed on donors are often performed because of a suspected benefit to a single organ or system, which we define as the target organ. However, interventions may include systemic interventions that could have indirect or unanticipated effects on other organs, which we define as the nontarget organs. Whenever feasible, the safety of systemic experimental interventions for all recipients of organs, be they target or nontarget, ought to be assessed. When a global assessment is feasible and requires follow-up intervention or interaction with researchers, all recipients should be classed as research participants and afforded research participant protections. When a global assessment is not feasible, recipients of nontarget organs should not be considered research participants and would not require research protection; for example, in a study in which outcomes are assessed ex situ before transplant, or in a minimal-risk study of a systemic intervention without data collection from recipients. In these cases, informed (clinical) consent could be sufficient, provided the full history of the organ is disclosed. This recommendation suggests that ethical oversight includes a research ethics committee review of the research study and ongoing monitoring by the responsible data/safety board. Research teams should anticipate and monitor the impact of systemic interventions on both the target and nontarget organs and evaluate those impacts when appropriate and feasible. All recipients should receive adequate information about the intervention and be given the opportunity to discuss/clarify details within the available time constraints. As with any informed consent, this includes discussions of any uncertainty regarding potential risks. In the interest of fairness, and in addition to clinical assessment of the recipient posttransplant, monitoring should include ongoing assessment to ensure studies do not disrupt patterns of organ allocation. The organ should be offered according to the standard allocation criteria when the intervention takes place before organ allocation.
This may increase the waiting list time for patients who do not wish to accept a research organ or do not meet a study's inclusion criteria. Uncertainties regarding the precise implications for the recipient of declining an organ should be openly communicated to the prospective recipient, particularly in terms of time on the waiting list. In rare circumstances, a recipient may be identified following donor research consent yet prior to the planned intervention, raising the question of the recipient's right to veto a study intervention on the donor candidate. It is important to consider how a right of veto may conflict with the interests of different stakeholders, including the donor who consented to participate in the research, other organ recipients participating in the research who may benefit from the intervention, and the public interest in transplantation research. Rapid involvement of a clinical or research ethics committee may be appropriate in these circumstances.

Recommendation 8: We recommend a 2-stage process to ensure that the transplant recipient gives valid consent to accept an intervention research organ, first at the time of waitlisting, and second at the time of organ offer.

Obtaining informed consent when accepting a research organ or participating in interventional research is challenging within the time constraints available for the acceptance of the organ. Nonetheless, respect for persons demands that recipients consent to either the receipt of a research organ, research participation, or both, when applicable. To overcome this challenge, the scheme advocated by the National Academies is instructive: 7

• At the time of waitlisting, an initial discussion and indication of willingness to accept a research organ (which may be stratified according to a patient's risk tolerance, ie, minimal risk, above minimal risk, moderate risk, etc).
• At the time of offer, more detailed discussion and consent to accept a research organ from a particular protocol.
• Periodic review of recipient preferences while on the waiting list.
• Informing waitlisted patients that they can reverse their decisions at any time and how to do so.

In addition, education about organ interventional research should commence early in evaluating patients for transplantation, ideally at the time of transplant waitlisting. Education should incorporate discussion about the aims of interventional research, the possibility of an offer of a research organ, current/past research, categorization of risk, and evaluation of the harms and benefits. Furthermore, information about donor intervention research studies should be made available to facilitate discussions on donation research with organ recipient candidates before an organ is offered. It is important to recognize that accepting risk is already an intrinsic part of the consent process for transplantation. Any additional risk resulting from the donation research could be incorporated into the risk/benefit assessment required to evaluate transplant candidates, as is done routinely when considering high-risk donors. Ongoing communication between the transplant team and researchers is critical to implementing this recommendation. Given the need for transplant teams to determine the suitability of organs for transplant, we suggest that transplant teams be made aware of studies implicating research organs; researchers need to provide sufficient information to the transplant team to explain what this means for patients.
Ongoing communication, especially for complex projects, requires a collaborative approach whereby research team members can be accessed for further information. Given the novelty of the proposed scheme for ensuring that recipient consent to the receipt of an organ subjected to a research intervention is valid and informed, its effectiveness should be evaluated during the early phases of implementation to allow for iterative development and refinement in response to challenges and the particularities of local OTDT structures.

Recommendation 9: We recommend that recipient informed consent to research participation be required for any follow-up intervention, interaction, or data collection, storage, and sharing beyond what is part of routine posttransplantation follow-up.

We recommend researchers obtain informed consent for any intervention, interaction, or data collection for research purposes. The protections must be consistent with the jurisdiction's legal and regulatory research framework covering any other clinical research program. It is also important to note that recipients of an intervention research organ should be allowed to withdraw their consent to any posttransplantation intervention, interaction, and data collection that forms part of the research study.

Recommendation 10: We recommend the creation of a centrally administered donor research oversight committee, a single specialist institutional review board for organ donor intervention research, and a research oversight body to facilitate coordination and ethical oversight.

The logistical, ethical, and practical challenges facing donation process research demand dedicated entities to streamline study design and approval as well as ensure appropriate oversight and communication among geographically dispersed donation and transplantation programs. To this end, the entities outlined by the National Academies may provide useful guidance in developing this tripartite structure to enable donation process research. 7

Centrally Administered Donor Research Oversight Committee

A committee mandated to prioritize, review, implement, and track research protocols; assess and monitor the impact on organ allocation and distribution; develop and disseminate information about organ donor intervention research; and track outcomes. The committee's work should be complementary to that of any ethical committee for organ donation and transplantation in place at the parallel level (eg, the national/regional level).

Single Institutional Review Board for Organ Donor Intervention Research

Role: make decisions regarding consent processes; review and approve protocols; and ensure protections and compliance with regulatory/policy requirements.

Study-Specific Data and Safety Monitoring Boards

Study-specific data and safety monitoring boards (DSMBs) are established ad hoc by the research oversight committee. We suggest that the role of the DSMBs include reviewing incoming data and protecting participant safety by establishing criteria to terminate studies or amend protocols if unsafe. There are multiple examples of DSMBs established and managed by research institutes (National Institutes of Health) or networks (the Canadian Donation and Transplantation Research Network) to support trials that are led by principal investigators in their respective fields.
The potential advantage of the donation research oversight committee establishing a DSMB is that the donation research oversight committee will ensure that the membership of the DSMB reflects and retains the multidisciplinary expertise necessary to interpret the data from donation clinical trials and to fully evaluate participant safety.

DATA MANAGEMENT: COLLECTION, STORAGE, AND SHARING

Data management, including collection, storage, and sharing, is an essential best practice for improving an OTDT system. The FAIR guidelines 45 establish several practical and consensus principles/goals for data sharing. The RIC acknowledges that although these principles are widely endorsed, their implementation within donation and transplantation research is absent on a systematic level. Given the unique ethical, legal, regulatory, and logistical challenges affecting research in these fields, FAIR principles would support and strengthen outputs. FAIR guidelines state that data must be Findable, Accessible, Interoperable, and Reusable. Additionally, the 7Rs 46 also provide guiding principles for data management: data should be reusable, repurposable, repeatable, reproducible, replayable, referenceable, and respectful. In the context of donation and transplantation research, we interpret these goals as follows.

Findable relates to the permanence of data, which should be persistently identifiable and unique.

Recommendation 11: We recommend that data, when made available, include a unique and permanent digital identifier, such as a digital object identifier (DOI) or accession code, that ensures it is easily located.

This suggests that datasets be associated with their metadata, which includes standardized terms relating to the field of transplantation/donation, including information about consent that facilitates searches.

Accessible relates to the ability to obtain data through reasonable means or authorization.

Recommendation 12: We recommend that datasets be accessible and freely available at the point of publication while respecting confidentiality and intellectual property rights.

We recommend that data be made available in relevant repositories (see below). For clinical donation/transplantation datasets, we recommend that these be made available in specific repositories that restrict access and preserve participant anonymity according to the study type.

Interoperability is crucial for combining and linking data, as well as understanding how datasets relate to each other. Without interoperability, the potential usefulness of data is reduced. Ensuring data are appropriately annotated is, in turn, crucial for the ability to aggregate datasets for systematic analyses.

Recommendation 13: We recommend that datasets be made available with metadata that utilize keywords and vocabulary that are standard across transplantation research and/or well-defined in the metadata.

Reusability allows for datasets to be processed with no limitations.

Recommendation 14: We recommend that transplant research data be machine readable and in recognized formats.

Reusability must be possible without input from the researchers who generated the data and must not be linked to a requirement for specialist equipment for readability. We also recommend that data be in a format usable for analysis and aggregation. Data repositories are a crucial element in promoting the above goals. These provide storage space for datasets, and their use is often a requirement for publication (as per journal requirements).
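To illustrate what Recommendations 11-14 could look like in practice, the snippet below sketches a minimal machine-readable metadata record for a hypothetical donation research dataset. The field names, DOI, and repository are invented placeholders for illustration, not a prescribed standard.

```python
import json

# A minimal, hypothetical FAIR-style metadata record for a donation
# research dataset. All identifiers and values below are placeholders.
metadata = {
    "identifier": "doi:10.0000/example.donor-study-0001",  # Findable: permanent DOI
    "title": "Example deceased donor intervention dataset",
    "keywords": [  # standardized vocabulary aids search and aggregation
        "organ donation", "donor intervention research", "transplantation",
    ],
    "consent": {  # consent information recorded in the metadata itself
        "first_person_or_surrogate_consent": True,
        "deidentified": True,
    },
    "access": "restricted",   # Accessible: clinical data gated by request
    "format": "text/csv",     # Reusable: recognized, machine-readable format
    "repository": "example-clinical-repository.org",  # hypothetical repository
    "license": "CC-BY-4.0",
}

# Serializing to JSON keeps the record interoperable across tools.
print(json.dumps(metadata, indent=2))
```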
Recommendation 15: We recommend that all journals in the field of transplantation and donation ensure that the use of data repositories is a requirement for publication and that accession codes are made available at the point of submission.

Moreover, to comply with the above principles, we recommend that data be made freely and immediately accessible at the point of deposition, be made available in perpetuity with a permanent digital object identifier, and not be withdrawn. The choice of data repository depends on the study and data type. A number of resources help guide the choice of a certified repository (eg, re3data.org). For example, the Gene Expression Omnibus (GEO, NCBI) provides a resource for functional genomics data storage, whereas FlowRepository (flowrepository.org) includes storage for cytometry and immunology datasets.

Recommendation 16: We recommend establishing clinical data repositories specific to donors and transplant recipients.

Researchers should seek to comply with institutional, funder, and journal requirements when identifying a data repository. Repositories must not charge those accessing the data.

DISCUSSION

This report integrates recommendations stakeholders can use when developing OTDT systems that respond to the global need for transplantable organs and tissues. The recommendations made in this report should be considered in the context of the total outputs of the Forum. This includes the report on baseline ethical principles, which highlights the need to evaluate the efficacy of a proposed change in practice and policy as part of framework development. Research is the only way to create such knowledge. The above recommendations adhere to the goals identified by the US National Academies of Sciences, Engineering, and Medicine report for a framework enabling donation research, including the key idea that maintaining public and stakeholder trust in a jurisdiction's OTDT system is essential to enabling success. OTDT systems must strive to continually improve their transparency and accountability structures, including those in their deceased donor process research.

• Respecting an individual's choice, including their preferences regarding postmortem research, remains a pillar for OTDT systems. Jurisdictions must establish guidelines that improve coordination of a donor's preference and allow for timely information sharing to honor a patient's expressed preference.
• Clarifying the legal and regulatory framework for their jurisdiction's deceased donor process research. This framework should adhere to and respect a jurisdiction's social, political, cultural, and religious norms.
• Obtaining informed consent must be a pillar of donation process research. Clinicians who invite transplant recipients to participate in research must fully inform prospective participants of the risks and harms of participation.
• Establishing a framework for centralized management and oversight of organ donation research to ensure a standardized approach at the national level.
• Confirming all disclosure policies are consistent with the local legal and regulatory framework.

This report's recommendations also align with key principles identified by the US National Academies of Sciences, Engineering, and Medicine as essential ethical underpinnings for any framework enabling donation research, including autonomy, beneficence, fairness, and trustworthiness. 7
This report's recommendations were created to ensure deceased donation research creates methodologically sound knowledge through ethically conducted research. We build on foundational work to provide expert guidance to decision-makers and stakeholders, including patients, families, and donors who ultimately depend on a high-functioning OTDT system. Although recommendations in this report, such as those on patient and public engagement in research and on data management, collection, and storage, might also apply to multiple, if not all, areas of healthcare research, deceased donation process research presents unique ethical and logistical challenges. Therefore, the processes involved with this novel field of research must be guided by the principle of safeguarding public trust in the OTDT system. Although OTDT systems will experience unique local challenges, several challenges are experienced by most OTDT donation research systems. These include the remarkably high number of stakeholders involved, the need to appropriately identify and attain consent, donation research's impact on healthcare resources and allocation, and its impact on ethics committees. These considerations are discussed and integrated into the formulation of the recommendations. Significantly, we recognize that local resource constraints, cultural and religious considerations, regulatory frameworks, or political realities may exclude some jurisdictions from implementing all recommendations. In addition, jurisdictions will have varying established responsibilities for clinical research, and the role of the OTDT system will be to identify specific concerns in donation and transplantation research. Depending on the context, some of these recommendations can be identified as inapplicable, ideals for the future, or current imperatives. We hope OTDT stakeholders can use these recommendations to enhance their deceased donor process research framework while maintaining public trust as stakeholders collaborate to respond to their jurisdictional needs related to organ and tissue shortages.
The effect of co-treatments of chemotherapeutic drugs and curcumin on cytotoxicity and FLT3 protein expression in leukemic stem cells

This study aims to enhance the efficacy and reduce the toxicity of the combination treatment of a drug and curcumin (Cur) on leukemic stem cell and leukemic cell lines, including KG-1a and KG-1 (FLT3+ LSCs), EoL-1 (FLT3+ LCs), and U937 (FLT3− LCs). The cytotoxicity of co-treatments of Dox or Ida at concentrations of the IC10-IC80 values and each concentration of Cur at the IC20, IC30, IC40, and IC50 values (conditions 1, 2, 3, and 4) was determined by MTT assays. Dox-Cur and Ida-Cur increased cytotoxicity in leukemic cells. Dox-Cur co-treatment showed additive effects in several conditions. The effect of this co-treatment on FLT3 expression in KG-1a, KG-1, and EoL-1 cells was examined by Western blotting. Dox-Cur decreased FLT3 protein levels and total cell numbers in all the cell lines. By contrast, the FLT3 protein levels and total cell number after Cur treatment did not show significant differences as a result of the co-treatments. Dox-Cur decreased FLT3 protein expression in a dose-dependent manner. In summary, Cur was the effective compound in inhibiting FLT3 protein expression. Co-treatment with Dox-Cur could enhance the cytotoxicity of Dox by inhibiting the proliferation of AML leukemic stem cells.

Introduction

Leukemia is among the top 10 cancers diagnosed globally. It is a group of cancers of early blood-forming cells, which are characterized by the uncontrolled production and accumulation of blast or immature abnormal blood cells in the peripheral blood and bone marrow. Leukemia can be divided into four major types according to the stage and cell of origin: acute myeloid leukemia (AML), acute lymphoid leukemia (ALL), chronic myeloid leukemia (CML), and chronic lymphocytic leukemia (CLL). AML is the most common type of acute leukemia in adults, with the highest incidence and death rate in both sexes. It can be distinguished by clonal expansion of abnormal myeloid blasts in bone marrow, peripheral blood, or other tissues. According to recent data, 15-25% of AML patients fail to achieve complete remission (CR) due to chemotherapy resistance and may show relapse, with an overall 5-year survival rate of approximately 40% 1,2 . Moreover, between 10 and 40% of newly diagnosed AML patients do not achieve CR with intensive induction therapy, and such patients are categorized as primary refractory or resistant 3 . Hence, AML is defined as an aggressive malignant myeloid disorder. One theory of resistance and relapse in AML patients involves the presence of subpopulations of leukemic stem cells (LSCs) 4 . LSCs have been described as the human AML-initiating cells with a self-renewal capacity and the ability to give rise to heterogeneous lineages of cancer cells 2,5 . They can be identified by the cell surface phenotype of the CD34+ hematopoietic stem cell and CD38− subpopulation 6 . The traditional chemotherapeutic drugs in use are incapable of defeating the LSC population for many reasons. First, these drugs have been designed to eliminate fast-dividing cells by inhibiting cell cycle progression 7 ; thus, they cannot affect LSCs, which mostly remain in stage G0 of the cell cycle 8 . Second, the expression of P-glycoprotein (MDR1), a multidrug resistance efflux pump protein, in LSCs potentially removes cytotoxic agents from cancer cells 9 .
In addition, LSCs may sustain some mutations and epigenetic changes, resulting in conventional drug toxicity reduction and resistance 10,11 . Thus, LSCs are considered to play a fundamental role in AML pathogenesis and have become the main target of AML therapies. Although drug resistance in AML patients usually occurs, traditional chemotherapy remains a popular method for leukemia treatment due to its high ability to destroy cancer cells that can spread throughout the whole body. Anthracycline antibiotics, such as doxorubicin (Dox (14-hydroxydaunorubicin)) and idarubicin (Ida (4-demethoxydaunorubicin)), are generally used as standard chemotherapeutic agents for AML treatment 12 . These drugs function by inhibiting topoisomerase II activity in DNA transcription and also trigger apoptosis or autophagy in cells 13 . The combination of anthracyclines and cytarabine in the initial treatment is capable of inducing complete remission (CR) in approximately 45-70% of patients 14 ; however, more than 40% of CR cases eventually experience relapse within 2 years 15 . Previous studies on AML leukemic stem cells demonstrated that anthracycline is less effective in killing LSCs (CD34+/CD38− cells) than committed leukemic cells (CD34+/CD38+ cells) 16 , and the co-treatment of cytarabine and anthracyclines is less effective against primitive AML cells than against leukemia blasts 17,18 . Furthermore, with high-dose administration, anthracyclines are able to cause the development of side effects in patients in relation to their chemical structure, including nausea, vomiting, hair loss, and myelosuppression 19 . Several reports expressed concern about the presence of cardiac, renal, and liver toxicity in patients treated with Dox 20,21 . Thus, combination therapy with natural substances with chemosensitizing and chemoprotective activities may be a promising strategy to overcome LSCs and reduce the side effects of anthracyclines. Curcumin (Cur) is a natural polyphenol constituent of turmeric (Curcuma longa Linn.). It exhibits a wide range of pharmacological activities, such as antioxidant, anti-cancer, anti-inflammatory, and antimicrobial effects [22][23][24] . Previous studies reported that Cur exhibited an excellent cytotoxic effect; induced cell death in several types of leukemic cell lines 25,26 ; and showed inhibitory effects on WT1 and FLT3 protein expression, which are associated with cell proliferation 26,27 . Moreover, it inhibited the activity of P-glycoprotein (MDR1) 28 and exhibited cancer chemopreventive properties, especially in myocardial protection 29 by inhibiting ROS generation 30 . Consequently, it may be possible to manipulate the combination of Cur and anthracyclines to reduce anthracycline toxicity and to overcome drug efflux via Pgp-mediated MDR in AML leukemic cells and LSCs. Although Dox and Cur exhibit synergistic cytotoxic effects in cancer cell models, the combination of free Dox and free Cur shows only a slight synergistic effect in animal models 31 . The aims of this study were to study the cytotoxicity of co-treatment with anthracycline drugs and curcumin in FLT3-overexpressing leukemic stem cells (KG-1a and KG-1), FLT3-overexpressing leukemic cells (EoL-1), and non-FLT3-expressing leukemic cells (U937). Moreover, the effects of the co-treatments on FLT3 protein expression and total cell numbers were determined.
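As background for the IC50 values reported in the following section, the sketch below shows one common way to estimate an IC50 from MTT dose-response data by fitting a four-parameter logistic curve. The absorbance values are hypothetical, and this is not necessarily the exact analysis pipeline used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Viability (%) from MTT absorbance, relative to the untreated control.
def viability(a_treated: np.ndarray, a_control: float) -> np.ndarray:
    return 100.0 * a_treated / a_control

# Four-parameter logistic (Hill) dose-response model; the ic50 parameter
# is the dose producing the half-maximal response.
def four_pl(dose, bottom, top, ic50, hill):
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical doses (µg/mL) and MTT absorbances, for illustration only.
doses = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
absorbance = np.array([0.92, 0.85, 0.70, 0.48, 0.30, 0.18])
control_absorbance = 1.00

v = viability(absorbance, control_absorbance)
params, _ = curve_fit(four_pl, doses, v, p0=[0.0, 100.0, 0.3, 1.0], maxfev=10000)
print(f"Estimated IC50 ≈ {params[2]:.2f} µg/mL")
```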
The IC50 values of Dox in the leukemic stem cell lines (KG-1a and KG-1) were found to be significantly higher than those in the leukemic cell lines, EoL-1 and U937. However, within the group of leukemic stem cell lines, KG-1 cells were substantially more responsive to Dox and Ida than KG-1a cells were, indicating that a high proportion of LSCs reduced sensitivity to the chemotherapeutic treatment. Furthermore, the IC50 values of Cur in KG-1a cells were considerably higher than those in the other cells. These findings demonstrate the drug resistance of LSCs relative to LCs and suggest that Cur might be used to improve the potency of AML treatment.

Effects of combination treatments of various concentrations of Cur and a fixed concentration of Dox on cell number and viability in leukemic stem cells and leukemic cells.

According to the results presented in the previous section, Cur and Dox–Cur treatments could inhibit AML LSC and LC proliferation more effectively than Dox treatment alone. Three non-toxic concentrations within the range of Cur's IC20 value and a fixed concentration of Dox from Dox–Cur condition 1 were tested with KG-1a, KG-1, and EoL-1 cells for 48 h to confirm the impact of Cur on the cell proliferation inhibition of Dox. The results demonstrate that the Dox–Cur co-treatments significantly decreased cell numbers in these cell lines in a dose-dependent manner when compared to the single Dox treatment and the control (Fig. 5). The cell number of KG-1a cells in the control group was 3.44 × 10^5 cells/mL, and decreased to 2.96 × 10^5, 2.21 × 10^5, 1.91 × 10^5, and 1.46 × 10^5 cells/mL in response to Dox, Dox + Cur at 4 µg/mL, Dox + Cur at 4.5 µg/mL, and Dox + Cur at 5 µg/mL, respectively (Fig. 5A). In addition, the cell number of KG-1 cells decreased from 4.18 × 10^5 cells/mL in the control group to 3.48 × 10^5, 2.93 × 10^5, 2.47 × 10^5, and 2.22 × 10^5 cells/mL in response to Dox, Dox + Cur at 3 µg/mL, Dox + Cur at 3.5 µg/mL, and Dox + Cur at 4 µg/mL, respectively (Fig. 5C).

Discussion

In this experiment, doxorubicin (Dox) and idarubicin (Ida), standard chemotherapy for AML patients, were chosen as the chemotherapeutic substance models to be studied. They can destroy leukemic cells by becoming incorporated into the DNA single strand, inhibiting topoisomerase II activity in DNA transcription, and triggering apoptosis or autophagy [13,34,35]. The cytotoxic activity of these anthracyclines was determined in each leukemic cell line by MTT assays. Both drugs showed the greatest cytotoxicity for EoL-1 cells, followed by U937, KG-1, and KG-1a cells. The half-maximal inhibitory concentration (IC50) values of idarubicin on KG-1a and KG-1 cells were 19.82 ± 1.80 and 5.45 ± 0.89 ng/mL, respectively. In contrast, doxorubicin showed lower cytotoxicity than idarubicin, with IC50 values of 0.65 ± 0.13 and 0.21 ± 0.02 µg/mL, respectively. This may result from the absence of the methoxyl group at position 4 of idarubicin's structure, which increases its lipophilicity and rate of cellular uptake, leading to greater toxicity than that of daunorubicin or doxorubicin [36]. In addition, the cytotoxicity of curcumin (Cur), a natural substance with chemosensitizing and chemoprotective activities [23], was also examined in the four leukemic cell lines by MTT assay. The results showed that Cur demonstrated the highest cytotoxic effect on EoL-1 cells, followed by U937, KG-1, and KG-1a cells.
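IC50 values like those quoted above are typically obtained by fitting a sigmoidal dose-response curve to the MTT viability readings. The sketch below shows one standard way to do this in Python with a four-parameter Hill curve; the dose and viability numbers are invented for illustration, and the paper's own curve-fitting procedure is not specified.

```python
# Sketch: estimate an IC50 from MTT viability data by fitting a
# four-parameter Hill (log-logistic) curve. All data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, bottom, top, ic50, slope):
    """Viability (%) as a function of dose."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

# Hypothetical doxorubicin doses (ug/mL) and mean viabilities (%).
dose = np.array([0.01, 0.05, 0.1, 0.25, 0.5, 1.0, 2.0, 5.0])
viab = np.array([98.0, 92.0, 85.0, 70.0, 55.0, 38.0, 22.0, 10.0])

params, _ = curve_fit(hill, dose, viab, p0=[0.0, 100.0, 0.5, 1.0])
bottom, top, ic50, slope = params
print(f"fitted IC50 ~ {ic50:.2f} ug/mL (Hill slope {slope:.2f})")
```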
Thus, Cur was selected as a supplementary substance for enhancing the efficiency and decreasing the toxicity of anthracycline drugs in this study.

According to the cell viability curves, the co-treatments of Dox–Cur and Ida–Cur tended to increase the cytotoxicity for KG-1a, KG-1, EoL-1, and U937 cells in a dose-dependent manner compared to single drug treatment. Moreover, Cur also enhanced the cytotoxic efficacy of both chemotherapeutic drugs in a dose-dependent manner, as shown by the lower IC50 values of the anthracyclines used in co-treatment in each cell line. Dox and Ida are often ineffective due to an increase in LSCs, drug resistance, and relapse in AML patients. In this study, Cur, a natural supplementary substance, was found to improve the cytotoxicity of Dox and Ida in all the cell lines due to its anti-leukemic (apoptotic induction) [37] and chemosensitizing (decreasing MDR-1 gene expression) [28] activities. For these reasons, it could decrease the toxicity of both chemotherapies, resulting in lower IC50 values for the drugs in co-treatments when compared with single drug treatments.

It is notable that the effective doses of the co-treatments used to treat KG-1a cells were higher than those for KG-1, EoL-1, and U937 cells. KG-1a and KG-1 cells are leukemic stem cell lines with a high percentage of leukemic stem cells (∼95% and ∼55%, respectively). These cells are well known for their chemotherapy resistance, which includes primarily resting in stage G0 of the cell cycle and high expression of the drug efflux pump. Since the EoL-1 and U937 cells lack a stem cell population, they were more vulnerable to the co-treatments.

The combination treatment of Dox–Cur showed synergistic and additive cytotoxic effects on both AML leukemic stem cell lines (KG-1a and KG-1 cells) and AML leukemic cell lines (EoL-1 and U937 cells). Despite the fact that only Dox–Cur condition 3 showed synergism on KG-1a and EoL-1 cells, Cur could still be used as a supplement to lower chemotherapeutic agent doses. The combination treatment also reduced the IC50 value of Dox in each cell line, which could make it a useful formulation for decreasing the cytotoxicity of Dox toward normal cells. However, the poor solubility and short biological half-life of Cur, as well as the non-specific activity of Dox, result in low absorption and cytotoxicity of these drugs in tumor cells [13,24].

FLT3 is a key driver of AML, and its mutations are associated with a high risk of relapse in patients. Previous studies demonstrated that Cur has an inhibitory effect on FLT3 protein expression in leukemic cells [27]. Thus, the combination of Dox and Cur for AML treatment may lead to a reduction in FLT3 protein expression, which is involved in the proliferation process of leukemic cells. In this study, non-toxic doses at the IC20 value of Cur were used in the co-treatments. In the future, the effects of Dox–Cur on cell cycle progression and apoptosis induction will be assessed to validate the mechanism of the co-treatment's effect on cell proliferation inhibition and cell death.

Conclusion

Overall, anthracyclines (Dox and Ida) and Cur, a natural phenolic compound with anti-tumor activity, were shown to be effective AML chemotherapeutic agents. Our results show that the combination of Dox and Cur had a synergistic effect and could improve the anti-tumor activity of Dox in AML cells, particularly leukemic stem cells, by inhibiting cell proliferation through FLT3 protein suppression.
This finding presents an alternative choice that may be useful in the development of a promising regimen for the treatment of AML relapse in the future.

U937 (monoblastic leukemic cell line) was purchased from ATCC®. The cell lines were cultured in RPMI-1640 medium containing 10% fetal calf serum, 1 mM L-glutamine, 100 units/mL penicillin, and 100 µg/mL streptomycin. All the leukemic cell lines were cultured at 37°C in a humidified incubator with 5% CO2.

Methods

Cytotoxicity of single doxorubicin, idarubicin, and curcumin (curcuminoid mixture) on leukemic stem cell and leukemic cell viability by MTT assay. KG-1a and KG-1 cell lines were adjusted to 1.5 × 10^4 cells, while EoL-1 and U937 cells were adjusted to 3.0 × 10^4 and 1.0 × 10^4 cells in 100 µL of complete medium, and then seeded into flat-bottom 96-well plates and incubated at 37°C under a 5% CO2 atmosphere for 24 h.

Western Blotting. KG-1a, KG-1, and EoL-1 cells were prepared and treated with Dox, Cur, and the co-treatment. The cells were harvested after 48 h of incubation, and whole-cell proteins were extracted using RIPA buffer. The protein concentration was measured with the Folin-Lowry method. The protein lysates were separated through 12% SDS-PAGE and then transferred to PVDF membranes. The membranes were blocked in 5% skim milk and probed with rabbit polyclonal anti-FLT3 and rabbit polyclonal anti-GAPDH antibodies at a dilution of 1:1,000. The reaction was followed by HRP-conjugated goat anti-rabbit IgG at a 1:15,000 dilution. The proteins were visualized using Luminata™ Forte Western HRP substrate. Finally, the protein band signal was quantified using a scanning densitometer (Bio-Rad, CA, USA) or a Fluorchem E Western blot and gel imager (ProteinSimple, CA, USA).

Statistical Analysis. The average of triplicate experiments and the standard deviation (SD) were used for quantification. The levels of target protein expression were compared to those of the vehicle control in each experiment. The results are shown as mean ± SD. The differences between the means of each sample were analyzed by one-way analysis of variance (one-way ANOVA). Statistical significance was considered at p < 0.05, p < 0.01, and p < 0.001.

Figure 1: Cytotoxicity of (A) doxorubicin, (B) idarubicin, and (C) curcumin on KG-1a, KG-1, EoL-1, and U937 cells.
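To make the statistical analysis step concrete, here is a minimal sketch of normalizing FLT3 band signal to GAPDH, expressing it relative to the vehicle control, and testing group differences by one-way ANOVA. The band intensities are invented; the actual quantification was done with the densitometer software named above.

```python
# Densitometry normalization and one-way ANOVA, as described in the
# Statistical Analysis section. All readings below are invented.
import numpy as np
from scipy.stats import f_oneway

# Band intensities (arbitrary units) from triplicate blots.
flt3 = {"control": [1.00, 1.05, 0.97],
        "Dox":     [0.82, 0.85, 0.80],
        "Dox+Cur": [0.41, 0.45, 0.38]}
gapdh = {"control": [1.00, 1.02, 0.99],
         "Dox":     [1.01, 0.98, 1.00],
         "Dox+Cur": [0.99, 1.03, 1.00]}

# FLT3/GAPDH ratio per replicate, then scale so the control averages 1.0.
ratios = {g: np.array(flt3[g]) / np.array(gapdh[g]) for g in flt3}
control_mean = ratios["control"].mean()
relative = {g: r / control_mean for g, r in ratios.items()}

f_stat, p_value = f_oneway(*relative.values())
for g, r in relative.items():
    print(f"{g:8s} relative FLT3 = {r.mean():.2f} +/- {r.std(ddof=1):.2f}")
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```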
Uniform bounds for lattice point counting and partial sums of zeta functions

We prove uniform versions of two classical results in analytic number theory. The first is an asymptotic for the number of points of a complete lattice $\Lambda \subseteq \mathbb{R}^d$ inside the $d$-sphere of radius $R$. In contrast to previous works, we obtain error terms with implied constants depending only on $d$. Secondly, let $\phi(s) = \sum_n a(n) n^{-s}$ be a 'well behaved' zeta function. A classical method of Landau yields asymptotics for the partial sums $\sum_{n<X} a(n)$, with power saving error terms. Following an exposition due to Chandrasekharan and Narasimhan, we obtain a version where the implied constants in the error term will depend only on the 'shape of the functional equation', implying uniform results for families of zeta functions with the same functional equation.

We define $r_{\mathrm{bas}}(\Lambda)$ to be the infimum of all $r \in \mathbb{R}^+$ such that the open ball $B(r)$ of radius $r$ and center $0$ contains a $\mathbb{Z}$-basis for $\Lambda$.

Many results like Theorem 1 exist in the literature, and we refer to the comprehensive survey article of Ivić, Krätzel, Kühleitner, and Nowak [IKKN06] for an overview and numerous references. We first note that such results may be proved using the geometry of numbers. One obtains an error term of $O_{d,\Lambda}(R^{d-1})$: see Davenport [Dav51] for the basic principle and Widmer [Wid10, Theorem 5.4] or Ange [Ang14, Proposition 1.5] for versions with a completely explicit error term. We are interested in the better error terms that come from more analytic techniques. In this context, we could not find any general result where the dependence of the error term on $\Lambda$ is specified. Such a result (with a different shape, and a slightly better $R$-dependence of $R^{d-2}$) was proved by Bentkus and Götze [BG97], but with the dimension $d$ assumed to be at least 9.

The functions $\zeta(s, \Lambda) = \sum_{0 \neq v \in \Lambda} |v|^{-2s}$ are Epstein zeta functions, enjoying analytic continuation and a functional equation of a uniform shape. Writing $\zeta(s, \Lambda) =: \sum_n a(n) \lambda_n^{-s}$, our question is therefore reduced to obtaining error terms in estimates for the partial sums $\sum_{\lambda_n < X} a(n)$. This approach was followed in classical work of Landau [Lan12, Lan15], who obtained (1) with the implied constant depending on $\Lambda$ in an unspecified manner. Landau, and following him Chandrasekharan and Narasimhan [CN62], proceeded by developing general techniques to bound the partial sums of Dirichlet series with analytic continuation and a functional equation. Our second main theorem (of which the first will be a consequence) is a uniform version of this result, valid for a wide class of zeta functions. We postpone a precise statement to Section 2; the following is a special case.

Theorem 2. Let $\phi(s) = \sum_n a(n) \lambda_n^{-s}$ be a zeta function with nonnegative coefficients, absolutely convergent for $\mathrm{Re}(s) > 1$, enjoying an analytic continuation to $\mathbb{C}$ which is holomorphic away from a simple pole at $s = 1$, and with a 'well behaved' functional equation of degree $d$ relating $\phi(s)$ to $\widetilde{\phi}(1 - s)$ for a 'dual zeta function' $\widetilde{\phi}(s) = \sum_n b(n) \mu_n^{-s}$. Then, we have

provided that the error term is bounded by the main term, and where $\delta_1 = \operatorname{Res}_{s=1} \phi(s)$.

The implied constant depends on the functional equation, but does not depend further on $\phi(s)$ or the $a(n)$.

Here we think of $\delta_1$ as a 'density at $s = 1$', and of $\widetilde{\delta}_1$ as the 'density of the dual', even if for technical reasons we cannot formulate the latter in terms of a residue, even if the $b(n)$ are nonnegative. We assume above (as part of being 'well behaved') that $\widetilde{\delta}_1$ is finite.
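Before deducing Theorem 1 from Theorem 2, it may help to see the quantities of Theorem 1 numerically. The sketch below is an illustration only (the basis matrix is an arbitrary example): it brute-force counts $N(\Lambda, R)$ and compares it with the expected main term $\operatorname{vol}(B_d)\, R^d / \det \Lambda$.

```python
# Brute-force count of lattice points of Lambda = L * Z^d in the ball of
# radius R, compared with the main term vol(B_d) * R^d / det(Lambda).
import itertools
import math
import numpy as np

L = np.array([[1.0, 0.3],
              [0.0, 1.2]])   # columns of L form a basis of Lambda
d, R = 2, 25.0

# |Lx| >= sigma_min * |x|, so |Lx| <= R forces each |x_i| <= R / sigma_min.
sigma_min = np.linalg.svd(L, compute_uv=False).min()
bound = int(math.ceil(R / sigma_min))

count = sum(1 for x in itertools.product(range(-bound, bound + 1), repeat=d)
            if np.linalg.norm(L @ np.array(x)) <= R)

main_term = math.pi ** (d / 2) / math.gamma(d / 2 + 1) * R ** d \
            / abs(np.linalg.det(L))
print(count, main_term)  # the two agree up to the error term of Theorem 1
```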
We can now describe how to recognize Theorem 1 as a consequence of Theorem 2. In terms of the Epstein zeta function $\zeta(s, \Lambda)$, we recognize that $N(\Lambda, R) = \sum_{\lambda_n \le R^2} a(n)$. Applying Theorem 2 and renormalizing to get $N(\Lambda, R)$ gives the statement of Theorem 1. We carry out this investigation in more detail in Section 5.

We refer to Section 2 for the precise conditions required of the functional equation in Theorem 2; the definition of 'well behaved' includes (for example) all of the $L$-functions described in [IK04, Chapter 5.1]. Following [CN62] we stipulate a functional equation (4) without any factors of $\pi^{-s/2}$ or involving the 'conductor'. These factors should instead be incorporated into the definition of $\phi(s)$, so that $\mu_n$ will not in general be supported on the integers. This choice of normalization should be kept in mind when bounding $\widetilde{\delta}_1$. (See Section 4 for a typical example.)

Results of a similar flavor were proved by Friedlander and Iwaniec [FI05], by an alternative classical method. ('Truncating the contour' instead of 'finite differencing'.) In addition, they explain how their results may be further improved when one can obtain cancellation in certain exponential sums. (It should be possible, at least in principle, to improve the results of this paper by incorporating asymptotic estimates for $J$-Bessel functions in place of upper bounds.) Their method assumes more of the zeta function; in particular, they assume that its coefficients $a(n)$ are supported on the positive integers and satisfy the bound $a(n) \ll n^{\epsilon}$. We are especially interested in examples, such as Epstein zeta functions, where these hypotheses fail. Some preliminary work suggests that their method can possibly be made to work without such hypotheses, but that the proofs would not be immediate.

The proof of Theorem 2 consists largely of a careful reading of the analogous proof in [CN62]. Nevertheless, for the convenience of the reader we present a complete proof (closely following [CN62, Theorem 4.1]). (Our result also eliminates a factor of $X^{\epsilon}$ from [CN62, Theorem 4.1]; it was mentioned as [CN62, Remark 5.5], and also seen in Landau's earlier work, that this was possible.)

Another application of 'uniform Landau' is the following estimate for the number of ideals of bounded norm in a number field:

Theorem 3. Let $K$ be a number field of degree $d \ge 1$. Then, the number of integral ideals $\mathfrak{a}$ with $N(\mathfrak{a}) < X$ satisfies the estimate

if the error term is bounded by the main term, and where the implied constant depends on $d$ only.

We prove this theorem for $d \ge 2$ as an application of Theorem 4, our most general version of our main theorem, and we remark that for $d = 1$ the statement is trivial. This is very nearly a direct application of Theorem 2, except that we estimate $\sum_{\mu_n < Z} |b(n)| \ll Z (\log Z)^{d-1}$, which amounts to formally taking $\widetilde{\delta}_1 = O\big((\log Z)^{d-1}\big)$. The factor of $(\log X)^{d-1}$ in (3) subsumes both this and a related logarithmic factor in $\delta_1$.

We refer to Ange [Ang14, Corollaire 1.3] and Debaene [Deb16, Corollary 2] for completely explicit bounds, but with error terms growing more rapidly with $X$. Moreover, [Lan12, (66)] and [CN62, (8.20)] obtain bounds of essentially the same strength, but with the implied constant depending on $K$. Following the latter reference, we could also obtain an analogous result with the additional condition that $\mathfrak{a}$ represent a fixed element of the ideal class group of $K$.

There is a further example where Theorem 2 is useful: applied to the Sato-Shintani zeta functions [SS74] associated to a prehomogeneous vector space.
This appeared in the work of the second and third authors [TT13] on counting cubic fields. The zeta functions in question count cubic rings, and one can also define zeta functions counting those rings which are 'nonmaximal at $q$'. A version of Theorem 2 (appearing implicitly in [TT13]), in combination with a sieve, led to good error terms in the counting function for cubic fields. Moreover, these error terms can be further improved; for this, see [BTT], which will apply essentially the version of Theorem 2 stated here, except accounting for secondary poles of the zeta function at $s = \frac{5}{6}$.

Theorem 1 also has potential applications itself. The question came to the third author's attention in the course of his work with Kass [KT], counting rational points of bounded height in the Hilbert scheme of two points in the plane. Some algebraic geometry reduces this to a lattice point counting problem, for which Theorem 1 applies. It turns out that a weaker version of Theorem 1 is equally effective in [KT], but similar lattice point counting problems seem likely to arise in related questions counting points on other vector bundles, and Theorem 1 may prove useful in that context (among others).

Organization of the paper. In Section 2 we state and then prove our most general 'uniform Landau' result (Theorem 4). We follow Chandrasekharan and Narasimhan [CN62] quite closely, albeit with a somewhat different exposition, and while removing factors of $X^{\epsilon}$ in the error terms. We prove Theorem 2 in Section 3, as a representative (but still fairly general) special case of Theorem 4. We then prove Theorem 3 in Section 4; once the relevant facts about Dedekind zeta functions are recalled, this is also easily deduced from Theorem 4. Finally, we prove Theorem 1 in Section 5. We must establish a couple of lemmas concerning the geometry of lattices and their duals, and then the results are again immediate from Theorem 4.

A uniform version of Landau's method

We now prove a uniform version of Landau's method, which provides estimates for sums of coefficients of a Dirichlet series with functional equation. We will closely follow the version given in [CN62, Theorem 4.1], but indicating the dependence of our estimates on the Dirichlet series itself. In order to give a complete statement of the theorem, we must set up some notation.

Notation and Statement of Theorem

• (The Dirichlet series) Let $\phi(s) = \sum_n a(n) \lambda_n^{-s}$ and $\psi(s) = \sum_n b(n) \mu_n^{-s}$ denote two dual Dirichlet series, where $\{\lambda_n\}_{n \in \mathbb{N}}$ and $\{\mu_n\}_{n \in \mathbb{N}}$ are two sequences of strictly increasing positive real numbers tending to $\infty$. We assume that $\phi(s)$ and $\psi(s)$ each converge absolutely in a certain fixed half-plane.

• (The functional equation and meromorphic continuation) We assume $\phi$ and $\psi$ satisfy a functional equation of the form $\Delta(s)\,\phi(s) = \Delta(\delta - s)\,\psi(\delta - s)$, where $\delta > 0$ is some real parameter, and $\Delta(s) = \prod_{\nu=1}^{N} \Gamma(\alpha_\nu s + \beta_\nu)$ is a product of $N \ge 1$ Gamma factors where the $\alpha_\nu$ are positive. We require $A := \sum_{\nu=1}^{N} \alpha_\nu \ge 1$, and note that $2A$ is frequently called the "degree of the zeta function." We also assume that this functional equation provides meromorphic continuation in the following sense: there exists a meromorphic function $\chi$ such that $\lim_{|t| \to \infty} \chi(\sigma + it) = 0$ uniformly in every interval $-\infty < \sigma_1 \le \sigma \le \sigma_2 < \infty$, satisfying $\chi(s) = \phi(s)$ for $\mathrm{Re}(s) > c_1$, where $c_1$ and $c_2$ are some constants. Our hypotheses force all the poles of $\phi(s)$ to be contained within a fixed vertical strip, and we assume that $\phi(s)$ has only finitely many poles.
This assumption will be necessary for the series in (7) to converge, and so we exclude (for example) Artin $L$-functions (unless the Artin conjecture is assumed).

• (Polar Data) We define

$S^0_\phi(X) := \frac{1}{2\pi i} \oint_{C_0} \phi(s)\, \frac{X^s}{s}\, ds = \sum_{\xi} R_\xi(\log X)\, X^{\xi}$,   (6)

where $C_0$ is any curve enclosing all the singularities of the integrand. In the latter sum over the poles $\xi$ of $\frac{\phi(s)}{s}$, $R_\xi(\log X)$ is a constant for each simple pole $\xi$, and is generally a polynomial of degree $\operatorname{ord}_\xi \frac{\phi(s)}{s} - 1$. We also define

$R_\phi(X) := \sum_{\xi} R^{\mathrm{abs}}_\xi(\log X)\, X^{\operatorname{Re} \xi}$,   (7)

where $R^{\mathrm{abs}}_\xi$ is the polynomial obtained from $R_\xi$ by taking absolute values of each of the coefficients.

• (Partial sums) We denote the partial sum by $A^0_\phi(X) := \sum_{\lambda_n \le X} a(n)$.

• (Bounds on partial sums) We require a bound on the partial sums of the coefficients of the dual zeta function, which we take to be of the form

$\sum_{\frac{Z}{2} \le \mu_n \le Z} |b(n)| \le B_\psi(Z) := C_\psi Z^{r} (\log Z)^{r'} + C'_\psi$   (8)

for some $C_\psi, C'_\psi > 0$, $r' \ge 0$ and $r > \frac{\delta}{2} + \frac{1}{4A}$. (We assume $r > \frac{\delta}{2} + \frac{1}{4A}$ for technical reasons; see (24).) For simplicity, we will require this bound simultaneously for all $Z$ for which the sum in (8) is nonempty, but see Section 2.4 for a refined version.

With these notations, we prove the following theorem.

Theorem 4. With the above, we have

for every $\eta \ge -\frac{1}{2A}$, and where

Moreover, if $a(n) \ge 0$ for all $n$, then the sum over $|a(n)|$ may be omitted, so that we have simply

Throughout, and in particular in (10) and (12), the implicit constants depend on: the parameter $\eta$, the functional equation (i.e. on $\delta$, $N$, $\alpha_\nu$, and $\beta_\nu$), and on the regions in which $\phi$ and $\psi$ converge absolutely — but not on other data associated to $\phi$ or $\psi$.

This is a variation of Theorem 4.1 in [CN62], with two modifications. First of all, we track the dependence of the error terms on growth estimates for the individual Dirichlet series $\phi$ and $\psi$. Secondly, the bound (8) takes the place of a constant $\beta$ used there, avoiding additional factors of $X^{\epsilon}$ appearing in the error terms in [CN62]. This is not necessarily the only way to do so; indeed, as J. Thorner suggested to the authors, a plausible alternative approach is to choose $\epsilon = o_X(1)$ depending explicitly on $X$.

Remark 5. The bound $\eta \ge -\frac{1}{2A}$ (equivalently, $y \le X$) is essential; without it, Landau's finite differencing method doesn't make sense and counterexamples to the theorem can be constructed. As is well known, one can at least obtain upper bounds by smoothing; for example, suppose that the $a(n)$ are nonnegative; then we have

Now shift the contour to the left of the critical strip, apply the functional equation, and bound the value of the dual zeta function.

Proof

We now prove Theorem 4. We defer the proofs of some technical lemmas until after the outline, to give a better proof outline. For each nonnegative integer $k$, we define the smoothed sums

$A^k_\phi(X) := \frac{1}{\Gamma(k+1)} \sum_{\lambda_n \le X} a(n)\, (X - \lambda_n)^k$.

These smoothed sums are sometimes called Riesz means. Typically, it becomes easier to study $A^k_\phi$ for large $k$. It is possible to recover asymptotics for the non-weighted sum $A^0_\phi(X)$ from asymptotics for $A^k_\phi(X)$ through Landau's "finite differencing method." Thus the goal is to understand $A^k_\phi(X)$ well.

Recall the notation $\int_{(c)} := \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty}$ for $c \in \mathbb{R}$. We recognize $A^k_\phi(X)$ through a classical integral transform (as in [LD17, §2], for example) as

$A^k_\phi(X) = \int_{(c)} \phi(s)\, \frac{\Gamma(s)}{\Gamma(s + k + 1)}\, X^{s+k}\, ds$,   (14)

where $c$ is large enough so that the Dirichlet series $\phi(s)$ and $\psi(s)$ converge absolutely for $\operatorname{Re} s \ge c$. (iii) We assume that $\frac{\delta}{2} + \frac{1}{4A} + \frac{k}{2A} > r$ (see (24)), and we impose a condition on the fractional part of $k$ (see (32)). (iv) We assume that $c \neq \delta + n$ for any integer $n$, so that the integrals (15) and (17) are well defined. As $k$ may be chosen depending only on 'the shape of the functional equation', implied constants in what follows will be allowed to depend on $k$.
After shifting the line of integration in (14) to $\operatorname{Re} s = \delta - c$, replacing $\phi(s)$ with $\psi(\delta - s)\, \Delta(\delta - s)/\Delta(s)$ through the functional equation (4), and performing the change of variables $s \to \delta - s$, we rewrite $A^k_\phi(X)$ as

where $C_k$ is a curve enclosing all the singularities of the integrand between $\operatorname{Re}(s) = \delta - c$ and $\operatorname{Re}(s) = c$. (Familiar bounds for the integrand, needed to justify convergence, are recalled in (28).) We separate the analytic portion of the shifted integral (15) and define

Then we can rewrite (15) as

In order to study $W_k(X)$, we will need the following properties of $I_k(X)$.

Lemma 6. Suppose that $k$ is large enough that the line $\operatorname{Re} s = c_k$ is to the right of all poles of $\Delta(s)/\Delta(\delta - s)$. Let $I^{(k)}_k$ denote the $k$th derivative of $I_k$. Then for $t \ge 1$, we have

As $t \to 0$, we have that

Proof. Proved in Section 2.3.

We are now ready to describe the finite differencing method, which we apply to (18). Define $\Delta_y F(X) := F(X + y) - F(X)$, so that the $k$th finite difference operator $\Delta^k_y$ is given by

$\Delta^k_y F(X) = \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} F(X + jy)$.

(See (35) for an alternative formula when $F$ is $k$ times differentiable.)

Lemma 7. With the same notation as above,

Additionally, recalling the definitions of $R_\phi$ and $S^k_\phi(X)$ from (7) and (16) respectively, we have for $y \ll X$ that

Proof. Proved in Section 2.3.

We apply $\Delta^k_y$ to (18). For the left hand side of (18), we see from above that

On the other side of (18), we get

Note that the finite difference is taken of $I_k(\mu_n X)$ as a function of $X$, not of $\mu_n X$. Using the properties of $I_k(X)$ as stated in Lemma 6, one can prove the following lemma.

Lemma 8. For $y \ll X$, we have

Proof. Proved in Section 2.3.

The first bound in (23) is superior to the second bound when $\mu_n \gg z := X^{2A-1}/y^{2A}$, so that we get the bound

where in the latter step we deviated from [CN62] by dividing the sums into dyadic intervals $[\frac{Z}{2}, Z]$, bounding the contribution of each by (8), and using (9) to sum the results. Our choice of $z$ equalizes the two terms in (25), so that the second of them may be omitted. Therefore, applying finite difference operators to (18) and inserting the bounds for the left hand side (21) and the right hand side (25), we see that

which is (10), after the change of variables $y = X^{1 - \frac{1}{2A} - \eta}$ for some $\eta \ge -\frac{1}{2A}$.

Suppose further now that $a(n) \ge 0$ for all $n$. Then, as noted in [CN62, eq. 4.15], $A^0_\phi(X)$ is monotone in $X$ and we have that

This may be proved using (35), even though $A^0_\phi(X)$ is not differentiable when $X$ is an integer. Using the inequalities (27) with (20) gives that

and estimating $\Delta^k_y W_k(X)$ as before we obtain (12) as an upper bound for $A^0_\phi(X) - S^0_\phi(X)$, and similarly as a lower bound for $A^0_\phi(X + ky) - S^0_\phi(X)$. Since $S^0_\phi(X + ky) - S^0_\phi(X) \ll \frac{y}{X} R_\phi(X)$, we obtain (12) as a lower bound for $A^0_\phi(X + ky) - S^0_\phi(X + ky)$, and correspondingly for $A^0_\phi(X) - S^0_\phi(X)$ after a suitable change of variables. This completes the proof of Theorem 4.

Proofs of Technical Lemmas

Proof of Lemma 6. Define

so that $I_k$ is an inverse Mellin transform of $G(s)$. We will show that $G(s)$ can be compared to a function $H(s)$, whose inverse Mellin transform can be explicitly evaluated in terms of $J$-Bessel functions. As a consequence of Stirling's approximation, one can show [CN62, 2.12] that for any $\alpha$,

$\log \Gamma(z + \alpha) = \left(z + \alpha - \tfrac{1}{2}\right) \log z - z + \tfrac{1}{2} \log 2\pi + O(|z|^{-1})$

as $|z| \to \infty$, uniformly in regions $|\arg z| < \pi - \delta$ for any fixed $\delta > 0$.
Using this expression on $G(s)$, one can show that

uniformly on any fixed vertical strip, and further that

We therefore have

where we define $H(s)$ to be

$H(s) = \frac{\Gamma(As + \mu)}{\Gamma(\lambda - As)}\, e^{B + \Theta s}$,   (31)

and we note that it follows from (29) that

Suppose first that $t \ge 1$. For the second term in (30), we shift the line of integration to $\operatorname{Re} s = c + \frac{1}{2A}$. Our assumption (iii) on $k$ implies that we do not pass through any poles, and the shifted integral converges absolutely by (28), so that

For the first term in (30), we recognize it as a $J$-Bessel function [Wat95] (33), for a positive constant $A_1$ and where $\tilde{t} = t e^{-\Theta}$ is a linear change of variables. Using the classical bound $J_\nu(x) \ll x^{-1/2}$ (as in [CN62, (2.12)] or [Wat95]), we see that (33), and hence also (17), is bounded by

As $t \to 0$, the bound $I_k(t) \ll t^{\frac{\delta}{2} + \frac{2A-1}{2A} k + \epsilon}$ follows from immediately bounding the integrand in (17) absolutely. These prove the two bounds for $I_k(t)$.

We now prove the corresponding bounds for $I^{(k)}_k(t)$. The argument is largely the same as above. With $c_0 = \frac{\delta}{2} - \epsilon$ (which is $c_k$ when $k = 0$), define a contour $C'$ as follows: from $c_0 - i\infty$ up to $c_0 - iR$, right to $c_0 + r - iR$, up to $c_0 + r + iR$, left to $c_0 + iR$, up to $c_0 + i\infty$. The parameters $r$ and $R$ are chosen as large as necessary so that passing the contour from the line $\operatorname{Re} s = c_k$ to $C'$ does not cross any poles. Thus shifting the contour, and differentiating under the integral sign, we have

and $h(s)$ is defined as in $H(s)$ (in (31)), but with $k = 0$ in the parameter $\lambda$. As before

The second integral is bounded analogously to the integral of $G(s) - H(s)$ above, by shifting to the right, giving for $t \to 0$

The first integral can similarly be explicitly evaluated in terms of the $J$-Bessel function. Elementary manipulations as above show

Finally, we obtain the corresponding bounds for $I^{(k)}_k(t)$.

Proof of Lemma 7. Applying the finite differencing operator $\Delta^k_y$ directly to $A^k_\phi(X)$ gives that

We have used the explicit evaluation $\Delta^k_y (X - \lambda_n)^k = y^k\, \Gamma(k+1)$ to simplify this expression; for a $k$-times differentiable function $F$, one can use induction on $k$ to show that

$\Delta^k_y F(X) = \int_0^y \cdots \int_0^y F^{(k)}(X + t_1 + \cdots + t_k)\, dt_1 \cdots dt_k$.   (35)

We also use (35) to prove (20): since the $k$th derivative of $S^k_\phi(X)$ is exactly $S^0_\phi(X)$ (for any $c$ satisfying the listed hypotheses), we then have that

The result then follows by writing $S^0_\phi(t)$ in terms of the residues of $\phi(s)$, as in (6), and substituting into (36).

Proof of Lemma 8. For $y \ll X$, we have the trivial inequality, using only the definition of the finite differencing operator $\Delta^k_y$,

For the second bound, we use (35) to see that

where we have trivially bounded the iterated integrals in the last inequality. In both cases the lemma now follows from the bounds of Lemma 6.

Restricting the range of the partial sum estimate

In (9) we assumed a bound of the shape

for all $Z$ simultaneously. Here, as a refinement of our main theorem, we argue that this is only required when $Z$ is 'approximately' bounded by the parameter $z$ of (11). More specifically, suppose for some $C_1 > 0$ that (9) holds simultaneously for all $Z \le z X^{C_1}$, and for $Z > z X^{C_1}$ assume only a (very weak) bound of the shape

for any constant $C_2$. Then, Theorem 4 still holds, with the implied constant in (12) now depending additionally on $C_1$ and $C_2$. The proof is immediate: in (24), break the sum over $\mu_n > z$ into the ranges $z < \mu_n \le z X^{C_1}$ and $\mu_n > z X^{C_1}$.
The smaller range is estimated as before; for the larger range, choose the parameter $k$ large enough (depending on $C_1$ and $C_2$) so that the bound (38) is enough to guarantee that the contribution is bounded above by that of the smaller range. We refer to [BTT] for an application where this additional flexibility is required.

A simpler version: Proof of Theorem 2

For the reader's convenience, we give the (brief!) explanation of how Theorem 2 follows immediately from Theorem 4. Other variations can be proved in the same way. We assumed that $\phi(s)$ has a 'well behaved' functional equation. To make this precise, consider the following special case of the conditions described in Section 2.1: Assume that $\delta = 1$, so that the functional equation relates $s$ to $1 - s$. We assume that each $\alpha_\nu$ in (45) equals $\frac{1}{2}$, so that $d = N = 2A$ is the usual degree of the zeta function. We also assume that both $\phi$ and $\psi$ are holomorphic away from simple poles at $s = 1$. If $\psi$ has nonnegative coefficients, then this implies that there exists a positive constant $\widetilde{\delta}_1$ for which we may take $B_\psi(Z) = \widetilde{\delta}_1 Z$ in (8); in any case, we assume that such a $\widetilde{\delta}_1$ exists.

By definition, we have

By the functional equation we have $\phi(0) \ll |\operatorname{Res}_{s=1} \psi(s)|$, and

with the last sum over all dyadic intervals $[Z, 2Z]$ on which the $\mu_n$ are supported. Writing $Z_{\min}$ for the smallest value of $\mu_n$, this last quantity is bounded by

Applying Theorem 4, we thus obtain

We equalize error terms by choosing $\eta$ so that $\widetilde{\delta}_1 X^{\frac{1}{2} - \frac{1}{2d}} = \delta_1 X^{\eta} (X^{d\eta})^{\frac{1}{2} - \frac{1}{2d}}$, so that the error is as claimed in Theorem 2; the condition $\eta \ge -\frac{1}{2A}$ is equivalent to our demand that the error term be bounded by the main term.

Remark 9. We also have the following averaged version of Theorem 2. Suppose that $\{\phi_i\}_{i=1}^{n}$ is a family of zeta functions, with functional equations satisfying all of the hypotheses above for the same function $\Delta$. Then, we have

if the right hand side is bounded by the main term $\sum_{i=1}^{n} \operatorname{Res}_{s=1} \phi_i(s)\, X$. (In the above, the notation $a_i(n)$, $\lambda_{n,i}$, $\delta_{1,i}$, $\widetilde{\delta}_{1,i}$ refers to the quantities $a(n)$, $\lambda_n$, $\delta_1$, and $\widetilde{\delta}_1$ associated to each $\phi_i$.) The proof is immediate: in (41), choose a single $\eta$ to equalize the cumulative error terms, rather than choosing an $\eta_i$ for each $\phi_i$. Although (42) follows immediately from Hölder's inequality and Theorem 2, the above proof establishes that it is enough to assume that the error term in (42) is bounded by the main term on average, as opposed to individually for each $\phi_i$.

Ideals in number fields: Proof of Theorem 3

The proof follows immediately from Theorem 4 upon recalling the properties of the associated Dedekind zeta function. Recall (e.g. from [IK04, Chapter 5.10]) that if $K/\mathbb{Q}$ is a number field of degree $d$, then its Dedekind zeta function $\zeta_K(s) = \sum_{\mathfrak{a}} (N\mathfrak{a})^{-s}$ satisfies the functional equation

where $r_1$ is the number of real embeddings of $K$ and $r_2$ the number of pairs of complex conjugate embeddings (so that $d = r_1 + 2r_2$), $q := |\operatorname{Disc}(K)|$, and

The zeta function $\zeta_K(s)$ is entire, away from a simple pole at $s = 1$ with residue

$\operatorname{Res}_{s=1} \zeta_K(s) = \frac{2^{r_1} (2\pi)^{r_2}\, h R}{w \sqrt{q}}$,

where $w$ is the number of roots of unity in $K$, $h$ is the class number of $K$, $R$ is the regulator of $K$, and where the upper bound used is that of [Lou01, Theorem 1].
We have $\zeta_K(0) \ll q^{1/2} (\log q)^{d-1}$ by (44) and (47) (and indeed $\zeta_K(0) = 0$ if $K$ is not imaginary quadratic), and we apply Theorem 4 with

We have $\zeta_K(s) \le \zeta(s)^d = \sum_n d_d(n)\, n^{-s}$ coefficientwise, and

so that we may take $B_\psi(Z) = Z q^{1/2} (\log(Zq))^{d-1}$ to conclude that

We choose $X^{\eta} = X^{\frac{d-1}{d(d+1)}}\, q^{-\frac{1}{d+1}}$; formally, this is equivalent to applying Theorem 2 with $\delta_1 \ll (\log q)^{d-1}$ and $\widetilde{\delta}_1 \ll q^{1/2} (\log qX)^{d-1}$. (We may not literally apply Theorem 2 as stated because this $\widetilde{\delta}_1$ depends on $X$.) We also note that $\log(q X^{d\eta}) \ll d \log(X)$ whenever $q \le X$ (and if $q > X$, our conclusion does not beat the trivial bound (48)).

Background on Epstein zeta functions

We assemble some background material on Epstein zeta functions which will be needed in the proof. Epstein's original paper is [Eps03]; our formulation of his results can be found (for example) in [BBS14], but to our knowledge the only reference for the proofs is Epstein's original work. We also refer to [Cas97] for a good reference on lattices and the geometry of numbers.

If $\Lambda \subseteq \mathbb{R}^d$ is a rank $d$ lattice, then we choose a matrix $L \in \mathrm{GL}_d(\mathbb{R})$ for which $\Lambda = \{Lx : x \in \mathbb{Z}^d\}$, and define $\det \Lambda = |\det L|$. ($L$ is not uniquely defined, but $\det \Lambda$, $\Lambda^*$, and $\zeta(s, \Lambda)$ will be.) We define the dual lattice $\Lambda^*$ to be the set of all vectors $u \in \mathbb{R}^d$ such that $u^T v \in \mathbb{Z}$ for every $v \in \Lambda$. It is easy to show that $\Lambda^*$ is actually a lattice of rank $d$, and in fact it is given by $\Lambda^* = \{(L^T)^{-1} x : x \in \mathbb{Z}^d\}$. Thus $\Lambda$ is also the dual lattice of $\Lambda^*$, and $\det \Lambda \cdot \det \Lambda^* = 1$.

The function $v \mapsto |v|^2$ is a positive definite quadratic form on $\Lambda$: if $v = Lx$ where $x \in \mathbb{Z}^d$, then $|v|^2 = Lx \cdot Lx = x^T (L^T L) x$. Writing $Q = L^T L$ for the matrix associated to this quadratic form, we have $|v|^2 = Q[x] := x^T Q x$ and $\det Q = \det(L^T L) = (\det \Lambda)^2$. Then the Epstein zeta function associated to $\Lambda$ (or to $Q$) is defined by the Dirichlet series

$\zeta(s, \Lambda) := \sum_{0 \neq v \in \Lambda} |v|^{-2s} = \sum_{0 \neq x \in \mathbb{Z}^d} Q[x]^{-s}$.

It converges absolutely for $\operatorname{Re}(s) > \frac{d}{2}$, has analytic continuation to $\mathbb{C}$ apart from a simple pole at $s = \frac{d}{2}$, and satisfies the functional equation

Lemma 10. For any complete lattice $\Lambda \subseteq \mathbb{R}^d$, let $\lambda_1(\Lambda)$ denote the length of the shortest nontrivial vector in $\Lambda$. The number of lattice points in $\Lambda$ satisfying $|v| \le X$ is bounded by

Therefore, in the notation above, $C'_{\zeta(\cdot, \Lambda)}$ can be taken as

for some absolute constant $c_d$ depending only on the dimension $d$.

Remark 11. Observe that, owing to the shape of the functional equation of the Epstein zeta function, the proof of Theorem 1 requires as input a simpler but similar statement.

Lemma 12. Suppose $\Lambda$ is any rank $d$ lattice in $\mathbb{R}^d$, and let $\Lambda^*$ denote its dual lattice. Let $r_{\mathrm{bas}}(\Lambda^*)$ denote the infimum of all $r \in \mathbb{R}^+$ such that the ball $B(r)$ contains a basis for $\Lambda^*$. Then $\lambda_1(\Lambda) \cdot r_{\mathrm{bas}}(\Lambda^*) \ge 1$.

Proof. Recall the definition of the dual lattice, $\Lambda^* := \{w \in \mathbb{R}^d : \forall v \in \Lambda,\ \langle v, w \rangle \in \mathbb{Z}\}$. Let $v \in \Lambda$ be of minimal length, so that $|v| = \lambda_1(\Lambda)$. Suppose $w_1, \ldots, w_d$ is a set of $d$ linearly independent elements in $\Lambda^*$ fitting within a ball $B(r)$. Then there exists $i$ such that $\langle w_i, v \rangle \neq 0$. Then by the definition of $\Lambda^*$ above, we have $\langle w_i, v \rangle \in \mathbb{Z}$, so $|\langle w_i, v \rangle| \ge 1$, and thus $|w_i| |v| \ge 1$. Since $|w_i| \le r$ and $r$ can be taken arbitrarily close to $r_{\mathrm{bas}}(\Lambda^*)$, we conclude $\lambda_1(\Lambda) \cdot r_{\mathrm{bas}}(\Lambda^*) \ge 1$.

By Lemmas 10 and 12, we can take

and (57) is true if $r_{\mathrm{bas}}(\Lambda) \ll_d R$. We may then allow $r_{\mathrm{bas}} < R$ by multiplying $R^{\eta}$ by a factor that is $O_d(1)$, which multiplies the error term in (53) by another (harmless) factor of $O_d(1)$.
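As a closing numerical illustration of Lemma 12 (a sketch only, with an arbitrary example basis), one can compute the dual basis $(L^T)^{-1}$, estimate $\lambda_1(\Lambda)$ by naive enumeration, and check that its product with an upper bound for $r_{\mathrm{bas}}(\Lambda^*)$ is at least 1, as the lemma guarantees:

```python
# Sanity check of Lemma 12: lambda_1(Lambda) * r_bas(Lambda*) >= 1.
# Columns of L generate Lambda; columns of (L^T)^{-1} generate Lambda*.
import itertools
import numpy as np

def shortest_vector_length(B, box=10):
    """Naive lambda_1: scan small integer combinations of the columns of B."""
    best = float("inf")
    for x in itertools.product(range(-box, box + 1), repeat=B.shape[1]):
        if any(x):
            best = min(best, np.linalg.norm(B @ np.array(x)))
    return best

L = np.array([[2.0, 0.5],
              [0.0, 0.8]])
L_dual = np.linalg.inv(L.T)          # columns generate the dual lattice

lam1 = shortest_vector_length(L)
# r_bas(Lambda*) is at most the length of the longest vector in any single
# basis; the dual of the chosen basis is one admissible basis.
r_bas_upper = max(np.linalg.norm(L_dual[:, j]) for j in range(2))

# Using an upper bound for r_bas only strengthens the check.
print(lam1 * r_bas_upper >= 1.0)     # True, as Lemma 12 predicts
```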
Analyses of land cover change of Singkil Swamp Wildlife Reserve in the last 20 years

Singkil Swamp Wildlife Reserve (SSWR) is a remaining tropical coastal forested wetland on the west coast of Sumatra. The conservation area is facing habitat loss and destruction due to illegal activities, such as logging and land conversion. However, no data have been available on land cover change in this important habitat of the critically endangered great ape, the Sumatran orangutan (Pongo abelii). This study aimed to determine the rate of change in land cover of the SSWR and to analyze the factors causing these changes over the last 20 years using Landsat imagery. Land cover classification was carried out using supervised classification with a maximum likelihood approach. The results showed that the primary swamp forest area of the SSWR continued to experience a significant decline every year during the period 1998-2018; around 24% of the primary swamp forest was lost in that period due to deforestation and degradation. Therefore, the degraded forests of the SSWR should immediately be rehabilitated to restore their ecological functions and increase their productivity, alongside education programs to increase awareness, coupled with forest monitoring and law enforcement to reduce illegal activities. Finally, appropriate management should be applied through co-management between government and local communities in conserving this important conservation area.

Introduction

Over time, the vegetation cover of forest areas, including conservation forests, has experienced rapid and dynamic changes in line with development. Development that continues to increase along with population growth and rising necessities of life causes increasing physical pressure on conservation areas. The Singkil Swamp Wildlife Reserve (SSWR), as one of these conservation areas, is currently not immune to change due to land conversion and land use change. At the beginning of the area's designation in 1998, the SSWR had an area of 102,500 ha, and since 2015 it has decreased to 81,802 ha. This area is representative of coastal forested wetland ecosystems in lowland tropical rain forests and is part of the Leuser Ecosystem. The SSWR is a major habitat for globally protected and endangered wildlife, including the Sumatran orangutan (Pongo abelii) [1-4]. Approximately 50% of Sumatran orangutan habitat is within conservation areas that are managed directly by the Ministry of Environment and Forestry, and 78% of Sumatran orangutan habitat is within the Leuser Ecosystem [5], both inside and outside of conservation areas.

Forest loss that continues to occur in conservation areas, especially in the peat swamps of the SSWR, increasingly threatens the lives of wildlife, including Sumatran orangutans. According to Wich et al. [6], Sumatran orangutans are facing extinction mainly due to habitat loss and fragmentation. Most of the forests that function as Sumatran orangutan habitat have been converted into agricultural land and other uses. However, to date there have been no data on changes in land cover in the SSWR since it was designated as a wildlife reserve. Therefore, this study aimed to determine the rate of change in land cover of the SSWR and to analyze the factors causing these changes.

Research site and period

This study was carried out from April to July 2019.
The study site covers the SSWR area, comprising 81,802 ha located in the southwest of Aceh Province (Figure 1), distributed across three districts/city, namely South Aceh District, Aceh Singkil District, and Subulussalam City. Astronomically, the area is situated at 2º15'12"-2º45'25" N and 97º30'28"-97º45'25" E.

Data collection and analyses

The main tools/equipment used in collecting data were a GPS, camera, compass, and stationery. The data analysis tools were ArcGIS 10.1, ENVI 4.7, ERDAS Imagine 8.5, and MS Excel. Ground truthing was conducted in April 2019 to collect training data. In this study, land cover classification was carried out using supervised classification with a maximum likelihood approach on Landsat images from 1998, 2007, 2014, and 2018. This approach is among the most effective classification methods when equipped with accurate training data and is one of the most widely used algorithms [7]. Supervised classification is a classification process of selecting the desired information categories and selecting training areas (sample areas) to determine each land cover category as an interpretation key [8]. The training areas were checked for plausibility against the Landsat imagery as a ground-check tool. Subsequently, we determined and selected training area locations to collect statistical information on the land cover types [9].

Results and Discussions

Based on the interpretation of Landsat imagery from 1998, 2007, 2014, and 2018, the land cover of the SSWR area was classified into 6 classes, namely water bodies, primary peat swamp forest, secondary peat swamp forest, bare land, plantations, and shrubby swamp. Accuracy test results showed an overall accuracy of 89.83% and a kappa accuracy of 87.41%. This shows that the image classification results can be used, because the accuracy test value is more than 80% [10]. Figure 2 presents the land cover maps of the SSWR region produced from the image interpretations over the past 20 years (1998-2018). Subsequently, the areas of land cover change in that period are presented in Figure 3.

The decline in primary swamp forest is inversely proportional to the change in secondary swamp forest, or logged-over forest. Secondary swamp forest in the period 1998-2018 experienced an increase of about 15%, from a total area of 7,291.55 ha in 1998 to 19,761.18 ha in 2018. Like secondary swamp forest, a similar trend also occurred in shrubby swamp, which increased by about 8%, from a total area of 3,894.78 ha in 1998 to 10,622.26 ha in 2018. Other land cover types, such as water bodies, plantations, and bare land, did not experience significant changes in that period. According to Zulkarnain and Widayati [11], after disturbances those forests are left to undergo natural succession. Overall, by 2018 SSWR forest cover had changed by around 8.9% relative to the forest cover in 1998 (Figure 4).

Based on Figure 5, the largest change in the SSWR's primary swamp forest area occurred in the period 2014-2018. During that period the forest area experienced degradation at an average rate of 2,160 ha per year and also experienced deforestation at an average rate of 779 ha per year. On the other hand, this was not accompanied by adequate reforestation, which amounted to only around 4 ha per year, the smallest of any period (Figure 5). This shows that in that period the SSWR forest area experienced degradation and decreased forest quality with very limited improvement.
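For readers unfamiliar with the maximum likelihood approach used here, the sketch below illustrates its core computation: each land cover class is modelled as a multivariate Gaussian fitted to training pixels, and every pixel is assigned to the class with the highest log-likelihood. This is only an illustrative stand-in for the ENVI/ERDAS workflow actually used; all band values and class names below are invented.

```python
# Minimal Gaussian maximum-likelihood classifier for land cover mapping.
import numpy as np

def fit_class(pixels):
    """Mean vector and covariance matrix of a class's training pixels."""
    return pixels.mean(axis=0), np.cov(pixels, rowvar=False)

def log_likelihood(x, mu, cov):
    """Gaussian log-density up to a constant (enough for argmax)."""
    diff = x - mu
    return -0.5 * (np.log(np.linalg.det(cov)) + diff @ np.linalg.inv(cov) @ diff)

rng = np.random.default_rng(0)
training = {  # 50 synthetic training pixels per class, 4 spectral bands each
    "primary swamp forest":   rng.normal([40, 60, 30, 90], 5, (50, 4)),
    "secondary swamp forest": rng.normal([55, 70, 45, 70], 5, (50, 4)),
    "shrubby swamp":          rng.normal([80, 85, 70, 50], 5, (50, 4)),
}
models = {name: fit_class(px) for name, px in training.items()}

pixel = np.array([52.0, 68.0, 44.0, 72.0])  # one pixel to classify
scores = {name: log_likelihood(pixel, mu, cov)
          for name, (mu, cov) in models.items()}
print(max(scores, key=scores.get))  # -> most likely land cover class
```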
Forests or peat swamps that have been degraded as a result of illegal logging, encroachment, forest fires, and other causes should immediately be rehabilitated to restore their ecological functions and to increase their productivity so that, in the end, the function of the ecosystem can recover. Forest degradation and deforestation generally occur at the forest edge, mostly on the border between the SSWR area and the villages (Figure 6). This is due to a lack of awareness and responsibility in opening up land in this area, coupled with deliberate illegal logging activities. To reduce illegal activities, forest monitoring and law enforcement are key and must be optimally strengthened. Meanwhile, alternative livelihood options for the welfare of the communities around the forest must also be promoted to reduce disturbance to forests and conservation areas [11].

Figure 6. Land cover change of Singkil Swamp Wildlife Reserve during the three analysis periods.

Conclusion and Recommendation

In the last 20 years, the forest cover of the SSWR has continued to experience a significant decline every year due to illegal activities, mainly forest conversion and logging. This condition threatens the critically endangered Sumatran orangutan and other wildlife through the loss and degradation of their habitat. This disturbance also causes decreasing ecosystem functions and services. Therefore, education programs to increase awareness need to be carried out, and alternative livelihood options for the welfare of the communities around the forest should also be promoted to reduce disturbance to forests and conservation areas. On the other hand, forest monitoring and law enforcement are key and must be optimally strengthened to reduce illegal activities. Finally, the degraded forests of the SSWR should immediately be rehabilitated to restore their ecological functions and to increase their productivity.
Quantitative Logics for Equivalence of Effectful Programs

In order to reason about effects, we can define quantitative formulas to describe behavioural aspects of effectful programs. These formulas can for example express probabilities that (or sets of correct starting states for which) a program satisfies a property. Fundamental to this approach is the notion of quantitative modality, which is used to lift a property on values to a property on computations. Taking all formulas together, we say that two terms are equivalent if they satisfy all formulas to the same quantitative degree. Under sufficient conditions on the quantitative modalities, this equivalence is equal to a notion of Abramsky's applicative bisimilarity, and is moreover a congruence. We investigate these results in the context of Levy's call-by-push-value with general recursion and algebraic effects. In particular, the results apply to (combinations of) nondeterministic choice, probabilistic choice, and global store.

Introduction

There are many notions of program equivalence for languages with effects. In this paper, we explore the notion of behavioural equivalence, which states that programs may be considered behaviourally equivalent if they satisfy the same behavioural properties. This can be made rigorous by defining a logic, where each formula φ denotes a certain behavioural property. We write (P̣ |= φ) to express the satisfaction of formula φ by term P̣, which is usually given by a Boolean truth value (true or false). Two terms P̣ and Ṛ are said to be behaviourally equivalent if they satisfy the same formulas. Such an approach is taken in, for example, [9]. In particular, we use this method to define equivalence for a language with algebraic effects in the sense of Plotkin and Power [26].

Effects can be seen as aspects of computation which involve interaction with the world 'outside' the environment in which the program runs. They include: exceptions, nondeterminism, probabilistic choice, global store, input/output, cost, etc. The examples given have common ground in the work of Moggi [21], and can moreover be expressed by specific effect-triggering operations, making them 'algebraic' in nature. In the presence of such algebraic effects, computation terms need not simply reduce to a single terminal term (that is, a value): they may also invoke effects on the way. Following [26,13], we consider a computation term to evaluate to an effect tree, whose nodes are effect operators and whose leaves are terminal terms. The paper [28] introduced modalities that lift Boolean properties of values to Boolean properties of the trees modelling their computations. See [23,22,27] for alternative ways in which logics can be used to describe properties of effects.

The use of a Boolean logic does, however, not readily adapt to several examples of effects, for example the combination of probability and nondeterminism. The literature on compositional program verification shows the usefulness of quantitative (e.g. real-number valued) program logics for verifying programs with probabilistic behaviour, possibly in combination with nondeterminism [14,20]. The paper [28] develops a general Boolean-valued framework which, although featuring many examples, does not apply to the combination of probability and nondeterminism. This paper provides a general framework for quantitative logics for expressing behavioural properties of programs with effects, generalising the Boolean-valued framework from [28].
We consider a quantitative (quantity-valued) satisfaction relation '|=', where (P̣ |= φ) is given by an element from a quantitative truth space A (a degree of satisfaction). This allows us to ask open questions about programs, like "What is the probability that ..." or "What are the correct global starting states for ...". We define equivalence by stating that programs P̣ and Ṛ are equivalent if for any formula φ we have (P̣ |= φ) = (Ṛ |= φ) (P̣ satisfies φ precisely as much as Ṛ does).

A key feature of the logic is the use of quantitative modalities to lift quantitative properties on value types to quantitative properties on computation types. As in [28], we are able to establish that the behavioural equivalence defined as above is a congruence, as long as suitable properties of the quantitative modalities are satisfied. These properties require notions of monotonicity, continuity, and a notion of preservation over sequencing called decomposability. As in [28], the congruence is established by proving that, given one of the properties (leaf-monotonicity), our behavioural equivalence is equal to an effect-sensitive notion of Abramsky's applicative bisimilarity [1,3]. Given further properties of the modalities, this relation can be proven to be compatible using Howe's method [11].

The main contribution of this paper is the generalisation of [28], and the corresponding generalised results. This goes through smoothly, though there are some subtleties, like what to take as primitive in a quantitative setting. In particular, we will see the necessity of a threshold operation. The other main contributions are the examples illustrating the quantitative approach. Some examples, such as the combination of nondeterminism with probabilistic choice, or with global store, do not fit into the Boolean-valued framework of [28], but do work here¹. But there are also examples, such as probability, global store, and cost, whose treatment is more natural in our quantitative setting, even though they also fit in the framework of [28].

As a vehicle of our investigation we use Levy's call-by-push-value (CBPV) [17,16], together with general recursion and the aforementioned algebraic effects. As such, it generalises [28] in a second way, by using call-by-push-value to incorporate both call-by-name (CBN) and call-by-value (CBV) evaluation strategies. This is significant, since once either divergence or effects are present, the distinction between the reduction strategies becomes vital. For example, if we take some probabilistic choice por signifying a fair coin flip, we have that 'por(λx . 0, λx . 1) ≡ λx . por(0, 1)' holds in CBN, but not in CBV. So it is interesting to consider CBPV, as it expresses both these behaviours. The distinction is expressed in the difference between producer types FA, where one explicitly observes effects, and types like A → C, where the observation of effects is postponed to a later moment. As such, this language is an ideal backdrop for studying effects.

In Section 2 we give the operational semantics of the language, starting with the effect-free version and working towards our treatment of algebraic effects. In Section 3 we present our quantitative logic, introducing quantitative modalities to deal with the observation of effects.
In Section 5 we relate this equivalence to applicative (bi)similarities by defining a relator using our modalities. This then allows us to adapt a Howe's method proof of compatibility from [3,28] for this equivalence. We finish in Section 6 with some discussions. Operational semantics We use a simply-typed call-by-push-value functional language as in [16,17], together with general recursion and a ground type for natural numbers, making it a call-by-push-value variant of PCF [24]. To this, we add algebraic-effect-triggering operators in the sense of Plotkin and Power [26]. We first focus on the effect-free part of the language, as we want to consider effects independently of the underlying language. The language We give a brief overview of the language and its semantics. The types are divided into two flavours, Value types and Computation types. Value types contain value terms that are passive, they don't compute anything on their own. Computation types contain computation terms which are active, which means they either return something to or ask something of the environment. Value types A, B and computation types C, D are given by: where I is any finite indexing set. By asserting finiteness of I in the case of product types, the number of program terms is kept countable (a property which will have benefits later on in the formulation of the logic). The type U C is a thunk type, which consists of terms which are frozen. These terms were initially computation terms but are made inactive by packaging them into a thunk. The type N is the type of natural numbers, containing the non-negative integers. With this type, we can program any computable function on the natural Figure 1: Typing rules numbers as in PCF [24]. The type FA is a producer type, which actively evaluates and returns values of type A to the current environment. As was stated, this is the type at which we can observe effects. The type A → C is a type of functions, which is a computation type since its terms are actively awaiting input. We have a countably-infinite collection of term variables x, and term contexts: Γ ::= ∅ | Γ, x : A. Note that contexts only contain Value types, meaning that like in call-by-value, we can only ever substitute value terms. This is no loss of generality, as we can simulate substituting computation terms by packaging them into a thunk. The terms of the language are as follows: Value terms: V, W :: Computation terms: M , N : We underline terms (M ) and types (C) when they are computation terms and computation types respectively. We will also use E ... , F ... and P ... , R ... to denote general types and their terms, e.g. they could be either value or computation types/terms. Following [16], their typing rules are given in Fig. 1. We distinguish two typing judgements, ⊢ v and ⊢ c , for value and computation terms respectively. We write Terms(E ... ) for the set of closed terms of type E ... . Note the addition of the fixpoint operator fix(−) which has been added to allow for general recursion and hence divergence. We write n : N for the numeral representing the n-th natural number. Semantics We give the semantics of this language by specifying a reduction strategy for computation terms in the style of a CK-machine [5]. We distinguish a special class of computation terms, called terminal terms, which will not reduce further. They consist of: return(V ) : FA, λx : A . M : A → C, and M i | i ∈ I : Π i∈I M i . We first give the rules for terms we can directly reduce. 
We denote these using a one-step reduction relation: …

The behaviour of the remaining non-terminal computation terms (M to x . N, M · V and M · i) is implemented using a system of stacks, defined recursively:

S, Z ::= ε | S ∘ F,    where F is a frame of one of the forms (−) to x . N, (−) · V, or (−) · i.

We write S{M} for the computation resulting from applying S to M, which can be seen as evaluating the program M within the environment S: ε{M} = M and (S ∘ F){M} = S{F[M]}, e.g. (S ∘ ((−) · i)){M} = S{M · i}. Whenever one encounters a computation of which one first needs to evaluate a subterm, one unfolds the continuation into the stack and focusses on evaluating that subterm. This method is given by the stack reduction relation in the following way: …

Adding algebraic effect operators

We add algebraic effects in the style of [13], given by specific effect operators. We use a type variable α for computation types. Effects are given by operators of arities α^n → α (for finite n) or α^N → α (for countably infinite arity), as in [13,25]. For each effect under consideration, we bundle together the effect operators pertaining to that effect in a set called an effect signature Σ. Given such a signature, new computation terms can be constructed according to the typing rules in Fig. 2.

Example 1 (Probabilistic choice). Probabilistic choice is given by the signature Σ_p := {p-or : α² → α}, where p-or(M, N) continues as either M or N, each with probability 1/2.

Example 2 (Global store). Given a set of locations L, global store is given by the signature Σ_g := {lookup_l : α^N → α | l ∈ L} ∪ {update_{l,m} : α → α | l ∈ L, m ∈ N}, where lookup_l(M_0, M_1, ...) continues as M_n when the number stored at location l is n, and update_{l,m}(M) stores m at location l and continues as M.

Example 3 (Probabilistic choice and global store). We will also consider the combination of the previous two examples, probabilistic choice with global store, given by the effect signature Σ_pg := Σ_p ∪ Σ_g.

Example 4 (Cost). If we want to keep track of the cost of an evaluation, we take the signature Σ_c := {cost_c : α → α | c ∈ C}, where C is a countable set of real-valued costs. The computation cost_c(M) assigns a cost of c to the evaluation of M. This cost can represent a time delay or some other resource.

Example 5 (Combinations with nondeterminism). We consider a binary operator nor : α² → α for nondeterministic choice, which, contrary to probabilistic choice, is entirely unpredictable. One interpretation is to consider it under the control of some external agent or scheduler (e.g. a compiler), which one may wish to model as being cooperative (angelic), antagonistic (demonic), or neutral. We will consider nondeterminism and its operator in combination with any one of the previous examples. The resulting signatures are named Σ_pn, Σ_gn, Σ_gpn, and Σ_cn respectively.

Example 6 (Combinations with error). Lastly, given some set of error messages E, we consider adding error-raising effect operators {raise_e : α⁰ → α | e ∈ E} to the language, where raise_e() stops the evaluation of a term and displays the message e. No continuation is possible afterwards.

In the presence of such effects, the evaluation of a computation term might halt when encountering an algebraic effect operator. We broaden the semantics: a computation term now evaluates to an effect tree, a coinductively generated term built using operations from our effect signature Σ, together with terminal terms and a symbol ⊥ for divergence. This idea appears in [26], but here we adapt the formulation from [13] to call-by-push-value. We define the notion of an effect tree over any set X, where X can be thought of as a set of terminal terms.

Definition 2.1.
1. An effect tree (henceforth tree) over a set X, determined by a signature Σ of effect operations, is a labelled and possibly infinitely deep tree whose nodes have the following possible forms: a leaf node labelled ⊥; a leaf node labelled x, where x ∈ X; or an operation node labelled op ∈ Σ, with one child tree for each argument of op.

The set of trees over set X and signature Σ is denoted T_Σ(X). We can equip this set with a partial order ≤, where t ≤ r if t can be constructed from r by pruning (possibly infinitely many) subtrees and labelling the pruning points with ⊥. (A minimal executable sketch of such trees is given below.)
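To make the tree structure concrete, the following is a minimal executable sketch (in Python, which we use for all illustrative snippets in this section) of finite effect trees over a stand-in signature, together with the unit η and flattening map µ introduced next, and the pruning order ≤. Everything here is our illustration, not code from the paper: only finite trees are represented, and the operation names are placeholders.

# Finite effect trees over a set X: leaves Leaf(x), the divergence leaf Bot,
# and operation nodes Op(name, children) for operators from a signature.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Leaf:
    value: object          # a leaf labelled x, for x in X

@dataclass(frozen=True)
class Bot:
    pass                   # the divergence leaf, written ⊥ in the text

@dataclass(frozen=True)
class Op:
    name: str              # an operation node op(t_1, ..., t_n), op in Sigma
    children: tuple

Tree = Union[Leaf, Bot, Op]

def eta(x) -> Tree:
    """eta(x): the tree consisting of a single leaf labelled x."""
    return Leaf(x)

def mu(t: Tree) -> Tree:
    """mu: flatten a tree whose leaves are themselves trees, by grafting
    each leaf's tree in as a subtree."""
    if isinstance(t, Leaf):
        return t.value     # the leaf label is itself a tree
    if isinstance(t, Bot):
        return t
    return Op(t.name, tuple(mu(c) for c in t.children))

def leq(t: Tree, r: Tree) -> bool:
    """t <= r: t is obtained from r by pruning subtrees down to ⊥ leaves."""
    if isinstance(t, Bot):
        return True
    if isinstance(t, Leaf):
        return t == r
    return (isinstance(r, Op) and r.name == t.name
            and len(r.children) == len(t.children)
            and all(leq(a, b) for a, b in zip(t.children, r.children)))

For example, leq(Op("p-or", (Bot(), Leaf(3))), Op("p-or", (Leaf(2), Leaf(3)))) is True: the left tree is a pruning of the right one.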
Moreover, the preorder is ω-complete: each ascending chain of trees t_0 ≤ t_1 ≤ … has a least upper bound ⊔_n t_n. For any x ∈ X, we write η(x) ∈ T_Σ(X) for the tree consisting of a single leaf labelled x. We also have a map µ : T_Σ(T_Σ(X)) → T_Σ(X), which flattens a tree of trees into one tree by transforming the leaves (which are trees) into subtrees.

For each computation type C we define the evaluation map |−| : Terms(C) → T_Σ(Terms(C)), which returns a tree whose leaves are either labelled with ⊥ or labelled with a terminal term of type C. We define this inductively by constructing, for each n ∈ N, the n-th approximation of the tree. Using this, we define |M| := ⊔_n |ε, M|_n. We view |M| as an operational semantics of M, in which M is reduced to its (possibly) observable computational behaviours, namely the tree of effect operations potentially performed in the evaluation of M. See Figure 3 for two examples of effect trees. These trees are still quite syntactic, and may contain a lot of unobservable information irrelevant to the real-world behaviour of programs. In the next section, we set up the quantitative logic which will extract from such trees only the relevant information, using quantitative modalities.

Quantitative Logic

We define a quantitative logic expressing behavioural properties of terms. Each type has a set of formulas, which can be satisfied by terms of that type to varying degrees of satisfaction. These degrees of satisfaction are given by truth values from a complete lattice. A countably complete lattice is a set A with a partial order ⊑, where for each subset X ⊆ A there is a least upper bound sup(X) and a greatest lower bound inf(X). In particular, we define T := sup(A) = inf(∅) as the completely true value, and F := inf(A) = sup(∅) as the completely false value. We also equip this space with a notion of negation or involution, which is a bijective map ¬ : A → A such that ∀a ∈ A, ¬(¬a) = a and ∀a, b ∈ A, (a ⊑ b) ⇔ (¬b ⊑ ¬a). We will use the words involution and negation interchangeably. Given the conditions of an involution, it holds that ¬T = F and ¬F = T.

Examples of complete lattices with involution/negation used in this paper include:

3. The powerset P(X) over some set X, whose order is given by inclusion ⊆, so T = X and F = ∅. Negation is given by the complement: ¬A := X \ A.

4. For A a complete lattice and X a set, the function space A^X with the point-wise order is a complete lattice.

We construct a logic for our language in order to define a behavioural preorder. For each type E, value or computation, we have a set of formulas Form(E). Greek letters φ, ψ, ... are used for formulas over value types, underlined Greek letters for formulas over computation types, and underdotted Greek letters for formulas over types of either kind. We are aiming to define a quantitative relation (P |= φ), denoting the element of A which describes the degree to which the term P satisfies the formula φ (e.g. this may describe the probability of satisfaction, or the amount of time needed for satisfaction).

We choose the formulas according to the following two design criteria, as in [28]. Firstly, we design our logic to only contain behaviourally meaningful formulas. This means we only want to test properties that are, in some sense, observable by users and/or other programs. For example, for the natural numbers type N we have a formula {n} which checks whether a term is equal to the numeral n.
For function types we have formulas of the form V → φ, which test a program on a specific input V and check how much the resulting term satisfies φ. Secondly, we desire our logic to be as expressive as possible. To this end, we add countable disjunctions (suprema) and conjunctions (infima) over formulas, together with negation ¬. Moreover, we add two natural quantity-specific primitives: a threshold operation and constants. Both operations are used frequently (albeit implicitly) in practical examples of quantitative verification, e.g. in [20].

Quantitative modalities

Fundamental to the design of the logic is how we interpret algebraic effects. In CBPV, effects are observed in producer types FA. In order to formulate observable properties of FA-terms in our logic, we include a set of quantitative modalities which lift formulas on A to formulas on FA. We bundle a selection of quantitative modalities together in a set Q. Each modality q ∈ Q denotes a function ⟦q⟧ : T_Σ(A) → A, which is used to translate a tree of truth values into a single truth value. Given a quantitative predicate Θ : X → A on a set X, we can use a modality q to lift it to a quantitative predicate q(Θ) : T_Σ(X) → A, given by q(Θ)(t) := ⟦q⟧(t[x ↦ Θ(x)]), where t[x ↦ Θ(x)] relabels each leaf x of t by Θ(x).

In the examples, we will define the denotation of a modality q by giving, for each n ∈ N, an approximation ⟦q⟧_n. These follow the rules ⟦q⟧_0(t) = F, ⟦q⟧_n(⊥) = F, and ⟦q⟧_{n+1}(η(a)) = a, together with effect-specific rules given in the examples below. Given these approximations, the denotation ⟦q⟧(t) is given by sup{⟦q⟧_n(t) | n ∈ N}. (An executable sketch of the expectation modality defined next follows Example 3 below.)

Example 1 (Probabilistic choice). We use as quantitative truth space the real interval A := [0, 1] with ⊑ := ≤, whose elements denote probabilities. We take a single modality E, for expectation, where (M |= E(Θ)) gives the expected value of (V |= Θ) under the probability distribution that M induces on its return values V. This is achieved by giving E the denotation ⟦E⟧ : T_{Σ_p}(A) → A, which sends a tree of real numbers to its expected value, where the approximations of the denotation are given by: ⟦E⟧_{n+1}(p-or(t, r)) = (⟦E⟧_n(t) + ⟦E⟧_n(r))/2.

Example 2 (Global store). Given a set of locations L, we have a set of states S := L → N. Our set of truth values is given by the powerset A := P(S) with ⊑ := ⊆. We have a single modality G, where (M |= G(Θ)) gives the set of starting states for which M terminates with a value V such that the end state is contained in (V |= Θ). We define this formally with the following rules: ⟦G⟧_{n+1}(lookup_l(t_0, t_1, ...)) = {s ∈ S | s ∈ ⟦G⟧_{max(0,n−s(l))}(t_{s(l)})} and ⟦G⟧_{n+1}(update_{l,m}(t)) = {s | s[l := m] ∈ ⟦G⟧_n(t)}.

Example 3 (Probabilistic choice and global store). For this combination of effects, we take as truth space the function space A := [0, 1]^S with the point-wise order, where S is the set of global states and [0, 1] the lattice of probabilities with the standard order. Intuitively, this space assigns to each starting state the probability that a property is satisfied. We define a single modality EG which, for each state s ∈ S, is given by the following rules: ⟦EG⟧_{n+1}(p-or(t, r))(s) = (⟦EG⟧_n(t)(s) + ⟦EG⟧_n(r)(s))/2, ⟦EG⟧_{n+1}(lookup_l(t_0, t_1, ...))(s) = ⟦EG⟧_{max(0,n−s(l))}(t_{s(l)})(s), and ⟦EG⟧_{n+1}(update_{l,m}(t))(s) = ⟦EG⟧_n(t)(s[l := m]).
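As a concrete illustration of these approximation rules, the sketch below implements ⟦E⟧_n for Example 1 on the finite trees from the earlier snippet, with F = 0 in the lattice ([0,1], ≤). The depth bound and the names are ours, not the paper's.

# n-th approximations of the expectation modality E (Example 1), on finite
# trees over the single binary operation "p-or"; reuses Leaf, Bot and Op
# from the effect-tree sketch above.
def E_approx(n: int, t) -> float:
    if n == 0 or isinstance(t, Bot):
        return 0.0                          # E_0(t) = F and E_n(⊥) = F
    if isinstance(t, Leaf):
        return float(t.value)               # E_{n+1}(eta(a)) = a
    if t.name == "p-or":
        left, right = t.children
        return (E_approx(n - 1, left) + E_approx(n - 1, right)) / 2
    raise ValueError("unknown operation: " + t.name)

def E(t, depth: int = 64) -> float:
    """sup of the approximations; exact once depth exceeds the tree height."""
    return E_approx(depth, t)

# p-or(eta(1.0), p-or(eta(0.0), ⊥)) has expected truth value 1/2:
t = Op("p-or", (Leaf(1.0), Op("p-or", (Leaf(0.0), Bot()))))
assert E(t) == 0.5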
Example 4 (Cost). We use the extended real interval A := [0, ∞] with ⊑ := ≥, denoting an abstract notion of cost (e.g. time); trees are just branches in this example. We have a single modality C, where (M |= C(Θ)) is the cost it takes for M to evaluate, plus the cost given by (V |= Θ) for the resulting return value V. Note that for any tree t that is either infinite or has leaf ⊥, we have ⟦C⟧(t) = ∞. This reflects the idea that a diverging computation exceeds any possible finite cost.

Example 5 (Combinations with nondeterminism). To add nondeterminism to any of the previous examples, we keep their truth space and extend the definition of their modality q ∈ {E, G, EG, C} in two ways, creating an optimistic modality q♦ and a pessimistic modality q□ such that: ⟦q♦⟧_{n+1}(nor(t, r)) = sup(⟦q♦⟧_n(t), ⟦q♦⟧_n(r)) and ⟦q□⟧_{n+1}(nor(t, r)) = inf(⟦q□⟧_n(t), ⟦q□⟧_n(r)). For the combination with probability, we can see the nondeterministic choice as being controlled by some external agent which chooses a strategy for resolving the nondeterministic choice nodes, as in a Markov decision process. E♦ finds the optimal strategy to get the best expectation, whereas E□ finds the worst strategy. Similarly, C♦ will search for the minimum possible execution cost, while C□ will look for the maximum cost. For instance, if the denotation |M| of a term M of type FN is given by the first tree in Fig. 3, then …

Example 6 (Combinations with error). Lastly, there are two ways of defining combinations with error messages, akin to the sum and tensor approach of combining effects from e.g. [12]. Let Σ, A and Q be the signature, truth space, and quantitative modalities of the effects to which we want to add error messages from a set E. Given a modality q ∈ Q and some function f : E → A, assigning to each message a truth value, we define a new modality q_f which, besides inheriting the rules of q, follows the rule ⟦q_f⟧_{n+1}(raise_e()) = f(e). We define two new sets of modalities, Q^+ and Q^×, for this combination, each giving a different interpretation of error. E.g., in the presence of global store (Example 2), the modalities from Q^+ are not able to observe the final global state when an error message has been raised, whereas some modalities from Q^× can. For instance, for e ∈ E and f : E → A such that f(e) := {s[l := 1] | s ∈ S}, the modality G_f is in Q^× but not in Q^+. Moreover, (update_{l,1}(raise_e()) |= G_f(⊤)) = T whereas (update_{l,0}(raise_e()) |= G_f(⊤)) = F. Those two terms are, however, not distinguishable by any modality from Q^+.

All the Boolean-valued examples of modalities for effects in [28] can also be accommodated in our quantitative setting by taking A := {T, F}. These include, for instance, input/output.

Formulation of the logic

We write Form(E) for the set of formulas over type E, which is defined by induction on the structure of E. Fig. 4 gives the inductive rules for generating these formulas; among them are modality formulas q(φ), constant formulas κ_a, and step formulas φ_a. Note that conjunctions and disjunctions (i.e., meets and joins) are taken over countable sets of formulas only.

Figure 4: Formula constructors

The modality formula q(φ) is particularly important, as it expresses how the quantitative modalities are used to observe effects. The last couple of satisfaction rules are for formula constructors occurring at each type. All formulas together form the general logic V. We distinguish a specific fragment of V, the positive logic V^+, which excludes all formulas using ¬(−). The logic V^+ can be interpreted without giving an involution on A.

We end this section by looking at some interesting properties we can construct using the logic, illustrating its expressiveness. In the case of global store (Example 2), we can construct formulas in the style of Hoare logic.
For instance, taking two subsets P, Q ∈ A = P(S) of global states, the statement M |= (G(κ_Q))_P gives T precisely if, whenever the execution of M is started in a state from P, the execution terminates in a state from Q. As another example, in the case of global store with probability (Example 3), where A := [0, 1]^S, we can construct, given a formula φ and a distribution over states µ ∈ [0, 1]^S, a formula Σ_µ(φ) such that (M |= Σ_µ(φ))(s) = min(1, Σ_{s′∈S} µ(s′)·(M |= φ)(s′)). Then (M |= Σ_µ(EG(κ_T))) expresses the probability of termination of M, given that the starting state is sampled from µ. In the same vein, we can look at the combination of probability and nondeterminism (Examples 1 and 5), where (M |= ⋁_{a,b∈[0,1]} (E♦(κ_T)_a ∧ E□(κ_T)_b ∧ κ_{(a+b)/2})) expresses the probability that M terminates, given that the agent/scheduler in control of the nondeterministic choices is sampled from a distribution which is 50% helpful and 50% antagonistic.

Behavioural equivalence

We can define a behavioural preorder for any sub-collection of formulas L.

Definition 4.1. For any fragment of the logic L ⊆ V, the logical preorder ⊑_L is given by: P ⊑_L R if and only if (P |= φ) ⊑ (R |= φ) for every formula φ ∈ L of the appropriate type.

The general behavioural preorder ⊑ is the logical preorder ⊑_V, and the positive behavioural preorder ⊑^+ is the logical preorder ⊑_{V^+}. We write ≡ and ≡^+ for the logical equivalences ≡_V and ≡_{V^+} respectively (the behavioural equivalences). These closed relations can be extended to relations on open terms using the open extension, where two open terms are related if they are related under any substitution of their variables.

A basic formula is a non-constant formula (not necessarily atomic) whose top-level constructor is not a conjunction ⋀, disjunction ⋁, negation ¬, constant formula κ_a, or step construction (−)_a. It is not difficult to see that both ⊑ and ⊑^+ are completely determined by basic formulas. Note that since V^+ ⊆ V, it holds that (⊑) ⊆ (⊑^+) and (≡) ⊆ (≡^+).

Proof. Note that at each type level, the preorder is completely determined by basic formulas: the satisfaction of all other formulas depends solely on the satisfaction of basic formulas, by a simple induction. As such, the above characterisations are a simple consequence of unfolding the satisfaction relation of basic formulas.

Congruence properties

A relation on terms is compatible if it is preserved by the typing rules from Fig. 1. We introduce the three properties that we require in order to establish that (the open extensions of) the behavioural preorders are compatible, hence precongruences. The space T_Σ(A), which forms the basis of the technical definition of the modalities, plays a fundamental role in this.

The first property considers the leaf order T_Σ(⊑) on T_Σ(A), where t T_Σ(⊑) r if r can be created by replacing leaves a ∈ A of t by leaves b ∈ A of higher value a ⊑ b; the ⊥ leaves, however, cannot be replaced. We call a modality q ∈ Q leaf-monotone if t T_Σ(⊑) r implies ⟦q⟧(t) ⊑ ⟦q⟧(r). This property is useful for establishing a variety of different results, but mainly expresses that modalities preserve the implicit (point-wise) order on lifted predicates: if Θ ⊑ Θ′ point-wise, then q(Θ) ⊑ q(Θ′).

The second property considers the ω-complete tree order ≤ on T_Σ(A), defined just after Definition 2.1.

Definition 4.6. A modality q ∈ Q is Scott tree continuous if for any ascending chain t_0 ≤ t_1 ≤ t_2 ≤ … it holds that ⟦q⟧(⊔_{n∈N} t_n) = sup{⟦q⟧(t_n) | n ∈ N}.

This property is necessary in the congruence proof for inductively approximating the satisfaction value of the infinite trees generated by the fixpoint operator and by effect operators of infinite arity.
The third and final property is the most technical one, and concerns the preservation of the behavioural preorder under sequencing operations such as (−) to x . (−). It considers the monad multiplication map µ : T_Σ(T_Σ(A)) → T_Σ(A), and requires that the abstract generalisation of the behavioural preorder on T_Σ(T_Σ(A)) and T_Σ(A) is preserved by the µ-map. To formulate this, we first need to define these abstract relations.

We write h* : T_Σ(X) → T_Σ(A) for the map relabelling each leaf x of a tree by h(x), for a function h : X → A (a valuation on X). For such a valuation h and a modality q ∈ Q, we write (t ∈ q(h)) for ⟦q⟧(h*(t)). For any relation R ⊆ X × Y and valuation h : X → A, we define (R↑(h)) : Y → A to be the function such that R↑(h)(b) := sup_{a∈X, aRb} h(a).

We classify abstract quantitative behavioural properties on T_Σ(A) (written TA for short). A function H : TA → A is called quantitatively behaviourally saturated if for any two trees t, t′ ∈ TA such that t ⊑ t′, it holds that H(t) ⊑ H(t′). We write QBS for the set of quantitatively behaviourally saturated functions. Note that H ∈ QBS if and only if there is a function F : TA → A such that H = ↑(F). Moreover, for any q ∈ Q, it is easy to see that ⟦q⟧ ∈ QBS. We define a relation on quantitative double trees TTA := T_Σ(T_Σ(A)).

Definition 4.8. We define the preorder ⊑ on TTA by: for any two quantitative double trees r, r′ ∈ TTA, r ⊑ r′ ⟺ ∀q ∈ Q, ∀H ∈ QBS, (r ∈ q(H)) ⊑ (r′ ∈ q(H)).

Lemma 4.9. …

Proof. For '⇒', note that for any t ∈ TA, F(t) ⊑ ↑(F)(t), so the result follows from leaf-monotonicity and the fact that ↑(F) ∈ QBS. For '⇐', use that ↑(H) = H for H ∈ QBS.

We can now define the third property, decomposability, together with its stronger counterpart, sequentiality (the latter is one of the two equations making ⟦q⟧ an Eilenberg-Moore algebra for the monad T_Σ(−)).

Definition 4.10. Q is decomposable if for all t, r ∈ T_Σ(T_Σ(A)): if t ⊑ r then µt ⊑ µr. A modality q ∈ Q is sequential if for all t ∈ T_Σ(T_Σ(A)), ⟦q⟧(µt) = ⟦q⟧(⟦q⟧*(t)).

Lemma 4.11. If all modalities q ∈ Q are leaf-monotone and sequential, then Q is decomposable.

The three properties defined above allow us to establish compatibility:

Theorem 4.12. If Q is a decomposable set of leaf-monotone and Scott tree continuous modalities, then ⊑ and ⊑^+ are compatible, hence precongruences.

All our examples satisfy these three properties. Both leaf-monotonicity and Scott tree continuity are consequences of the inductive, and hence continuous, definitions of the modalities, while decomposability holds because every modality from the examples is sequential. We illustrate this in the following lemma; an executable illustration of the sequentiality equation appears at the end of this section.

Proof. Take r ∈ T_Σ(T_Σ(A)) as above and assume ⟦E⟧(µr) > a for some a ∈ [0, 1]. Since ⟦E⟧(µr) = sup_n ⟦E⟧_n(µr), there must be an n ∈ N such that ⟦E⟧_n(µr) > a. By the recursive definition of ⟦E⟧ we can see that ⟦E⟧_n(r[t ↦ ⟦E⟧_n(t)]) ≥ ⟦E⟧_n(µr), and hence ⟦E⟧(⟦E⟧*(r)) > a. Now assume ⟦E⟧(⟦E⟧*(r)) > a; then there must be an m such that ⟦E⟧_m(⟦E⟧*(r)) > a. Now, ⟦E⟧_m only looks at finitely many leaves, and hence there must be a k such that ⟦E⟧_m(⟦E⟧*_k(r)) > a, where ⟦E⟧*_k relabels leaves using the approximation ⟦E⟧_k. Again studying the recursive definition of ⟦E⟧, we observe that ⟦E⟧_{m+k}(µr) ≥ ⟦E⟧_m(⟦E⟧*_k(r)), so we conclude that ⟦E⟧(µr) > a. Since this holds for all such a ∈ [0, 1], we have ⟦E⟧(µr) = ⟦E⟧(⟦E⟧*(r)).

We end this section with an example of an equivalence and an in-equivalence. It has to be said that the purpose of this paper is to give a widely applicable approach to defining equivalence, not to prove equivalence of terms. Moreover, for practical purposes, establishing an in-equivalence is easier than establishing an equivalence, since one only has to find a single formula which distinguishes the two terms.
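The sequentiality equation ⟦E⟧(µt) = ⟦E⟧(⟦E⟧*(t)) from the lemma above can be checked mechanically on small finite double trees, reusing the earlier sketches; this is again an illustration under our stand-in encoding, not a proof.

# Checking sequentiality of E on a finite double tree in T_Sigma(T_Sigma(A)):
# flattening with mu and then applying E agrees with first collapsing each
# leaf tree with E (the map E*) and then applying E to the result.
def map_leaves(f, t):
    """h*: relabel every leaf by f(label), leaving ⊥ and operation nodes."""
    if isinstance(t, Leaf):
        return Leaf(f(t.value))
    if isinstance(t, Bot):
        return t
    return Op(t.name, tuple(map_leaves(f, c) for c in t.children))

inner1 = Op("p-or", (Leaf(1.0), Leaf(0.0)))        # expectation 0.5
inner2 = Leaf(0.25)                                # expectation 0.25
double = Op("p-or", (Leaf(inner1), Leaf(inner2)))  # a tree of trees

lhs = E(mu(double))                 # E(mu t)
rhs = E(map_leaves(E, double))      # E(E*(t))
assert abs(lhs - rhs) < 1e-12       # both equal (0.5 + 0.25)/2 = 0.375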
Applicative Bisimilarity

We investigate how our quantitative modalities can be used to define a notion of Abramsky's applicative bisimilarity [1] related to the behavioural equivalence (Theorem 5.7), starting off by defining a relator [18,29]. We write xRy for (x, y) ∈ R. Recall from the previous section that (t ∈ q(h)) = ⟦q⟧(t[x ↦ h(x)]) and (R↑(h))(b) := sup{h(a) | a ∈ X, aRb}. Using these, the set Q of modalities induces a relator Q(−), lifting a relation R ⊆ X × Y to a relation Q(R) between trees over X and trees over Y: … Note that Q(⊑) = ⊑ and Q(≡) = ≡ (see Lemma 4.9). The following characterisation of the relator is immediate: …

The following lemma shows that Q(−) satisfies the usual properties of monotone relators from [18,29]. The proof is technical yet straightforward, and is omitted to save space.

Lemma 5.3. If all quantitative modalities from Q are leaf-monotone, then Q(−) has the following properties:
1. If R is reflexive, then so is Q(R).
2. For R ⊆ X × Y and S ⊆ Y × Z, Q(R)Q(S) ⊆ Q(RS), where RS is relational composition.
…

Fundamental to the definition of the relator is the notion of the right-predicate R↑(h). When the relation in question is our behavioural preorder, these right-predicates can be expressed in the logic.

Lemma 5.4. For each quantitative predicate D : Terms(A) → A on closed values, there is a formula φ_D such that (V |= φ_D) = (⊑^+↑(D))(V) for all closed V : A.

Proof. We use Lemma 4.3 to define φ_D := …

In the case that R is a relation on terms of some value type A, we write Q(R) for the relation on terms of type FA given by Q({(return(V), return(W)) | V R_A W}). A relation R on terms is well-typed if it only relates terms of the same type and context, and R is closed if it only relates closed terms.

Definition 5.5. A well-typed closed relation R is an applicative Q-simulation if: …

The applicative Q-similarity is the largest applicative Q-simulation, and the applicative Q-bisimilarity is the largest symmetric applicative Q-simulation.

Theorem 5.6. If all quantitative modalities from Q are leaf-monotone, then the positive behavioural preorder ⊑^+ is the applicative Q-similarity.

Proof. Note that ⊑^+ satisfies the first six properties of being a Q-simulation as a consequence of Lemma 4.4. We prove the seventh property. Assume M ⊑^+ N, q ∈ Q and D : Terms(A) → A. We use Lemma 5.4 to find a formula φ_D such that φ_D(V) = (⊑^+↑(D))(V). By reflexivity of ⊑^+, we have D(V) ⊑ (⊑^+↑(D))(V), so by leaf-monotonicity and M ⊑^+ N it holds that:

⟦q⟧(D*(|M|)) ⊑ (M |= q(φ_D)) ⊑ (N |= q(φ_D)) = ⟦q⟧((⊑^+↑(D))*(|N|)).

We can conclude that |M| Q(⊑^+_A) |N|. So we have proved that ⊑^+ is a Q-simulation.

We now need to prove that ⊑^+ contains any other Q-simulation R. To do so, we show that R preserves any formula φ, in the following sense: if P R P′, then (P |= φ) ⊑ (P′ |= φ). We do this by induction on formulas, using the fact that any formula is well-founded. Assume P R P′, and suppose R preserves every formula from a set X ⊆ Form(E). Then … It is not difficult to prove that R preserves most basic formulas. The only difficult formula to consider is q(φ) ∈ Form(FA). Assume M R N; by the simulation property, |M| Q(R) |N|. By the induction hypothesis and relator property 2 of Lemma 5.3, it holds that |M| Q(⊑_X) |N|, and hence (M |= q(φ)) ⊑ (N |= q(φ)). We conclude that M ⊑^+ N. We can conclude that ⊑^+ is the largest Q-simulation, hence it is equal to Q-similarity.

Note the crucial use of Lemma 5.4 in this proof, which explains the need for the step-formulas in the logic.

Theorem 5.7. If all quantitative modalities from Q are leaf-monotone, then the general behavioural preorder ⊑ is the largest symmetric applicative Q-simulation, and hence equal to applicative Q-bisimilarity.

Proof.
Firstly, it holds by Lemma 4.2 that ⊑ is symmetric. Secondly, ⊑ is a Q-simulation by the same proof as above. Lastly, any symmetric Q-simulation R is included in ⊑, by a proof similar to the one above, showing by induction on formulas φ that R preserves satisfaction.

Howe's method

In this subsection, we briefly outline how Howe's method [10,11] can be used to establish compatibility for the open extensions of applicative Q-similarity and Q-bisimilarity, as in [3,28]. Firstly, we need some properties of the relator in addition to Lemma 5.3. The proofs are technical and are omitted to save space.

Lemma 5.8. If all q ∈ Q are leaf-monotone and Scott tree continuous, then the following four properties hold: …
2. for any chains of trees t_0 ≤ t_1 ≤ t_2 ≤ … and r_0 ≤ r_1 ≤ r_2 ≤ …, if t_n Q(R) r_n for all n, then (⊔_n t_n) Q(R) (⊔_n r_n).
…

As a consequence of the above lemmas, the following holds: …

One of the contributions of this paper is the identification of the properties on quantitative modalities under which the above relator properties are satisfied, so that Howe's method can be applied. The application of Howe's method itself is, however, not novel: it is an adaptation of the proof used for the call-by-value case in [3,28] (untyped and simply-typed, respectively), using results from [15]. As such, details of the proof have been omitted. In short, Howe's method allows us to establish the following theorem.

Theorem 5.11. If Q is a decomposable set of leaf-monotone and Scott tree continuous quantitative modalities, then Q-similarity and Q-bisimilarity are compatible.

Combining Theorems 5.6, 5.7, and 5.11, we can derive Theorem 4.12: the general and positive behavioural equivalences/preorders are compatible.

Discussions

We have generalised the logic from [28] to a quantitative logic for terms of a call-by-push-value language with general recursion and several (combinations of) algebraic effects. The quantitative logic is expressive, contains only meaningful behavioural properties, and induces a compatible program equivalence on terms.

In this paper, we consider program properties (or observations) as the primary way of describing program behaviour. According to this philosophy, the generalisation to quantitative properties is natural. Alternatively, one could consider relations (or comparisons) as primary, and instead generalise to quantitative relations. The resulting theory is that of metrics, along the lines of [2,4,19]. Relating the logic from this paper, or a variation thereof, to metrics (e.g. like the ones in [7]) is a topic for future research. The quantitative logic does not, however, naturally induce a metric on terms. This is mainly because of the inclusion of the step-formulas φ_a, which take the quantitative information from φ and collapse it to a binary value. These step-formulas are necessary for relating the behavioural equivalence to applicative bisimilarity. Their necessity can be seen as a natural consequence of the non-linearity of the language. E.g., in the case of probability with A := [0, ∞], the step-formula can be constructed using products of formulas.

The quantitative logic is very expressive, allowing one to deal with some awkward combinations of effects that are not amenable to a Boolean treatment. Despite the many examples of combinations of effects, there is no general theory for quantitative modalities of combined effects. Such a theory is a potential subject for further research.
It would also be interesting to look at other examples of effects which the quantitative logic could describe, like the algebraic jump effect described in [6], or some form of concurrency. The logic and examples from [28] can be considered as further examples for this paper, where one takes A := {T, F}. The property of Scott openness is the Boolean version of the combination of Scott tree continuity and leaf-monotonicity, and the notion of decomposability is a quantitative generalisation of the notion from [28] with the same name. It should be noted, however, that most modalities from [28] are not sequential.

Along the lines of [28], it is possible to define a pure variation of the logic: a logic independent of the term syntax, using function formulas of the form … The logical equivalence of this pure logic is equal to the behavioural equivalence whenever the behavioural equivalence is compatible.

The denotations ⟦q⟧ : TA → A of the quantitative modalities are, in the case of the running examples, Eilenberg-Moore algebras. These are algebras a : TX → X such that a ∘ η_X = id_X and a ∘ T(a) = a ∘ µ_X; the second equation coincides with the property of sequentiality from this paper. As such, our example modalities potentially fit into the framework of Hasuo [8]. Connections between the two approaches may be explored in the future.

Since the theory has been formulated for call-by-push-value, it is not difficult to extract logics for specific reduction strategies, including call-by-name, call-by-value, and lazy PCF [16,17]. The language can also be extended with universally quantified polymorphic and recursive types. These extensions of the language are worked out in the author's forthcoming thesis. Further extensions could also be considered in the future.

Appendix

4. This follows from the fact that if x R y then h(x) ⊑ (R↑(h))(y), so we can use leaf-monotonicity. Now for the second property: …

Proof of Corollary 5.10. 1. Using point (iii) of Lemma 5.8 on the assumptions, we get f*(t) Q(Q(S)) g*(r). We can then apply Lemma 5.9 to obtain the desired result. …

A.1 The Howe closure

… Given this definition, a well-typed relation R is compatible if and only if R̂ ⊆ R, where R̂ is the compatible refinement of R. The Howe closure R• of R is also the least solution S to the equation S = Ŝ R, and the least solution to the inclusion Ŝ R ⊆ S. We record some preliminary results, mostly from Lassen [15]: if R is reflexive, then …

Proof. We prove the properties separately. … 2. Note that the compatible refinement of a reflexive relation is reflexive. …

Proof. We prove the properties individually. 1. We use that R is transitive, hence … 2. This follows from applying property 2 of Lemma 5.3 to the previous statement.

A.2 The Howe closure of an applicative Q-simulation

We look at the Howe closure of a Q-simulation preorder R. We assume that Q is a decomposable set of leaf-monotone and Scott tree continuous modalities. The lemmas proven in the previous two subsections apply, hence we know that R ⊆ R• by Lemma A.2. We prove that R• is a Q-simulation by explicitly checking the seven conditions of Definition 5.5.

Proof. Using the inductive definition of R•, there must be an L : N such that V R• L and L R W; the latter implies L = W by the simulation property. The fact that V R• L must have been concluded from either C3 or C4. In the first case, V = Z = L, and hence V = L = W.
In the second case, V = S(V′) and L = S(L′) with V′ R• L′, and the proof reduces to showing V′ = L′, since then V = S(V′) = S(L′) = L = W. We do induction on the structure of V, which cannot go on forever since V is a syntactically finite term. So eventually we reach Z, and we can draw a conclusion of the form V = S^n(V″) = S^n(Z) = S^n(L″) = L = W for some n ∈ N. That concludes the proof.

The following lemma is evident from the compatibility properties.

Lemma A.5. By compatibility of R• it holds that: …

We can easily prove two more simulation properties.

Proof. There is a pair (l, L) such that (j, V) R•_{Σ_{i∈I} A_i} (l, L) R (k, W). The latter implies l = k and L R W by the simulation property. The former statement can only have come from compatible-extension rule C13, so j = l and V R• L. We can now use Lemma A.3 to conclude that V R• W.

Proof. There is a pair (L, L′) such that (V, V′) R•_{A×B} (L, L′) R (W, W′). The latter implies L R W and L′ R W′ by the simulation property. The former statement can only have come from compatible-extension rule C15, so V R• L and V′ R• L′. We can now use Lemma A.3 to conclude that V R• W and V′ R• W′.

So all conditions of being a Q-simulation are satisfied except condition 6, which is the most difficult to prove and requires an induction on the reduction relation of terms. It requires us to look at terms P, R of type FA such that P R• R, and to prove that |P| Q(R•) |R|. Using Lemma 5.8, this can be reduced to asking that |P|_n Q(R•) |R| for all n, which allows us to do an induction on the denotation map |P|_n.

In general, one would look at the shape of P and see what it reduces to after one step, so that one can use the induction hypothesis. This is a relatively straightforward investigation in the fine-grained call-by-value case. For call-by-push-value, we have the problem that effects may occur at any computation type, which is particularly problematic when considering non-producer types. Concretely, for an application P = P′ · V, the term P′ is of a computation type and need not be of the form λx . M. This is problematic, as we then still have no clue what P might reduce to; investigating it would require another case analysis on P′, which results in a bureaucratic nightmare. We say that the application case is uninformative, and we continue the case analysis until we find a term that is not of the form of an application, which we call informative. Doing structural induction on P, we observe the following result.

Lemma A.10. Any computation term P is of the form S{P′}, where S is a frame and P′ is an informative term.

Definition A.11. Two frames S and Z match when the following statements hold: if …

We have the following property: … Now for the induction step, assume the statement holds for all smaller frames S′. Matching frames are very useful, since they allow us to make use of compatibility: …

The last important property of frames is that they interact well with the reduction relation: …

We now have the necessary tools to prove the following lemma.

Proof. We do an induction on n. … Induction step (n + 1): we assume as induction hypothesis that for any P … We do a case distinction on the informative term P′ : C, which, being informative, is not of the form M · V or M · i. We start with the three unfold cases, where the frame S is actively used.
1. If P′ = return(V) : C, then C can only be an F-type, so the frame S must be ε, as no other frames accept a term of this type. … Hence P′ R• R′ could only have been derived via the lambda compatibility rule C17, so R′ … We can do the following derivation using Lemma A.14 and the induction hypothesis: …

That finishes the case distinction, so we know that for any shape of P′ it holds that |S{P′}|_{n+1} Q(R•) |Z{R′}|. As was discussed before, this is sufficient to establish that |P|_{n+1} Q(R•) |R|, which finishes the induction step, and hence the proof by induction. We can conclude that M R• N ⇒ |M| Q(R•) |N| for closed terms of type FA. As such, we can conclude: …

In particular, the Howe closure of Q-similarity is a Q-simulation, and hence coincides with Q-similarity itself. Since the Howe closure of a preorder is itself compatible, we can conclude that Q-similarity is compatible. We can now derive Theorem 5.11, as stated in Section 5.1, with the same method as in [28]. The bisimilarity part of this result is established using what is known as the transitive closure trick (see e.g. [28]).
Theoretical and Experimental Adsorption Studies of Polyelectrolytes on an Oppositely Charged Surface

Using self-assembly techniques, x-ray reflectivity measurements, and computer simulations, we study the effective interaction between charged polymer rods and surfaces. Long-time Brownian dynamics simulations are used to measure the effective adhesion force acting on the rods in a model consisting of a planar array of uniformly positively charged, stiff rods and a negatively charged planar substrate, in the presence of explicit monovalent counterions and added monovalent salt ions in a continuous, isotropic dielectric medium. This electrostatic model predicts an attractive polymer-surface adhesion force that is weakly dependent on the bulk salt concentration and that shows fair agreement with a Debye-Hückel approximation for the macroion interaction at salt concentrations near 0.1 M. Complementary x-ray reflectivity experiments on poly(diallyldimethyl ammonium) chloride (PDDA) monolayer films on the native oxide of silicon show that monolayer structure, electron density, and surface roughness are likewise independent of the bulk ionic strength of the solution.

INTRODUCTION

Coulombic interactions are ubiquitous in biological systems, as Nature uses them in aqueous environments to regulate the structure of biological macroions such as proteins so that desired catalytic properties can be maintained. 1,2 Whereas electrostatic interactions are very important to biological systems, water-soluble synthetic polymers (i.e., polyelectrolytes) also use electrostatic interactions to gain solubility in hydrophilic environments and, like proteins, are expected to optimize their structures using secondary interactions such as dipole-dipole and hydrogen-bonding interactions. Coulombic interactions apply more generally to many industrial applications; for example, they are key to controlling the stability and flocculation properties of colloids. 3-5

A clear example of the significance of electrostatic interactions is the biologically inspired concept of molecular self-assembly on surfaces. In essence, molecular self-assembly is a phenomenon in which hierarchical organization or ordering is spontaneously established in a complex system without external intervention. Electrostatic interactions have been successfully employed in the fabrication of layered molecular assemblies, 6 including functional multilayered devices such as light-emitting thin films and diodes. 7,8 Here we discuss electrostatic interactions as a driving force in the spontaneous self-assembly of rigid polyelectrolytes on surfaces.

Numerous analytical, simulational, and experimental polyelectrolyte adsorption studies (for reviews, see Refs. 9-12) have examined how the amount of adsorbed polymer and the thickness of the adsorbed layer depend on properties such as the solution ionic strength, solution pH, molecular weight or length of the polymer, bulk polymer concentration, linear charge density of the polymer, and surface potential or surface charge density. Nearly all of these studies focus on flexible, "weak" polyelectrolytes with a variable degree of dissociation along the chain. The difficulty in treating polyelectrolyte adsorption theoretically lies in the complex interplay between chain conformational entropy and long-ranged electrostatics. 10,13-16 The entropy introduced into the system by the flexible backbones competes with the inherent bare attraction between the oppositely charged chains and surface. 17
Here, we study just one aspect of the polyelectrolyte adsorption problem: the effect of ionic strength on the effective forces between charged polymers and an oppositely charged surface. Added salt has been noted for its dual effect on adsorption; whether increasing the salt concentration leads to an increase or a decrease in the adsorbed amount depends on the balance between the screening of intrachain repulsion and of chain-surface attraction. 18 For hydrophobically modified polyelectrolytes, added salt can act as a switch for adsorption. 19 To eliminate the issue of the internal degrees of freedom of the chains, we treat the chains as rigid rods, so that the roles of electrostatics and ion entropy can be studied.

The most often studied self-assembling experimental systems are self-assembled monolayers, because they can be conveniently manipulated and studied on substrate surfaces. In particular, oxide surfaces, with their low negative charge density, can be used to anchor polycations (rather than monovalent cations) because the number of charge-charge attractions is greater. Since the Coulombic energy needed to separate a monovalent ion pair in water initially at a distance of 5 Å is 3.5 kJ/mol, or about 1.4 times the thermal energy, a single charge-charge interaction is usually not strong enough to produce well-organized monolayer structures. Multiple charge attractions, however, are able to generate good surface adhesion between films and substrates. (A short numerical check of this estimate is given at the end of this introduction.)

Poly(diallyldimethyl ammonium) chloride (PDDA), whose idealized structure is shown in Fig. 1, is an ideal example for this study for several reasons. PDDA is a "strong" polyelectrolyte; its backbone charge density (and hence morphology) is not influenced by the pH of the surrounding solution. As it also lacks lone electron pairs and empty orbitals, it neither participates in hydrogen bonding nor functions as a ligand to metal ions. Thus, the dominant interactions involving PDDA are expected to be electrostatic in nature.

In this paper we calculate the effective interaction between an array of model rigid rods parallel to an oppositely charged interface, in the presence and absence of added salt, using Brownian dynamics simulations with explicit ions in a continuous aqueous medium. The resulting effective interaction is an attractive adhesion force that depends relatively weakly on the bulk concentration of monovalent salt. We also present complementary x-ray reflectivity measurements on single-layer PDDA films on the native oxide of a silicon substrate that show a weak dependence on ionic strength (due to monovalent salt) in the monolayer structural properties of electron density, surface roughness, and thickness. The model suggests that PDDA monolayers self-assemble via a largely salt-independent adhesion-attraction force when the intermolecular interactions are governed by electrostatics.
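The quoted 3.5 kJ/mol (about 1.4 k_BT) figure for an ion pair at 5 Å in water can be reproduced with a short back-of-the-envelope calculation; the Python snippet below is our check, not part of the simulation code.

# Coulomb energy of a monovalent ion pair at r = 5 Angstrom in water
# (relative permittivity eps_r = 80), in kJ/mol and in units of kB*T.
import math

e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
kB   = 1.380649e-23        # Boltzmann constant, J/K
NA   = 6.02214076e23       # Avogadro constant, 1/mol
T, eps_r, r = 298.0, 80.0, 5.0e-10

U = e**2 / (4 * math.pi * eps0 * eps_r * r)   # J per ion pair
print(U * NA / 1000.0)   # ~3.5 kJ/mol
print(U / (kB * T))      # ~1.4 kB*T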
A. Model

The model consists of a combination of mobile ions and fixed macroions together in a unit cell. Figure 2 shows two adjacent unit cells, each of which is a rectangular region of dimensions L_x × L_y × L_z containing a single line charge (i.e., PDDA) of uniform charge density λ located a distance d from a fixed, flat surface (i.e., silicon) of a specified uniform charge density σ located at z = 0. To approximate an infinite thermodynamic system, the unit cell is replicated in the x and y directions using periodic boundary conditions, producing an infinite one-dimensional array of infinitely long, parallel line charges with spacing L_x and repeat distance L_y in the y direction, parallel to a charged, infinite plane. The z direction is not periodic. The dielectric constant within the unit cell is ε_1, and that of the medium below (z ≤ 0) is ε_2. For simplicity, we study the case ε_1 = ε_2, which produces no dielectric interface. Counterions and co-ions, consisting of monovalent cations and anions, are added such that the system is overall charge neutral. To study systems having a "bulk" concentration of salt, a uniformly charge-neutral surface is placed at z = L_z to confine the particles during the simulation.

As exact correspondences between the physical parameters describing real PDDA polymers and silicon surfaces with a native oxide layer are difficult to make, we adopt the following approximations. Real PDDA polymers (Fig. 1) have one positive charge per 5.4 Å and an anisotropic diameter, ranging from 4 to 12 Å, due to their molecular structure. The model rod is chosen to have a uniform axial linear charge density λ = e/10 Å and a radius r_0 of an intermediate value of 4 Å. This rod size enters through a short-ranged repulsion between the rod axis and the mobile ions, discussed in the next section. The rod spacing L_x is taken to be 40 Å. The native oxide on silicon, with its distribution of negatively charged hydroxyl groups, is represented by a uniform surface charge density σ = −e/60 Å². The dielectric constants of the aqueous medium and substrate interior were both set equal to 80, and all simulations were carried out at room temperature. Each simulation further required specifying the rod-center-to-surface separation distance d, the repeat rod segment length L_y, the unit cell height L_z, and the numbers and charges of the mobile particles. The values of L_y and L_z fell in the ranges 60-150 Å and 60-120 Å, respectively, and approximately 100 ions were introduced at random into the unit cell until the system became charge neutral.

The Brownian dynamics algorithm 20 used to simulate the motion of the ions relates the positions r_i of the ions i at time t + Δt to those at the previous time t according to the relation

r_i(t + Δt) = r_i(t) + (D/k_B T) f_i(t) Δt + r*_i(Δt),

where f_i(t) is the deterministic force acting on ion i due to long-ranged electrostatic and short-ranged nonelectrostatic interactions, r*_i(Δt) represents the random displacement of ion i due to the random thermal motions of a discrete solvent, D is the isotropic diffusion constant, k_B is Boltzmann's constant, and T is the temperature. This description treats the solvent as a continuum. The diffusion constant D is related to the particle mass m and the coefficient of friction ξ, due to a particle moving against the solvent, by the relation D = k_B T/(mξ). In all simulations mξ ≡ 1, and the time step Δt was 0.005 in normalized units. At every time step the force f_i(t) is calculated from the gradient of the potential energy surface due to the ions, rods, and the surface, and the random displacement is chosen independently for each particle from a Gaussian distribution with a variance of 2k_B T Δt in each spatial component. 21 The positions of the particles were then updated according to the periodic boundary conditions.
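A minimal sketch of this update rule, in the paper's reduced units (mξ ≡ 1, so D = k_BT), is given below. The force array stands in for the gradient of the full electrostatic plus short-range potential, which is not reproduced here, and the box dimensions are placeholders.

# One Brownian dynamics step: r(t+dt) = r(t) + (D/kBT) f(t) dt + r*(dt),
# with the random displacement drawn per component from a Gaussian of
# variance 2*kBT*dt, and periodic wrapping in x and y (z is not periodic).
import numpy as np

def bd_step(pos, forces, kBT=1.0, dt=0.005, box_xy=(40.0, 100.0),
            rng=np.random.default_rng(0)):
    D = kBT                                  # since m*xi = 1, D = kBT/(m*xi)
    noise = rng.normal(0.0, np.sqrt(2.0 * kBT * dt), size=pos.shape)
    new = pos + (D / kBT) * forces * dt + noise
    new[:, 0] %= box_xy[0]                   # periodic in x (rod spacing L_x)
    new[:, 1] %= box_xy[1]                   # periodic in y (repeat length L_y)
    return new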
During the simulations, the average vertical z-force on the rod (i.e., the component of the force acting on the rod in the direction perpendicular to the charged surface) and the distributions of the ions were monitored. The system was considered to have reached equilibrium when the time-averaged vertical force on the rod reached a steady value, the average lateral x-force on the rod was zero, the total system energy reached a steady value, and the ion distributions remained stationary. These requirements necessitated accumulating the time averages for up to 11 × 10⁶ time steps, after the initial 5-25 × 10⁴ steps were discarded.

B. Interaction potentials

The interaction potentials used in the simulations consist of pairwise, long-ranged electrostatic forces and short-ranged, nonelectrostatic repulsions, where the electrostatic interactions are exact for these periodic systems. The ion-surface electrostatic interaction V_is(z) per unit cell, as a function of the distance z between an ion with charge q and the surface, is

V_is(z) = −qσz/(2ε₀ε₁),

where ε₀ is the vacuum permittivity. The rod-surface electrostatic contribution V_rs(z) per unit cell, for a uniform surface charge distribution, is similarly

V_rs(z) = −λL_yσz/(2ε₀ε₁).

The ion-ion electrostatic potential energy V_ii(Δr) per unit cell, for an ion with charge q_1 at the point (x + Δx, y + Δy, z + Δz) in the unit cell and an ion with charge q_2 in the unit cell and its replicas located at (x + mL_x, y + nL_y, z) for integers m, n (resulting from the two-dimensional replication of the unit cell in the x and y directions), is 22 …, where K_0 is the modified Bessel function of the second kind of order zero. The ion-rod potential energy V_ir(Δr) per unit cell is the combined logarithmic interaction between a point particle and a one-dimensional array of line charges. The analytic form of V_ir(Δr) is derived from the potential energy of an ion interacting logarithmically with a two-dimensional array of line charges arranged on a rectangular lattice (see Eq. (14) in Ref. 23) by eliminating one of the dimensions. The result is …

Finally, the ion-ion, ion-rod, ion-surface, and rod-surface short-ranged, nonelectrostatic repulsions were modeled as A_ii/r¹², A_ir/r¹¹, A_is/r¹⁰, and A_rs L_y/r¹⁰, respectively, to prevent electrostatic collapse of the charge-neutral system. The combination of Coulombic attraction and short-ranged repulsion between two oppositely charged ions, between an ion and the rod, between an ion and the surface, or between the rod and the surface introduces optimal ion-ion, ion-rod, ion-surface, and rod-surface distances, respectively. Chemically speaking, these optimal distances are a measure of the "polar-bond" distance between oppositely charged species. The values of the A coefficients were A_ii = 5.26 × 10³ kcal Å¹²/mol, A_ir = 3.17 × 10⁵ kcal Å¹¹/mol, A_is = 6.61 × 10² kcal Å¹⁰/mol, and A_rs = 1.82 × 10⁴ kcal Å⁹/mol, giving optimal ion-ion, ion-rod, ion-surface, and rod-surface distances of 2.4, 4.0, 2.4, and 4.0 Å, respectively (the first and third optima are checked numerically below). In general, in order to use a line-charge model to represent a PDDA polymer and its nonaxially distributed ammonium charge centers, the model rod radius r_0 differs from the real size of the polymer, so that an electrostatic "binding" energy between a counterion and a PDDA charge on the order of a few k_BT can be obtained.
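As a consistency check on the quoted coefficients (our calculation, not the authors' code), one can balance the Coulomb attraction against the power-law repulsions; with e²/(4πε₀) = 332.06 kcal·Å/mol and ε = 80, the ion-ion and ion-surface optima come out at the quoted 2.4 Å.

# Optimal "polar-bond" distances implied by the potentials in this section.
import math

KE = 332.06 / 80.0          # e^2/(4 pi eps0 eps), kcal*Angstrom/mol
A_ii, A_is = 5.26e3, 6.61e2 # kcal A^12/mol and kcal A^10/mol
sigma = 1.0 / 60.0          # |sigma| in units of e per A^2

# ion-ion: minimize -KE/r + A_ii/r^12  =>  r* = (12 A_ii / KE)^(1/11)
r_ii = (12.0 * A_ii / KE) ** (1.0 / 11.0)

# ion-surface: the constant plane attraction |q sigma|/(2 eps eps0),
# i.e. 4*pi*332.06*sigma/(2*80) kcal/(mol*A), balances 10 A_is / z^11
f_plane = 4.0 * math.pi * 332.06 * sigma / (2.0 * 80.0)
z_is = (10.0 * A_is / f_plane) ** (1.0 / 11.0)

print(round(r_ii, 1), round(z_is, 1))   # 2.4 and 2.4, matching the text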
The ion-rod short-ranged interaction was applied to all mobile charges and was taken according to the minimum image convention. 20

EXPERIMENTAL

The preparation of self-assembled PDDA monolayers has been described previously. 25 Here we summarize the main points and describe the differences from previous experiments. The growth of PDDA monolayers was carried out on thin silicon wafers instead of thick silicon substrates. The PDDA solution concentration used for these experiments was 0.1 M instead of 1 mM, and the ionic strength of these solutions was tuned with monovalent salt (NaCl) to obtain ionic strengths of I = 0.001, 0.01, and 0.1 M. The reaction time for deposition of PDDA onto the substrate was extended from 5 min to 20 min at room temperature. The x-ray reflectivity measurements were carried out as described previously, without modification, and the quality of the data for the thin silicon wafers (500 µm) is the same as that for the thick silicon substrates (0.1 cm).

RESULTS AND DISCUSSION

To see the qualitative similarity between theory and experiment regarding the structural properties of monolayers of rigid rods near oppositely charged surfaces, we first examine the results of the particle simulations. Figure 3 shows a typical concentration profile for the mobile ion species at equilibrium. The time-averaged monovalent cation distribution near the surface and the monovalent anion distribution near the rod are noticeably peaked. At large distances from both the rod and the surface, the concentration profiles are featureless. This flat region is identified as the bulk electrolyte solution, whose salt concentration c_s is given by the height of the plateau; the concentration in Fig. 3 is approximately 0.13 M. Clearly, the local concentration of ions can differ significantly from the bulk. For the surface charge density used in the simulations, the ion concentration near the surface is two orders of magnitude greater than in the bulk (not shown). The bulk concentration as indicated by the simulation corresponds to the concentration that would be measured experimentally.

The time-averaged vertical z-force on the rod, which we call the adhesion force, and its dependence on c_s for sample values of the fixed rod-surface distance d are shown in Fig. 4. Force values greater than zero indicate an effective rod-surface repulsion; values less than zero, an attraction. Here, a bulk concentration of zero means that only a charge-neutralizing number of counterions was added to the simulation unit cell, with the neutral lid removed. The data reveal that the effective attraction decreases rather weakly with increasing salt concentration for all distances d. For small d, this behavior may simply be the result of salt exclusion from the region between the rod and the surface.

An alternate way of viewing the effect of c_s on the adhesion force is shown in Fig. 5, where the force (+, ×, and *) is now plotted against the rod-surface spacing d for several values of the bulk salt concentration. Figure 5, obtained by linear interpolation of the data sets shown in the previous figure, shows three major features: 1) the force at d = 4 Å is approximately zero; 2) there is a shoulder in the force curve near d ≈ 6 to 8 Å; and 3) the effective attractive force generally decreases in magnitude with increasing rod-surface distance, regardless of c_s.
The fact that the force crosses over from attraction to repulsion near d = 4 Å results from our having fixed the equilibrium rod-surface distance in the absence of any ions at 4 Å via the short-range repulsive coefficients discussed earlier. The shoulder is likely due to the finite size of the ions, because the ions are expected to be able to pass freely between the rod and the surface only for d > 6.4 Å.

The interpolated force-distance data were compared to Debye-Hückel theory 26 (DHT), adapted to interactions between macroions. Briefly, DHT is a mean-field theory that gives an exponentially screened Coulombic electrostatic interaction between two ions due to their ionic atmospheres. Within this approximation, the adhesion force per unit length of rod acting on each rod as a function of the rod-surface distance d is

f_DHT(d) = (λσ/ε ε₀) e^(−κd),     (5)

where κ⁻¹ is the Debye screening length, given in terms of the (bulk) ionic strength I by κ² = 2Ie²/(εε₀ k_B T). As a guideline, DHT generally provides an adequate description of electrostatic interactions between two ions when their interaction energy is small compared to k_B T. The above expression for the adhesion force, if valid, is thus expected to provide better predictions as κd increases.

The results of DHT are shown in Fig. 5 as the solid and broken lines for bulk ionic strengths of 0, 0.06, and 0.12 M, corresponding to κ values of 0, 0.08, and 0.11 Å⁻¹, respectively, as derived from the simulation data (a short numerical check follows below). At small rod-surface separation distances, agreement between DHT and the simulations unsurprisingly fails, because the simulations include a short-ranged rod-surface nonelectrostatic repulsion that is absent from DHT. Under conditions of zero ionic strength, the simulations and DHT disagree severely, as a result of the incompatibility of DHT with our method for determining c_s from the simulations. Although DHT can in principle account for screening due to counterions alone, the formulation of f_DHT implicitly assumes that screening is primarily due to added salt; in the absence of salt, the resulting electrostatic interaction is that of uniformly charged, bare macroions. At ionic strengths of 0.06 and 0.12 M, DHT is seen to capture fairly well the behavior of the interpolated force-distance curves for distances beyond the shoulder (d > 8 Å), with better agreement occurring at the larger ionic strength. Interestingly, the simulation and DHT results are comparable for d > 5 Å, although the validity criterion on the electrostatic interaction energy is only marginally satisfied over the range of rod-surface distances shown. Finally, there may still be a qualitative disagreement in the force between the simulations and DHT for d > 50 Å: whereas DHT gives an adhesion force on the rod that decays exponentially with distance, the simulations indicate a more slowly decaying force, which may be due to the increased equilibration times needed for large rod-surface separations.

Under the condition that DHT in Eq. (5) is a good approximation to the adhesion force curve, the adhesion or "dissociation" energy per unit length of rod may be calculated as the integral of the force,

W(d*) = −∫_{d*}^{∞} f_DHT(d) dd,

where d* is the equilibrium distance of the rod from the surface.
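The κ values quoted above follow from the standard expression κ² = 2Ie²N_A/(εε₀k_BT) for a 1:1 salt; the snippet below (our check, not the authors' code) reproduces κ ≈ 0.08 and 0.11 Å⁻¹ for I = 0.06 and 0.12 M.

# Debye parameter kappa for a 1:1 salt at room temperature in water.
import math

e    = 1.602176634e-19
eps0 = 8.8541878128e-12
kB   = 1.380649e-23
NA   = 6.02214076e23
T, eps = 298.0, 80.0

def kappa_per_A(I_molar):
    n = 2.0 * I_molar * 1000.0 * NA              # sum c_i z_i^2, in m^-3
    return math.sqrt(n * e**2 / (eps * eps0 * kB * T)) * 1e-10  # A^-1

for I in (0.06, 0.12):
    print(I, round(kappa_per_A(I), 2))   # 0.06 -> ~0.08, 0.12 -> ~0.11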
The inset of Fig. 5 shows the adhesion energy per rod as a strongly decaying function of the bulk salt concentration c_s, going as W(d*) ∼ c_s^(…). The theoretical model suggests that in experiments, where the polymer rods are mobile in solution, the rods would be attracted to the surface by a fairly salt-independent adhesion force and would thus move spontaneously toward the surface and possibly form a monolayer. Indeed, monolayer formation is observed in the molecular self-assembly of PDDA polymer, as evidenced by x-ray reflectivity measurements. PDDA was found to form a uniform nanometer-thick thin film on a silicon substrate at various solution ionic strengths in the range 0.001 to 0.1 M. Other film properties, such as the film electron density and surface roughness, were also found to be independent of the bulk salt concentration. The driving force for the formation of the monolayers on the surface is the electrostatic attraction between the charged rods and the surface.

To verify the weak influence of ionic strength on the adhesion characteristics of the surface and the PDDA rods, we performed x-ray reflectivity characterization of the monolayers by measuring their reflectivity profiles. The reflectivity profiles R(Q_z), normalized to unit reflectivity, are shown as a function of momentum transfer Q_z in Fig. 6. The maximum of the reflectivity profile occurs at the value of Q_z at which the x-ray radiation in the sample is evanescent (Q_z < Q_c) 27 and the sample surface subtends the full width of the x-ray beam. A difference, or contrast, between the electron densities of the PDDA monolayer and the silicon substrate produces fringes and oscillations in the x-ray reflectivity. The amplitude of the fringes is related to the magnitude of the contrast in electron densities. The oscillation with Q_z is caused by interference between the x-ray beams reflected by the film-air and film-substrate interfaces, and the period of the oscillation is inversely related to the film thickness. In addition to the decay of the sample reflectivity with Q_z due to the Fresnel reflectivity, the reflectivity profile may be further attenuated by roughness at the interfaces. This decay is related to the variation of the interface height, in the direction normal to the surface, about its mean value across the sample. The fluctuation in interface height forms a distribution whose root-mean-square width, σ_i, increases the attenuation of the reflectivity profile with Q_z.

The x-ray reflectivity data were fitted to a model 27 for single-layer films on a substrate, which yields the average electron density of the film, ρ_e, the thickness of the film, Δ, and the surface roughness of the film, σ_r. The values of these parameters were determined for the PDDA monolayers by perturbing the values from initial guesses until the weighted difference between the observed data (• in Fig. 6) and the fitted profile was minimized; the resulting calculated reflectivity profiles are shown as the solid curves in Fig. 6. For PDDA monolayers formed from solutions of ionic strength 0.001, 0.01, and 0.1 M NaCl, the electron density values were calculated to be ρ_e = 0.225, 0.225, and 0.234 e⁻/Å³, respectively, with thicknesses Δ of 12.7, 12.8, and 12.4 Å, and surface roughnesses σ_r of 1.2, 1.2, and 0.9 Å for the PDDA-air interfaces.
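For readers who want to reproduce the qualitative shape of such single-layer fits, the following is a minimal slab-model reflectivity sketch in the standard Parratt form with Névot-Croce roughness factors. It is our illustration rather than the authors' fitting code; the substrate electron density (0.699 e⁻/Å³ for Si) and the film-substrate roughness value are assumptions, while the film parameters default to the fitted values quoted above.

# Single-slab x-ray reflectivity (air / PDDA film / Si substrate) via the
# Parratt recursion; lengths in Angstrom, electron densities in e-/A^3.
import numpy as np

R_E = 2.818e-5   # classical electron radius, Angstrom

def slab_reflectivity(qz, rho_film=0.225, rho_sub=0.699,
                      thickness=12.7, sig_top=1.2, sig_bot=3.0):
    qz = np.asarray(qz, dtype=complex)
    k0 = qz / 2.0
    # vertical wavevectors in air, film, substrate (evanescent below Qc)
    kz = [np.sqrt(k0**2 - 4.0 * np.pi * R_E * rho)
          for rho in (0.0, rho_film, rho_sub)]

    def fresnel(ka, kb, sig):   # Nevot-Croce damped Fresnel coefficient
        return (ka - kb) / (ka + kb) * np.exp(-2.0 * ka * kb * sig**2)

    r01 = fresnel(kz[0], kz[1], sig_top)
    r12 = fresnel(kz[1], kz[2], sig_bot)
    phase = np.exp(2.0j * kz[1] * thickness)
    r = (r01 + r12 * phase) / (1.0 + r01 * r12 * phase)
    return np.abs(r) ** 2

q = np.linspace(0.02, 0.5, 200)
R = slab_reflectivity(q)   # Kiessig fringe period ~ 2*pi/12.7 ~ 0.5 A^-1

The critical edge comes out near Q_c ≈ 0.03 Å⁻¹ for silicon, as expected; a least-squares fit of ρ_e, Δ, and σ_r against the measured R(Q_z) would mirror the fitting procedure described above.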
Not only are these thickness values consistent with the formation of a monolayer of PDDA whose molecules are upright, as shown in Fig. 1, but the fitted profiles also reveal that the monolayer structural parameters are independent of the bulk ionic strength of the initial PDDA solution. Thus, these experimental results support the theoretical simulation model in that ionic strength, as varied from 0.001 to 0.1 M experimentally and from 0 to 0.12 M theoretically, does not greatly affect the adhesion characteristics of the polymer rods to substrate surfaces.

CONCLUSIONS

We have developed a theoretical simulation model to predict the self-assembly behavior of rod-like polymers in aqueous salt solutions based on electrostatic interactions and presented supporting experimental x-ray reflectivity data for PDDA monolayers. The model shows that the bulk ionic strength, due to monovalent salt, has only a small effect on the effective attractive force between model rods and an oppositely charged surface. Experimental x-ray reflectivity results demonstrated that PDDA monolayer structure (thickness) and morphology (roughness) do not vary significantly over the two orders of magnitude of solution ionic strength studied. Comparison of the model results to the prediction of Debye-Hückel theory (DHT) for macroions revealed fair agreement, even though the expected range of validity of DHT likely lies outside the region of parameter space studied by the simulations, and the reason for this apparent agreement is not well understood. Nonetheless, the theoretical and complementary experimental results on the adhesion of charged rigid rods onto an oppositely charged substrate are in good agreement.

Dynamic simulations involving mobile rods and ions would be interesting, as the kinetics [28-33] of the adsorption process could be studied. Whereas we have shown in this paper that a planar array of rods is attracted to the surface via electrostatic interactions, new information could be obtained as to 1) the surface distribution of the rods and 2) whether the rods adsorb independently or form bundle-like structures in solution prior to adsorption. However, such simulations are expected to depend sensitively on the model rod parameters, in particular, on the rod size and on the distribution of the charged sites. It was recently shown [34] that, for two isolated, like-charged rods in an infinite space under no-salt conditions, the rod size is a crucial control parameter for determining whether (divalent) counterions could mediate an effective attraction between the rods. This finding suggested that for a given linear charge density of the rods, there is a maximal rod size that would allow the rods to be mutually attracted. Similar conclusions about the rod size were reached in systems of like-charged rods and surfaces using the geometry discussed in this paper [24]. Nontrivial charge distributions on the rod surface are expected to complicate matters further.

Whereas we have focused here on electrostatic interactions, other interactions such as hydrogen bonding, hydrophobic effects, and explicit dipole interactions are important in materials in many fields of research. However, the idea of incorporating combined secondary interactions in the design of new synthetic macromolecular materials remains largely unexplored in a systematic way [35]. We anticipate that further modelling of electrostatic and other interactions will not only provide insight into the structure of biocompatible polymers in solution, but also lead to better design and construction of electronic and optical devices using molecular self-assembly techniques.
2014-10-01T00:00:00.000Z
1999-01-12T00:00:00.000
{ "year": 1999, "sha1": "482a9b0e6e97e2d9b30c53888920f5ab54e309ba", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9902347", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "482a9b0e6e97e2d9b30c53888920f5ab54e309ba", "s2fieldsofstudy": [ "Chemistry", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Chemistry", "Physics" ] }
192236580
pes2o/s2orc
v3-fos-license
Modern Movement Migrations: Architecture in Angola and Mozambique (1948-1975)

The migration and dissemination of architectural models, which may be recognized all along the History of Architecture, accelerated throughout the 20th century, particularly after the Second World War. If on the one hand the geopolitical map as then defined led to a new paradigm of globalization, on the other hand the doctrinal, and often dogmatic, consistency of architectural thought and production right from the origins of the Modern Movement allowed for the construction of models that facilitated their spreading and acceptance. The concept of flow or exchange associated with the mobility of architects between regions and cultures or with the dissemination of ideas and works is one of the main features of the Modern Movement. It is also under the sign of the idea of flow and exchange that we may understand the architectural production in Angola and Mozambique, both former Portuguese colonies, during the period of the second post-war until their independences in 1975. This paper's intention is, on the one hand, to understand the mechanisms of dissemination of international architectural models and their acceptance in those African countries from a historical point of view, and on the other hand, to analyse the processes of their interpretation in a more orthodox, hybrid or critical meaning. Africa meant, for the architects who built there, an ideal laboratory for experimenting with the modern language, not only by adopting such formal vocabulary, but also by testing the adaptation of buildings to the geography and tropical climate. At the same time, it is necessary to consider how such an idea of modernity and progress was developed within the framework of a colonial society and led to an ideological paradox: how was the assertion of democracy that is present in the genesis of the Modern Movement reconciled with the colonial ideology?

Introduction

The migration and dissemination of architectural models, which may be recognized all along the History of Architecture, accelerated throughout the 20th century, particularly after the Second World War. If on the one hand the geopolitical map as then defined led to a new paradigm of globalization, on the other hand the doctrinal, and often dogmatic, consistency of architectural thought and production right from the origins of the Modern Movement allowed for the construction of models that facilitated their spreading and acceptance. The concept of flow or exchange associated with the mobility of architects between regions and cultures or with the dissemination of ideas and works is one of the main features of the Modern Movement generally, and the International Style in particular. During the period of the second post-war, the magazine L'Architecture d'Aujourd'hui, [1] for instance, conveyed this spirit of globalization, seeking to disseminate an architectural culture and production that were not confined to the European and North-American broadcasting centres, widening the circle to Latin America, Asian countries and even the African continent, accounting not only for some of the most dominant examples within the framework of the "International Style," but also for the regenerating capacity of those models upon absorbing other cultures. It is also under the sign of the idea of flow and exchange that we may understand the architectural production in Angola and Mozambique, both former Portuguese colonies, during the period of the second post-war until their independences in 1975.
Within the framework of architectural production in Angola and Mozambique, regardless of the specific circumstances in these two territories, of the different periods of time corresponding to the development of the works, or of the specific interpretation of each author, there is a remarkable absorption of the models conveyed by Le Corbusier's work and doctrine, seasoned with the lexicon and plasticity of Brazilian modern architecture. But, as maintained by Dennis Sharp, the dissemination of modern architecture was not monolithic and did not result in a mere cloning operation. [2] Africa meant, for those architects who built there, an ideal laboratory for experimenting with the modern language, not only by adopting such formal vocabulary, but also by testing the adaptation of buildings to the geography and tropical climate. At the same time, it is necessary to consider how such an idea of modernity and progress was developed within the framework of a colonial society and led to an ideological paradox: how was the assertion of democracy that is present in the genesis of the Modern Movement reconciled with the colonial ideology? [3]

Ideological Contradictions: Tropical, Colonial and Modern

The concept of tropical architecture may be read in various ways. In its usual definition, it means an architecture that is fit for the tropical climate, where the building is designed by observing the climate's and the site's specific conditions. Therefore, it takes into account objective criteria, such as location, space and programme organisation in accordance with the best exposure to the sunlight or the winds, using fixed or mobile devices for shade, or soil and roof sealants in accordance with the rainfall.

1. The magazine L'Architecture d'Aujourd'hui dedicates, for instance, some of its thematic issues to the miscegenation of the Modern works in different geographical contexts, such as "Constructions en Pays Chauds" (L'Architecture d'Aujourd'hui, nº 67-68, August-September 1956, Paris), presenting works designed for the broad spectrum of tropical geography, or "Afrique Noire," dedicated to francophone African countries.
2. Dennis Sharp, "Registering the Diaspora of Modern Architecture," in DOCOMOMO - The Modern Movement in Architecture, ed. Dennis Sharp and Catherine Cooke (Rotterdam: 010 Publishers, 2000).
3. This theme was developed in the PhD thesis: Ana Magalhães, Migrações do Moderno: Arquitectura na diáspora - Angola e Moçambique, PhD diss. (Universidade Lusíada de Lisboa, 2015).

Such criteria were obviously observed in accordance with the specific conditions of tropical sub-regions with a drier or more humid climate. As a general rule, the formal language of the Modern Movement's architecture, particularly through the Corbusian work and the Brazilian (as well as the Latin-American) architecture disseminated during the second post-war, easily and enthusiastically incorporated this new vocabulary applied to the specific conditions of the response to the tropical climate. In addition to the nature of a construction based on scientific parameters, tropical architecture acquires an aesthetic meaning. What we see, quite often, is a phenomenon of "tropicalization" of the architectural language in and outside the tropics, where the built work sometimes obeys aesthetic criteria rather than strict science.
But, while this "tropicalization" effect may fall within the framework of the global mass influence and homogenization of modern architecture during this period, one may see, at the same time, a convergence on local cultures and identities through a reinterpretation of native elements and traditional techniques, the outcome of which is, in its turn, also disseminated on a large scale. Good examples of this are the Chandigarh and Ahmedabad projects, where Le Corbusier crossed his language with Indian native culture and geography and re-invented one of his most expressive words: the "brise-soleil."

At the same time, the meaning of tropical architecture also admits readings of an ideological, or even political, nature. The Indian Prime Minister Jawaharlal Nehru, for instance, who had invited a large international team led by Le Corbusier (1887-1965) to design Chandigarh, the future capital of the Punjab, more than upholding modern architecture, would rather use the term tropical architecture, aiming at the specific conditions of the place, the society and the climate, and defined it as the antithesis of colonial architecture. [4]

As a general rule, immediately after the Second World War, the agendas of the largest colonising powers placed their wagers, through their overseas urban planning offices, on public infrastructure investment policies (of which the school facilities programme is an example) and, at the same time, on a specific architecture for the tropics (of which the studies carried out by M. Fry and J. Drew [5] and the Architectural Association's pioneering course on Tropical Architecture [6] are good examples). It is interesting to see that, in spite of the substantial ideological differences between the European states, their investment programmes had a similar template, whether under a neo-colonial viewpoint, as in the case of democratic states such as France or England, or within the framework of a colonial incentive, as in the case of Portugal. [7]

5. Maxwell Fry (1899-1987) | Jane Drew (1911-1996).
6. "The Department of Tropical Architecture began in 1955 under Maxwell Fry, with James Cubbitt on the staff, and was subsequently taken over by Otto Koenigsberger; the unit went on to become a unique and extremely important department with an international reputation - lasting until Koenigsberger's resignation and its closure in 1970." https://www.aaschool.ac.uk/AASCHOOL/LIBRARY/aahistory.php.

At the same time, it is important to underline the relationship between the aesthetic project of the Modern Movement understood in the Western sense and the modernization of these overseas, African or Asian territories. If, on the one hand, these territories functioned as laboratories of the modern project, on the other hand, it is necessary to consider that the very idea of alterity, of a cultural, geographic or social nature, allowed other parameters of identity to be integrated. As Avermaete explains: "Architects attempted to engage in their projects on colonial ground with the local conditions by synthesising the way of living of the colonised [...] and the project of modernisation into a new and 'other' modernism." The experiments of modern architects in the so-called colonial "laboratories" therefore played an important role in the critical revision of modernism and the emergence of post-modernism within architectural discourse. [8]
Dissemination and Reception of International Architectural Models in Lusophone Africa

After World War II, when Portugal was still living under a dictatorship, anachronistically valuing its empire and its colonies, a number of young architects went to Africa and affirmed a modernity that was far from the State-sanctioned architectural models. Such modernity was translated into freedom in a firmer appropriation of the Modern Movement codes in an international meaning. It is permissible to establish that the first sign of flexibility and openness of Portuguese architecture to the forms and principles of international modern architecture was ensured at the 1st National Architecture Congress, in 1948. In Portugal, upon the end of World War II and the democratization of the European states, the strife against the Salazar regime became manifest, leading to the organization of the various oppositions, who believed in a swift fall of the so-called "Estado Novo." The political crisis within the regime forced it to use efficient measures that led to a tougher, more consolidated government and, at the same time, to a growing agitation among the various opposing sectors in Portuguese society, branding it politically, socially, economically and culturally. A new generation of architects, trained in the Arts Schools of Lisbon and Porto, laid claim to a new social, ethical and political consciousness. If, on the one hand, they claimed a new vision of reality, on the other hand, they tried to theorize and reinforce an idea of architecture, international and orthodox, according to the premises of the Modern Movement.

7. It is also important to stress that, unlike the majority of these former colonies, which were decolonised from the beginning of the 1950s until the mid 1960s, the African lusophone colonies only became independent in 1975, as one of the fundamental consequences of the 1974 Portuguese Revolution that put an end to the previous dictatorial regime.
8. Tom Avermaete, Serhat Karakayali and Marion Von Osten (ed.), Colonial Modern: Aesthetics of the Past, Rebellion for the Future (London: Black Dog, 2010), 10.

The diaspora of the Portuguese architects who, during the 50s and 60s, lived and worked in the Portuguese overseas territories was caused by personal factors with various origins (their birthplace, a family presence, political reasons or merely the ambition of new work prospects) and, in a way as well, fostered by the development policies for the colonies of the "Estado Novo." Upon the outcome of the 1st National Congress of Architecture held in 1948, these newly-trained architects set out to the overseas territories with a clear prospect of the possibility of applying the modern vocabulary in a less restrictive way. In order to understand the architectural production in the African territories, it is also important to underline the significance of the training supplement "away from home," particularly in the case of the experience acquired as trainees in Le Corbusier's ateliers by, for instance, Vasco Vieira da Costa (1911-1982) or Fernão Simões de Carvalho (1929-), who simultaneously studied urban planning in Paris, or else Paulo Melo Sampaio, who studied urban planning in Milan.
In this internationalization context, one architect stands out: Pancho Miranda Guedes (1925-2015), an exceptional figure, not only due to his academic training in South Africa, which would make him establish strong ties to the Anglo-Saxon culture, but also due to his constant travels (to Europe and to Mozambique's neighbouring countries) and his significant presence on some institutional stages of international architecture since the beginning of the 60s. Travelling to survey on-site the architectural works that were being made in Europe, the United States or Brazil, for instance, was not yet very frequent among Portuguese architects, particularly the ones who lived in the faraway African overseas territories. But it should be remembered that many architects who lived in Angola and Mozambique, in addition to performing their activity as professionals, conducted activities of a public nature, holding technical positions at local municipalities or teaching, which allowed them to enjoy leaves of absence to visit Portugal, and therefore to make study or work visits to Europe. In the case of the architects living in Mozambique, they travelled often to the neighbouring countries or participated in architecture and urban planning congresses or international fairs, particularly in South Africa, the former Rhodesia or Malawi, which allowed them to make contact with their peers as well as with other realities from the point of view of both architectural production and the development of urban centres. In this way, architects were in contact with international designs and works primarily through publications or periodicals, especially the magazine L'Architecture d'Aujourd'hui, the international periodical that was most read amongst Portuguese architects (for most of whom French was their first foreign language), or the magazine Arquitectura, which, during its post-Congress period, often published exemplary international architecture works.

Modern versus Colonial

In the context of the colonial society, two models of adopting architectural languages co-existed: one model of a more historical or monumental nature, which was present, in particular, in public works produced in Portugal for its overseas territories, and the predominance of a formal modern vocabulary, of an international nature, mainly associated with private initiative. Apparently, these two models are opposed under the ideological viewpoint. But ideologies, even the most dogmatic, are, in themselves, also contradictory. And the production of architecture corresponding to such ideologies is often contradictory as well. What one often sees in the architectural production of this period is the paradox in the design development, regardless of the formal model. In the case of the urban plan for the city of Beira, for instance, based on an urban model that falls within the "Garden City" models advocated by the GUU, [9] its authors propose a zoning that, after all, corresponds to the "segregation of the dwellers according to their customs" [10] (a borough for the population of European customs, a borough for the population of Asian customs and a native borough), and at the same time they quote the Athens Charter, electing three functions as the guidelines of the project: "dwelling, working and entertainment." [11]
In this same city, where good examples of modern vocabulary houses were built, the urban plots were drawn and dimensioned taking into account resident employee housing, which, although incorporated in the project, were segregated and less qualified spaces. Similarly, Vieira da Costa's project "Design of a Satellite City for Luanda" (1948) (Figure 1), applying the modern dogmas to the erection of a colonial town, is surely a paradoxical view, as he wrote on his final diploma project: "It is therefore incumbent on the European man to create in the native the need for comfort and a higher life, thus inciting him to the work that will lead him to settle down, and this will facilitate a more stable workmanship. The positioning of the houses and the location of native boroughs are the two main constituents that should govern the composition of the plan of a colonial town (...). In this way, we would rather place native boroughs around the central hub, taking due care to locate it, at all times, toward the lee of European housing areas, which must nevertheless be, at all times, isolated by means of a green screen wide enough to prevent the mosquito from passing over it. As it seems of necessity, under a health and social point of view, native populations should form various scattered groups that will embrace as small satellites the European hub, and so each sector of this hub will be served by a native group. In this way, we will shorten the distance to be covered between workplace and residence." [12]

9. Gabinete de Urbanização do Ultramar - Overseas Urban Office.
10. Câmara Municipal da Beira, Cidade da Beira - projecto de Urbanização - memória justificativa (Beira: Empresa Moderna, 1951), 95.
11. Ibid, 11.

It should be noted, however, that this hierarchical social organization model is based perhaps more on Le Corbusier's 1922 "Ville Contemporaine" project than on the Athens Charter postulates, in which the city was already conceived for a "classless" society.

Figure 1. Vasco Vieira da Costa - "Luanda Satellite Plan nº 3," 1948. Civic Center Detail and Housing Units. Source: Costa, 1984, p. 117.

Vasco Vieira da Costa, while his training was still fresh, caught between the modern paradigm and the colonial condition, asserted that it is "absolutely necessary to be colonial in order to be able to be a colonial urban planner." [13] As a counterpoint, one might stress the exemplary case of the Bairro do Prenda (1962-1965), in Luanda, by Fernão Simões de Carvalho and José Pinto da Cunha (1921-2006), which falls within the neighbourhood unit system proposed under the Luanda Master Plan (1961-1964) that was developed by a team led by the former (Figure 2). Based on urban planning models that cross the Corbusian premises and doctrines (from the Athens Charter to Chandigarh's hierarchical traffic system) and the social thinking of the urban planner Robert Auzelle, the Bairro do Prenda project incorporated not only housing (houses and collective structures) with community facilities, but also various social and ethnic groups.

Architecture in Angola and Mozambique: A Modern Laboratory

Regardless of the specific circumstances of those two African territories and the individual interpretations of the authors mentioned, it is possible to observe a common denominator in the developed works and assert an identity belonging to the various genealogies of the models conveyed by the Modern Movement.
Such identity is clearly shown in the adoption of a formal and spatial vocabulary composed of a combination of invariable features. For these architects, the chance to build in the African territory was the ideal laboratory, not only as to the more orthodox or more hybrid interpretation of the modern vocabulary, but also as to construction techniques and adaptation to geography and climate. The specific nature of this architectural production was possible due to a strong mastery of the technical and structural capabilities of reinforced concrete as a standard, industrial production constituent, as well as due to its expressive qualities as regards plasticity or texture. As a good example of this we might mention the brise-soleil or the multi-drawing grids, which favour not only the shading but also the natural ventilation of buildings in tropical climates. It is within this framework, between testing the modern lexicon and responding to the tropical climate, that the grid and the brise-soleil are exhaustively employed in the architecture of the African territory: from the common, anonymous building to the highbrow, authored building (Figures 3 and 4).

The search for plastic expression and spatial qualification, the employment of colour, exhaustively studied in the Salubra [14] and employed in Le Corbusier's post-war projects, as well as the search for the total work of art in the sense of integration or contamination between art and architecture, coming close to the Corbusian concept of "espace indicible," [15] are all elements that cross over such African works. On the one hand, bright, saturated colours would be employed in many works in Angola and Mozambique which, although not directly referenced in the Salubra catalogue, visually recall Le Corbusier's work of the end of the 40s and all along the 50s. On surfaces (such as floors, internal walls or external façades), colour could be applied through painting or by means of cladding materials, such as glazed mosaics or tiles (Figure 5).

Figure 5. Abreu, Santos e Rocha Building, Maputo, Mozambique (1953) - Pancho Miranda Guedes. Detail. Source: Inês Gonçalves, 2008, in Magalhães, 2009, p. 71.

In the extensive and heterogeneous work of Pancho Miranda Guedes in Maputo, for instance, the total work dimension, i.e. architecture and painting and/or sculpture all at the same time, is expressively achieved. But there is neither integration nor disciplinary autonomy; there is rather a work and an author (or his other self). For Pancho Guedes, the process of creation of his work admits all sorts of ways, a lot of contaminations and metamorphoses from the various artistic areas in space and time: "Drawing, painting, sculpture and architecture are a single language with many words and an endless alphabet. The words they lend one another are ideas, dreams and gestures - lines, shapes, colours, volumes and time." [16] An admirer of the great Mexican mural painters, such as José Orozco (1883-1949) or Diego Rivera (1886-1957), Pancho Guedes proposed, in most of his buildings, the creation of murals that were drawn by him and executed with lasting, resistant materials. [17] On the Abreu, Santos e Rocha building (1953), a huge mural made of small pebbles finishes off one of the volumes of the building, by means of a perforation effect that formalizes African arts and crafts imagery (Figure 6). In the "Dragon" building, it is the mural itself, located in the entrance gallery, clearly visible from the street, that gives the building its name.
This mural is executed as a reinterpretation of the "calçada à portuguesa" (Portuguese stone paving), once again stressing the architect's concerns as to the durability of the materials and the exploration of new (old) textures. [18]

16. Pedro Guedes, Pancho Guedes: Vitruvius Mozambicanus (Lisboa: Museu Colecção Berardo, 2009), 39.
17. Ibid, 65. "[...] I convinced the clients that the use of natural resistant materials (neither plastered nor painted) on well-visible vertical surfaces was a good solution under the economic viewpoint."
18. Ibid. "Because I am a Portuguese citizen, it seemed to me that the solution would be to resort to the techniques and materials of the pavers who make drawings on the sidewalks and squares of Portugal."
19. Ibid, 165. "These objects remind me that there are other creatures, other ways of life and other forms of expression. They give me a passage to such other worlds."

Figure 6. Abreu, Santos e Rocha building, Maputo, Mozambique (1953) - Pancho Miranda Guedes. Facade detail. Source: Inês Gonçalves, 2008.

In addition to the murals that are present in so many buildings he authored, with either internal or external presence, having either a more private or a more public, urban dimension, and being more or less conspicuous, such as the Zambi restaurant panel, the top piece of the "Leão que Ri" ("Laughing Lion") or the ceiling of the canopy of the Mann George building, this vocation for an architecture contaminated by the other arts is displayed in the sculptural, pictorial and chromatic nature of the universe of shapes and images that are present in his works. The African arts and crafts are a few of the keys to decoding the extensive work of Pancho Guedes, himself a collector of artefacts from "all over the world, but mostly from Mozambique and Angola" [19] and, as already mentioned above, the driving force of the remarkable trajectory of the Mozambican painter Malangatana. On the other hand, the close cooperation between architects and artists, often using African imagery, also contributed to the uniqueness of this universe of works. The imagery of African art and culture became, all along the 60s, one of the most important references for plastic artists residing in Angola and Mozambique (Figure 7).

Figure 7. (1958-1966) | Mural by Jorge Garizo do Carmo (1927-1997). Source: Ana Magalhães, 2008.

Modern Architecture: From Universal to Local

In spite of the common template, different genealogies of architectural models and languages may be found in the paths of those architects. In the case of Angola, for instance, Vieira da Costa and Simões de Carvalho, both Le Corbusier's disciples, built up their language by reinterpreting the master's references (Figure 8).

Figure 8. Radio Nacional de Angola, Luanda, Angola (1963-1967) - Fernão Simões de Carvalho. Source: Inês Gonçalves, 2008.

In its turn, the influence of Brazilian modern architecture, recalling the work of Oscar Niemeyer or Affonso Reidy, is obvious in the architecture of the city of Beira, such as the Manga Church (1955-1957) (Figure 9) by João Garizo do Carmo (1917-1974) or the Motel Estoril (1957-1959) by Paulo Sampaio.

Figure 9. Manga Church, Beira, Mozambique (1955-1957), João Garizo do Carmo. Source: Ana Magalhães, 2008.

There are many projects that, by interpreting their models in a hybrid manner, and sometimes behind the times, insist on the modern premises at a time when criticism was being felt and new answers were being looked for. In the African context, their search was for a less universal and more local answer.
Or, as João José Tinoco proposed, they crossed "regional and universal" [20] and "cosmopolitan and native." [21] The heterodox architecture of Pancho Guedes goes far beyond this, reinventing the modern and announcing the post-modern.

The role of private initiative orders in fostering freedom and acting as a catalyst for the modern project was of the essence. In the case of single houses, for instance, this allowed for testing multiple expressions of the modern vocabulary. The cases of José Pinto da Cunha, in Luanda, or Paulo Melo Sampaio and João Garizo do Carmo, in the city of Beira, should be stressed. In this set of works, tests are common, both in the space structures of the housing typology and in their formal nature, as well as in the way of effectively responding to climate conditions. Having as their common basis the Corbusian proposition of the "Five Points," [22] those architects sought other references, such as the plasticity of Brazilian modern architecture, in the case of Beira's architects, or the imagery of "Californian" houses, such as those proposed by the "Case Study House Program," [23] in the case of Pinto da Cunha.

20. João José Tinoco, "Da Arquitectura Moderna em África e o seu panorama em Lourenço Marques," in Capricórnio magazine, no. 2 (September 1958), 6-9.
22. Alfred Roth (ed.), Zwei Wohnhäuser von Le Corbusier und Pierre Jeanneret: Fünf Punkte zu einer neuen Architektur (Stuttgart: Akadem Verlag Dr. Fr. Wedekind and Co., 1927) and L'Architecture Vivante, nº 17 (1927).
23. Elisabeth A. T. Smith, Case Study Houses - The Complete CSH Program, 1945-1966 (Taschen, 2009).

The house that the architect Pinto da Cunha designed for his family in 1965 is a large parallelepiped volume partially elevated on pilotis. The volumetric clarity of the suspended parallelepiped, the strong interrelationship between interior and exterior, emphasized in the transition spaces that can be seen in the internal courtyard and the balcony that extends the living room, or in the space continuity ensured by the transparency obtained from extensive glass panes, and the construction details in the drawing of the staircase, furniture or water mirror, all these are a combination of constituents occurring frequently in the designs of Californian houses (Figure 10). In a more heterodox sense, Pancho Guedes' extensive work on single-family houses in Maputo reveals a reinterpretation of Corbusian thinking and work, but in this case with irony and eclecticism. This is the case, for instance, of the Matos Ribeiro Twin Houses (1952), where a rationalized space organization contrasts with the insertion of a combination of figurative elements that point to the Art Nouveau language or Gaudí's work. Although the articulation of the different floors in the house is made by staircases and ramps recalling Le Corbusier's "promenade architecturale," the important point to be stressed is the multiple conjunction of models in the same project, between the functional simplicity that shows that the lessons of the Modern Movement were learnt and the complexity generated by the overlapping (or collage) of more subjective, personal images from multiple origins and times (Figure 11).

The progressive urban model of the Marseille Housing Unit (1945-1952) allowed for intense typological research in the field of collective housing after World War II.
The "Unité d'habitation de grandeur conforme," developed as a prototype re-equating the functionalist dimension through the expression-manifesto of the house as a machine for living, would allow for wide experimentation in the study of housing for the masses, through research on new forms of conjunction and internal organization of the dwellings, circulation schemes and space hierarchy. The Housing Unit model, a mixed repeatable block, would be thoroughly exploited, not only in the European reconstruction after the war, but also in other lands and geographies, seeking its adaptation to different climates, cultures and social contexts. But if "(...) as a prototype the Unité was unavoidable, the problem was to transform its fundamental lessons into a more flexible terminology attuned to particular cities, societies and climates." [24]

The Angolan and Mozambican cities, whose predominant model was characterized by the sectorial city having a design inspired by the Garden City, based on a radial and axial composition, with wide avenues and extensive low-density residential areas, favour the single house. However, here and there, particularly in Luanda or Maputo, one can find a few detail plans that fall within the conceptual and formal framework of the urban models based on the Charte d'Athènes and foster the construction of collective housing units. Such housing buildings, addressed to an urban colonial middle class, started to be designed at the end of the 50s and are a significant mark of the largest Angolan and Mozambican cities of the 60s. Although much smaller than the Marseille Housing Unit, such buildings are mixed housing, service and shop blocks, which are based on the premises of the reference model and test new housing typologies appropriate for the tropical climate. In Maputo, the Tonelli building (1954-1958), designed by Pancho Miranda Guedes (1925-2015), is a housing tower with twelve floors, combining duplex and single apartments; it has a little of its genesis in the housing unit of the Unité and consists in "the original human shelf" (Figure 12), as Pancho Guedes stated. [25] Another example is the TAP/Montepio building (1955-1960), by Alberto Soeiro (1917-n.d.), where the conjunction of housing cells around two external circulation galleries is remarkable. In the city of Lobito, the Universal building (1957-1961), by Francisco Castro Rodrigues (1920-2015), exploits the integration of housing spaces, community spaces and public spaces (Figures 13 and 14). Circulation through an external peripheral gallery, a skillful composition of the housing cell conjunction structure and a rational sense of the internal organization of the dwellings are common denominators of all three projects and reflect not only an appropriate response to the characteristics of the climate but also the colonial society's desire for modernization. Such works, which are the heirs of the Unité and the modern premises, are, however, late examples, developed between the end of the 50s and all along the 60s, at a time when, in Europe, architects were acquiring a critical conscience over the dogmas of the Modern Movement.

Among community facilities of private initiative, the typologies dedicated to culture, tourism and leisure should be highlighted.
Unlike the large Government community buildings, often subject to the restrictive conditions of a cultural policy imposed by the regime, and because they are intended for more informal functions, these works are ideal for applying the modern codes freely. Hotels, clubs, cinemas and theatres are important facilities in settling a population, and they express, in an exemplary way, the idea of prosperity and well-being that was felt among the urban middle class of the Portuguese colonies, especially during the period that preceded the toughening of the colonial war (1961-1974). They reflect very well this society's wish for modernity and progress. One of the most paradigmatic examples is the open-air movie theatre, the so-called cine-esplanada. [26] If the movie theatre symbolically embodied the idea of progress, the architecture of such spaces affirmed its consciousness: the cine-esplanada is cinematographic in itself. Its spatial structures make us experience an "architectural promenade" with long cinema "travellings." These are large-sized buildings, intended for a mass public and seeking a monumental scale, like the Cine Flamingo in Lobito (Figure 15) and the Cine Miramar in Luanda. Their symbolic nature defines them as strategic landmarks in the urban context. The main concern that is common to all these projects is the structural design of the roof or projected roof platform aimed at giving it an expressive form, or else the significance given to the size and plasticity of the screen. Such plasticity, as well as the textures and colour combinations used, is a clear reference to the "free form modern" developed by Brazilian architects in the 1940s and 1950s or by Le Corbusier's late works.

If private initiative functioned as a catalyst for the modern project, orders from local organizations, from the Church to local authorities, for instance, also made the more traditional and historical nature of public institutions lose their place in favour of a growing openness to modern languages. In the lineage of the new monumentality [27] (Figure 16) advocated by post-war modern architects, since the middle of the 1960s the public work sought a new relationship with the city and acquired a more symbolic and human sense.

Figure 16. Beira, Mozambique (1958-1966).

African Modern Architecture Legacy: Identity and Future

Today, in a post-colonial context, the study of the architectural production in Angola and Mozambique in this specific colonial period raises relevant issues. In addition to the specific inventory and review of the works and their authors, it is essential to consider the value and place of this heritage in the History of Architecture. Firstly, on a broader approach, the idea of its legitimate belonging to the Modern Movement of the second post-war is to be stressed. In spite of the ideological contradictions between the assumptions of democracy and the colonial condition, the identity assertion of these works with the aesthetic and construction values of Modern Movement architecture is undeniable. More than forty years after the independences of Angola and Mozambique, identity and heritage issues are still a sensitive topic. Far beyond the difficulty of dealing with the memory of this Modern Movement heritage, we are confronted with the History of these countries in transformation: from decolonization to civil war, from the nationalist premises of independence to the search for a new course and a new identity.
Only an in-depth knowledge of the cultural and scientific value of this heritage can overcome other priorities, whether of a political, social or economic nature.
2019-06-19T13:23:25.396Z
2017-12-29T00:00:00.000
{ "year": 2017, "sha1": "20a468bd548a42eedb8d3f19d804b30b78b4edf4", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.30958/aja.4-1-2", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "faaa9f4cc5243eabf859fc758ffbb27f1f7bfca8", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [ "Art" ] }
35182589
pes2o/s2orc
v3-fos-license
Pathogenesis and therapeutic approaches for non-alcoholic fatty liver disease

Non-alcoholic fatty liver disease affects approximately one-third of the population worldwide, and its incidence continues to increase with the increasing prevalence of other metabolic disorders such as type 2 diabetes. As non-alcoholic fatty liver disease can progress to liver cirrhosis, its treatment is attracting greater attention. The pathogenesis of non-alcoholic fatty liver disease is closely associated with insulin resistance and dyslipidemia, especially hypertriglyceridemia. Increased serum levels of free fatty acid and glucose can cause oxidative stress in the liver and peripheral tissue, leading to ectopic fat accumulation, especially in the liver. In this review, we summarize the mechanism underlying the progression of hepatic steatosis to steatohepatitis and cirrhosis. We also discuss established drugs that are already being used to treat non-alcoholic fatty liver disease, in addition to newly discovered agents, with respect to their mechanisms of drug action, focusing mainly on hepatic insulin resistance. As well, we review clinical data that demonstrate the efficacy of these drugs, together with improvements in biochemical or histological parameters. © 2014 Baishideng Publishing Group Inc. All rights reserved.

INTRODUCTION

Non-alcoholic fatty liver disease (NAFLD), the accumulation of lipid within hepatocytes, is a common disease [1]. The worldwide prevalence of NAFLD is estimated to be 20%-30% [2], increasing to 57%-74% among obese patients [3]. NAFLD refers to a wide spectrum of fatty degenerative disorders of the liver in the absence of alcohol intake, ranging from simple steatosis to steatohepatitis and cirrhosis [4]. Non-alcoholic steatohepatitis (NASH) is histologically characterized by inflammatory cell recruitment. NASH is a significant risk factor for hepatic cirrhosis, compared with simple steatosis [5], and 4%-27% of cases of NASH progress to hepatocellular carcinoma after the development of cirrhosis [6]. In one study, NAFLD was present in 75% of obese [body mass index (BMI) ≥ 30 kg/m²] patients, 16% of non-obese patients, and 34%-74% of patients with type 2 diabetes [7]. Another study reported diagnoses of fatty liver in 39% of obese (BMI ≥ 30 kg/m²) patients, 41% of patients with known type 2 diabetes, and 32% of patients with dyslipidemia [8]. Patients with NAFLD are not only insulin resistant, but also tend to present with alterations in plasma triglyceride (TG) levels [9]. NAFLD is strongly associated with metabolic syndrome, especially insulin resistance, central obesity, and dyslipidemia. Therefore, NAFLD is regarded as a difficult-to-treat component of metabolic syndrome [10]. In this review, we investigate the mechanisms of hepatic fat accumulation, focusing on the role of insulin resistance therein, and review current therapeutic options and new candidate drugs for the treatment of NAFLD.
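Since the prevalence figures above are stratified by BMI, a minimal sketch of the underlying arithmetic may be useful; the 30 kg/m² cutoff is the one used in the cited studies, and the example values are illustrative only.

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body mass index: weight (kg) divided by height (m) squared."""
        return weight_kg / height_m ** 2

    def is_obese(weight_kg: float, height_m: float, cutoff: float = 30.0) -> bool:
        """Obesity classification at the BMI >= 30 kg/m^2 threshold."""
        return bmi(weight_kg, height_m) >= cutoff

    print(round(bmi(95.0, 1.70), 1))  # 32.9 (kg/m^2)
    print(is_obese(95.0, 1.70))       # True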
Insulin resistance - free fatty acid flux and hyperinsulinemia

Hepatic steatosis is caused by an imbalance in triglyceride movement through the liver cell. Triglyceride is composed of free fatty acid (FFA) and glycerol. Total FFA is derived from three sources: the diet (15%), de novo synthesis (26%), and circulating FFA (56%) [11]. A high-fat diet is known to lead to the development of hepatic steatosis. However, estimates suggest that approximately 60% of liver fat is derived from circulating nonesterified fatty acids (NEFAs) in individuals who eat a normal fat-containing diet [11]. Obesity is associated with insulin resistance and an elevated leptin level. In particular, increased visceral fat correlates with peripheral and hepatic insulin resistance [12,13]. Insulin resistance in skeletal muscle and adipose tissue results in increased levels of NEFAs through increased lipid oxidation in adipose tissue (Figure 1). Accordingly, NEFA flux plays an important role in hepatic fat accumulation [14]. An increase in hepatocellular diacylglycerol is associated with decreased tyrosine phosphorylation of insulin receptor substrate 2 (IRS-2) [15,16]. In turn, the decreased activity of IRS-2 and PI3K leads to increased hepatic glucose production [17]. Hyperinsulinemia also arises in response to insulin resistance in adipose tissue, leading not only to downregulation of IRS-2 in the liver, but also to a continued increase in the level of sterol regulatory element binding protein-1c (SREBP-1c) via the insulin signaling pathway involving AKT2, liver X receptor (LXR) and mammalian target of rapamycin [18,19]. Elevated levels of SREBP-1c up-regulate lipogenic gene expression, increase fatty acid synthesis, and accelerate hepatic fat accumulation [20]. Additionally, overexpression of SREBP-1c represses IRS-2 expression [21]. Glucose-stimulated lipogenesis is mediated by carbohydrate-responsive element-binding protein (ChREBP) in the liver. Like SREBP-1c, ChREBP increases lipogenesis by inducing lipogenic gene expression during consumption of a diet high in carbohydrates [22,23].
Endoplasmic reticulum stress

The endoplasmic reticulum (ER) is an intracellular organelle that plays an important role in the synthesis, folding, and trafficking of proteins. Cellular nutrient status and energy condition highly influence the function of the ER, and dysfunction in the ER causes accumulation of unfolded proteins therein, triggering an unfolded protein response (UPR) [24]. Under stress, such as hypoxia, inflammation and energy excess, the UPR is characterized by adaptive cellular processes of increased degradation of proteins and translational arrest of protein synthesis to restore normal function of the ER. As well, the UPR mediates metabolic and immune responses that aggravate insulin resistance [25-27]. Both PKR-like kinase and the α-subunit of translation initiation factor 2 (eIF2α), well-known ER stress markers, are increased in hepatocytes of ob/ob mice, compared with control mice [26]. Obesity causes ER stress that leads to suppression of insulin signaling through serine phosphorylation of insulin receptor substrate-1 (IRS-1) and activation of the c-Jun N-terminal kinase (JNK) pathway [26]. Among subjects with metabolic syndrome, those with NASH showed higher levels of phosphorylated JNK protein, compared to subjects with simple hepatic steatosis. Furthermore, subjects with NASH did not generate spliced X-box-binding protein-1 (sXBP-1), which is a key regulator of ER stress in relation to insulin action [24,26]. Additionally, weight reduction in obese subjects has been shown to induce improvement in ER stress via suppression of phosphorylated JNK and eIF2α in adipose tissue and the liver [28].

Role of oxidative stress - mitochondrial dysfunction

The two-hit hypothesis is a key concept of NAFLD pathogenesis. In fatty livers, simple hepatic steatosis (first hit) sensitizes the liver to inflammatory cytokines or oxidative stress (second hit), leading to the development of steatohepatitis [29]. Oxidative stress results from a serious imbalance between the limited antioxidant defenses and excessive formation of reactive species such as reactive oxygen species (ROS) or reactive nitrogen species (RNS) [30]. ROS is a collective term that describes a variety of species of free radicals derived from molecular oxygen, such as superoxide, hydrogen peroxide, and the hydroxyl radical [31]. In cells, mitochondria are a major source of ROS generation. An important factor modulating mitochondrial ROS generation is the redox state of the respiratory chain [32,33]. FFAs are metabolized via the mitochondrial β-oxidation pathway and the tricarboxylic acid (TCA) cycle, which generates citrate that in turn inhibits glycolysis. As a result, glucose oxidation and glucose uptake via glucose transporter type 4 (GLUT4) in skeletal muscle are reduced [34,35]. To compensate for the excessive fat storage in the liver, increased hepatic FFA uptake stimulates hepatic oxidation of fatty acids in obese individuals. Mitochondrial FFA oxidation is maintained until mitochondrial respiration becomes severely impaired [36,37]. However, accelerated β-oxidation not only causes excessive electron flux in the electron transport chain, but also leads to increased production of ROS, and can lead to mitochondrial dysfunction [38]. Excessive ROS production by mitochondria can lead to oxidative damage to the mitochondrial membrane and DNA and can impair mitochondrial metabolic functions [33]. The increase in hepatic lipogenesis in NASH results in increased production of malonyl-CoA. Inhibition of carnitine palmitoyltransferase-I (CPT-1)
by malonyl-CoA leads to decreased entry of long-chain fatty acids into the mitochondria, and causes reduced β-oxidation and enhanced triglyceride accumulation in the liver [38-40]. The nuclear receptor peroxisome proliferator-activated receptor α (PPAR-α) plays an important role in the transcriptional control of many enzymes involved in mitochondrial fatty acid β-oxidation. Peroxisome proliferator-activated receptor-gamma coactivator (PGC)-1α cooperates with PPAR-α and regulates genes that encode mitochondrial fatty acid oxidation enzymes, such as CPT-1 and medium-chain acyl-CoA dehydrogenase [40]. Previously, a PPAR-α-deficient mouse model showed a lack of hepatic peroxisome proliferation and dyslipidemia with obesity and hepatic steatosis [41].

Inflammation and adipokines

Overall obesity is correlated with NAFLD, and accumulation of intra-abdominal fat in particular is believed to play an important role in the development of insulin resistance [12,13]. Meanwhile, hepatic fat accumulation is associated with insulin resistance independent of intra-abdominal fat accumulation and overall obesity. Even in normal-weight subjects, hepatic steatosis has been shown to be related to various parameters of insulin resistance, such as basal glucose level or serum FFA level [42]. In addition to being a major organ of triglyceride deposition, adipose tissue acts as an endocrine organ that secretes several hormones [43]. Adipocytes secrete adiponectin and leptin, in addition to other adipokines, such as retinol-binding protein, tumor necrosis factor-α (TNF-α), interleukin 6 (IL-6), and plasminogen activator inhibitor-1 [43]. Adiponectin stimulates phosphorylation of AMP-activated protein kinase (AMPK) and acetyl-CoA carboxylase (ACC) in the liver and muscles, thereby increasing glucose utilization and fatty-acid oxidation [44]. In a previous study, serum adiponectin levels decreased with increases in obesity, in particular increases in intra-abdominal fat mass [45,46]. In another study, adiponectin knockout mice fed a high-fat diet exhibited increased incidences of obesity, hyperinsulinemia, and steatohepatitis. These experimental data indicate that adiponectin may play a key protective role against the progression of NASH [47]. Reportedly, adipose tissue in obese individuals stimulates a shift in macrophage activation from the alternative response (M2) to the classic response (M1), and these classically activated macrophages (CAMs) secrete a variety of inflammatory cytokines, such as TNF-α, IL-6, and NO [48]. Additionally, studies showed that inflammatory activation of hepatic Kupffer cells in ob/ob mice promotes hepatotoxicity, resulting in hepatic insulin resistance and steatohepatitis [49,50]. Thus, increases in TNF-α and IL-6 in obese subjects may play an important role in insulin resistance and hepatic steatosis [51,52].

Gut-microbial alteration and TLR stimulation

As mentioned above, obesity is often associated with NASH and systemic inflammation characterized by increases in inflammatory cytokine levels. Obesity also can cause increased intestinal mucosa permeability and endotoxin levels in the portal circulation that can contribute to hepatocellular damage [53,54]. Kupffer cells in the liver play a key role in clearing endotoxin and are activated through Toll-like receptor 2, 3, 4 and 9 signaling in the presence of endotoxin. In particular, activation of Toll-like receptor 4 (TLR4) is reportedly associated with stimulation by lipopolysaccharide (LPS) [55-57]. Previously, animal model studies showed that TLRs 2, 4 and 9 may contribute to the pathogenesis of NAFLD [55,58]. Activated Kupffer cells induce expression of pro-inflammatory cytokines, such as TNF-α, IL-6, IL-18 and IL-12, as well as anti-inflammatory cytokines [59]. TLRs including TLRs 2, 4 and 9 are activated via a MyD88-dependent pathway. This pathway consists of the activation of the serine kinase IL-1R-associated kinase and TNF-receptor-associated factor 6 and is involved in the activation of the transcription factor NF-κB, which is related to inflammatory cytokine production [60].

Life style modification - diet and exercise

Weight loss due to diet and exercise has been demonstrated to alleviate hepatic steatosis [61]. Body weight reduction and exercise are important independent factors for improvement of hepatic steatosis [62]. In obese women, hepatic fat content measured by magnetic resonance imaging was shown to decrease in response to weight loss interventions [63]. Several studies have shown a significant reduction in alanine transaminase (ALT) levels and improvement in biochemical markers following intervention with a calorie-restricted diet combined with exercise [63,64]. A few studies have also shown histologic improvement with increased exercise and weight reduction [65,66] (Table 1). Exercise improves insulin sensitivity in skeletal muscle via GLUT4 expression and increases glucose utilization. Thus, exercise decreases levels of serum glucose and insulin [67]. An improvement in hyperinsulinemia can result in decreased liver fat mass, because hyperinsulinemia stimulates hepatic steatosis via the SREBP-1c pathway [19]. In particular, NAFLD patients with metabolic syndrome show a great improvement in hepatic steatosis after weight loss [68].

Table 1. Treatment outcomes of variable regimens. Columns: Study, Treatment group, Control group, No. (No.: number; US: ultrasonography; RCT: randomized controlled trial.)

Insulin sensitizer - thiazolidinedione, metformin

Thiazolidinedione: Thiazolidinediones (TZDs) are insulin-sensitizing agents that have been shown to improve not only hepatic steatosis, but also whole-body insulin resistance [69]. Improvements in insulin resistance and in histologic and biochemical parameters have been reported with TZD treatment [70-74]. Rosiglitazone is one TZD and is
associated with an increased risk of myocardial infarction and cardiovascular death [75]. Meanwhile, pioglitazone is regarded as safe in regard to cardiovascular outcomes and is not associated with increased cardiovascular risk [76,77]. In patients with type 2 diabetes, pioglitazone has been recommended for the treatment of steatohepatitis proven by liver biopsy; however, its role in non-diabetic patients has not been established. The American Association for the Study of Liver Diseases (AASLD) introduced pioglitazone as a first-line treatment of NAFLD in patients with type 2 diabetes [78]. TZDs increase glucose utilization in peripheral tissue and improve whole-body insulin sensitivity, as measured by the hyperinsulinemic euglycemic clamp technique, in patients with type 2 diabetes. Moreover, serum adiponectin levels increase and serum insulin levels decrease after treatment with pioglitazone [79,80]. An increase in serum adiponectin contributes to alleviation of hepatic steatosis and improves hepatic and peripheral insulin resistance [79]. As mentioned above, adiponectin increases lipid oxidation of FFA via ACC phosphorylation in the liver [44], and promotes the activation of anti-inflammatory M2 macrophages rather than M1 macrophages [81]. Obesity is closely related to an increase in NAFLD risk [82]. Increased levels of inflammatory adipose tissue macrophages (ATMs) and their secreted cytokines in a mouse model were shown to be related to systemic insulin resistance, which is associated with NAFLD development [15,83]. According to previous studies, ATMs are increased in obese subjects [84], and pioglitazone treatment results in a decrease not only in ATM content, but also in the inflammatory markers TNF-α, IL-6, and inducible nitric oxide synthase (iNOS) [85,86]. TZDs also promote the alternative activation of monocytes into macrophages with anti-inflammatory properties, as opposed to the proinflammatory phenotype [87]. Although the pathogenesis of NAFLD development is closely related to obesity, the distribution of fat is more important than overall obesity. Excessive visceral fat accumulation plays an important role in the development of insulin resistance and NAFLD by acting as a source of FFA [12]. Pioglitazone is strongly associated with fat redistribution, with increases in the subcutaneous fat area and decreases in the visceral fat area (visceral-to-subcutaneous fat ratio) [88]. Another study showed that the ratio of visceral fat thickness to subcutaneous fat thickness decreases after pioglitazone treatment and is correlated with a change in high-sensitivity C-reactive protein levels [89]. TZD treatment results revealed a decrease in serum FFA levels, which in turn reduced FFA supply to the liver and led to a decrease in hepatic triglyceride content [90]. Recent studies have focused on the role of sirtuin-6 (SIRT-6) in the glucose and lipid metabolism associated with TZDs. TZD treatment reduced hepatic fat accumulation and increased expression of SIRT-6 and PGC1-α in rat livers [91]. Also, liver-specific SIRT-6 knockout mice exhibited fatty liver formation [92], leading to NASH [93].
Metformin: Metformin improves insulin resistance and hyperinsulinemia by increasing peripheral glucose uptake and decreasing hepatic gluconeogenesis [94]. Metformin activates AMP-activated protein kinase (AMPK) via an LKB1-dependent mechanism in skeletal muscle; it can also activate AMPK by promoting AMP accumulation in hepatocytes. The increase in AMP interferes with glucagon action and decreases cAMP levels, leading to decreased hepatic glucose production [95,96]. Activation of AMPK results in decreased hepatic triglyceride synthesis and increased fatty acid oxidation [97], as well as attenuated hepatic steatosis due to decreased SREBP-1c activity [98]. A randomized controlled trial showed that subjects treated with metformin exhibited significant improvement in ALT levels compared with those who were on a restricted diet or were treated with vitamin E, as well as improvements in histology, after 12 mo of treatment [99,100]. Many studies have shown that metformin treatment normalizes transaminase levels and decreases hepatic steatosis as determined by follow-up ultrasound; nevertheless, histologic data remain limited [100-103]. As NASH is closely associated with the development of HCC and liver fibrosis, the ability of metformin to reduce these severe outcomes, including mortality, may be limited [104].
Antioxidants - vitamin E (α-tocopherol), pentoxifylline
As mentioned above, oxidative stress contributes to the progression from simple hepatic steatosis to NASH. A recent study reported that subjects who were treated with vitamin E (α-tocopherol) showed improvement in hepatic steatosis and serum aminotransferase levels compared to a placebo group [74]. Vitamin E (α-tocopherol) has been used to treat non-diabetic NASH patients diagnosed by liver biopsy [78]. Meta-analyses of vitamin E have revealed an increase in all-cause mortality with high-dose (≥ 400 IU/d) vitamin E supplement use, especially in subjects with chronic diseases, such as type 2 diabetes, or at high risk for cardiovascular disease events; however, these results are uncertain in healthy subjects [105,106]. Two pilot studies reported improved ALT levels with vitamin E treatment [107,108]. However, two randomized controlled trials failed to show the efficacy of vitamin E treatment in NAFLD [109,110]. Pentoxifylline, a TNF-α inhibitor, has also been considered for the treatment of hepatic steatosis, since TNF-α plays an important role in the progression of simple hepatic steatosis to steatohepatitis. In previous studies, administration of pentoxifylline produced improvements in biochemical markers, such as aminotransferase levels and HOMA-IR, in patients with NASH [111,112]. Nevertheless, further study is needed to prove the efficacy of pentoxifylline with respect to histologic improvement of NAFLD.
Lipid-lowering agents - fibrates, ezetimibe and statins
Hypertriglyceridemia is a major component of metabolic syndrome and is strongly associated with NAFLD. Increased FFA delivery to the liver causes accumulation of hepatic fat [9]. Many different lipid-lowering agents have been investigated for the treatment of NAFLD. Patients treated with gemfibrozil, one type of fibrate, showed decreased ALT levels compared to the control group [113]. However, clofibrate did not show a beneficial effect on NAFLD [114]. PPAR-α modulates not only FFA transport and β-oxidation, which decrease triglycerides in hepatocytes, but also glucose and amino acid metabolism in the liver and skeletal muscle. PPAR-α activation is involved in lipoprotein metabolism by increasing lipolysis, thus reducing the production of triglyceride-rich particles [115]. Fenofibrate increased levels of PPAR-α and decreased hepatic steatosis in an APOE2KI mouse model of diet-induced NASH [116]. A prospective study using atorvastatin reported significant reductions in serum transaminase levels [117,118]. Atorvastatin induces hepatic low-density lipoprotein receptor-related protein 1 (LRP-1), which plays an important role in the hepatic clearance of circulating triglycerides [119]. In the disposal of chylomicrons by hepatocytes, the interaction between LRP-1 receptors and ApoE plays an important role [120]. Thus, ApoE-deficient mice developed hepatic steatosis even when fed a normal chow diet. Accordingly, ApoE may play a key role in intracellular metabolism and in the control of VLDL production by hepatocytes [121]. Statins are very important drugs for treating dyslipidemia in subjects with both insulin resistance and NAFLD. However, there is continued concern about the use of statins in subjects with established liver disease. According to several randomized controlled and retrospective studies, statins rarely induce serious liver injury [122-125]. Ezetimibe, a potent inhibitor of cholesterol absorption, has been reported to improve hepatic steatosis in obese Zucker fatty rats [126]. In a randomized controlled study, six months of treatment with ezetimibe led to improvements in serum ALT levels and histologic findings [127,128].
Ursodeoxycholic acid
Ursodeoxycholic acid (UDCA) is widely used in subjects with abnormal liver function. Several studies have investigated the efficacy of UDCA as a treatment for NAFLD, reporting that UDCA treatment attenuated hepatic steatosis, including histologic improvement [114,129,130]. However, in a placebo-controlled randomized trial, UDCA exhibited limited efficacy with respect to histologic improvement in subjects with NASH, and improvements in liver enzymes did not differ between the UDCA and placebo groups [130]. Accordingly, the AASLD does not recommend UDCA for the treatment of NAFLD [78].
Other treatment options - future candidates
Cilostazol: SREBP-1c is a key regulator of lipogenic gene expression in hepatocytes. Recent data have shown that cilostazol, a selective type Ⅲ phosphodiesterase inhibitor, inhibits SREBP-1c expression via the suppression of LXR and Sp1 activity [131]. Cilostazol also decreases serum triglyceride levels by increasing lipoprotein lipase (LPL) activity in STZ-induced diabetic rats [132]. In addition, experimental data show that cilostazol stimulates LRP1 promoter activity in hepatocytes, leading to increased hepatic LRP1 expression [133]. In a study of two experimental NAFLD models, mice fed a high-fat/high-calorie (HF/HC) diet and mice fed a choline-deficient/L-amino acid-defined (CDAA) diet, cilostazol improved hepatic steatosis in both models [134]. Cilostazol thus shows potential for the improvement of hepatic steatosis, and further data on its role in NAFLD are needed.
Polyunsaturated fatty acids and monounsaturated fatty acids: Polyunsaturated fatty acids (PUFAs) are found primarily in safflower, corn, soybean, cottonseed, sesame, and sunflower oils. Omega-3 fatty acids are representative PUFAs. A marked increase in the long-chain PUFA n-6/n-3 ratio is observed in NAFLD patients and is associated with increased production of pro-inflammatory eicosanoids and dysregulation of liver and adipose tissue function [135]. PPAR-α activity is impaired in conditions in which levels of circulating n-3 PUFAs are decreased and the n-6/n-3 fatty acid ratio is increased [136,137]. Treatment with n-3 PUFAs was shown to improve biochemical parameters and to alleviate hepatic steatosis on follow-up ultrasound [138,139]. Monounsaturated fatty acids (MUFAs) are abundant in olive oil. In a rat model, supplementation with MUFAs resulted in improved insulin sensitivity compared to rats fed a saturated fatty acid (SFA) diet. Additionally, GLUT4 translocation in skeletal muscle was decreased in rats fed an SFA diet, but not in those fed a MUFA diet; increased GLUT4 translocation is related to an improvement in insulin sensitivity [140]. In obese rats, a MUFA diet attenuated hepatic steatosis and altered hepatic fatty acid levels [141]. The beneficial effects of dietary MUFAs in NAFLD patients should be investigated.
GLP-1 analogue: Exenatide is the synthetic form of exendin-4, and it stimulates endogenous insulin secretion, leading to decreases in blood glucose. In one animal study, treatment with exendin-4 resulted in a decrease in hepatic fat content, as well as a reduction in fatty acid synthesis, in the livers of ob/ob mice [142]. In patients with type 2 diabetes, an exenatide treatment group showed greater improvements in liver enzymes and attenuation of hepatic steatosis than the metformin treatment group; however, this study was limited by the lack of histologic confirmation [143]. To prove the efficacy of glucagon-like peptide-1 (GLP-1) analogues in the treatment of NAFLD, randomized controlled trials over a longer period are required.
MK615: MK615 is extracted from Japanese apricots and can suppress the production of inflammatory cytokines, such as TNF-α and IL-6, by inactivating NF-κB [144,145]. MK615 is regarded as a hepatoprotective agent, as MK615 treatment groups have been shown to exhibit greater decreases in liver enzyme levels compared with control groups. In animal models, MK615-treated mice showed greater improvement in liver histology than control mice [146]. Thus, further studies are required to clarify the effects of MK615 in subjects with NAFLD.
CONCLUSION
NAFLD is a common disease that can progress to liver cirrhosis. Moreover, NAFLD is strongly associated with type 2 diabetes and insulin resistance. NAFLD is the result of complex interactions among diet, metabolic components, adipose tissue inflammation, and mitochondrial dysfunction. The pathogenesis of hepatic steatosis has not yet been fully determined. In this review, we outlined previously known mechanisms of NAFLD, as well as new mechanisms that have recently been discovered. Above all, we reviewed the mechanisms of drugs matched to the pathogenesis of NAFLD. Furthermore, we introduced future treatment options for NAFLD. TZDs play a key role in restoring insulin sensitivity and decreasing adipose tissue inflammation, generating histologic improvements in steatohepatitis. Pioglitazone can be used to treat NASH in patients with type 2 diabetes and biopsy-proven NAFLD; meanwhile, non-diabetic patients can be treated with vitamin E. Metformin is a well-known insulin sensitizer; however, further study is needed to prove histologic improvements in patients with NAFLD. Additionally, the cholesterol-lowering agent ezetimibe has also shown histologic improvements. Cilostazol acts on SREBP-1c and can improve dyslipidemia; however, further research is needed to clarify the relationship between NAFLD and cilostazol. Finally, there is an outstanding need for effective preventive and therapeutic regimens to overcome NAFLD.
Figure 1 Mechanism of hepatic insulin resistance and the key pathway of drug action. Delivery of FFAs to the liver and skeletal muscle is increased in insulin-resistant conditions, and these are metabolized via mitochondrial β-oxidation. Consequently, hyperglycemia and increased hepatic FFA uptake reduce glucose uptake and oxidation in skeletal muscle. Diet and exercise are the main treatment strategies for this pathogenesis; insulin sensitizers and MUFAs may contribute to reducing peripheral insulin resistance. Pioglitazone and fenofibrate act on mitochondrial β-oxidation and reduce hepatic steatosis. Accelerated β-oxidation also causes increased production of ROS; vitamin E can reduce oxidative stress. Adipose tissue inflammation leads to inflammatory activation of hepatic Kupffer cells via the classic response (M1), which produce inflammatory cytokines. This is also associated with decreased adiponectin levels and promotes hepatic steatohepatitis. Pentoxifylline inhibits TNF-α and alleviates steatohepatitis. Hyperglycemia caused by insulin resistance up-regulates lipogenic gene expression, such as SREBP-1c and ChREBP, and induces lipogenesis in hepatocytes. Cilostazol may inhibit SREBP-1c. FFA: Free fatty acid; TG: Triglyceride; CPT-Ⅰ: Carnitine palmitoyltransferase-Ⅰ; ACC: Acetyl-CoA carboxylase; ATGL: Adipose triglyceride lipase; ChREBP: Carbohydrate responsive element binding protein; SREBP-1c: Sterol regulatory element binding protein-1c; TCA: Tricarboxylic acid; ROS: Reactive oxygen species; IRS: Insulin receptor substrate; DAG: Diacylglycerol; G-6-P: Glucose 6-phosphate; TNF-α: Tumor necrosis factor-α; MUFA: Monounsaturated fatty acid; M1: Kupffer cells activated via the classic pathway.
2017-10-24T16:24:18.634Z
2014-11-27T00:00:00.000
{ "year": 2014, "sha1": "9fb6042586466169f480a37c960da58d41ac02ca", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.4254/wjh.v6.i11.800", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "9fb6042586466169f480a37c960da58d41ac02ca", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
254072602
pes2o/s2orc
v3-fos-license
Patient safety culture in the operating room: a cross-sectional study using the Hospital Survey on Patient Safety Culture (HSOPSC) Instrument Background Credible evidence has established a link between the level of patient safety culture in healthcare environments and patient outcomes. Patient safety culture in the operating room has received scant attention despite the burden of adverse events among surgical patients. We aimed to evaluate the safety culture in our operating rooms and compare it with existing data from other operating room settings. Methods We investigated the patient safety culture in the operating rooms of our hospital as perceived by the surgeons, nurse anaesthetists and perioperative nurses, using the Hospital Survey on Patient Safety Culture (HSOPSC) instrument. IBM Statistical Package for Social Science software, version 25, was used for data entry and analysis. Differences were considered significant when p < 0.05. Results A total of 122 completed surveys were returned out of a survey population of 132 frontline staff, yielding a response rate of 92.4%. The overall average composite score was 47%. The average composite scores ranged from 17-79.6% across the 12 dimensions of the HSOPSC, with teamwork within units being the only dimension with demonstrable strength. Non-punitive response to error, communication openness, feedback and communication about error, frequency of events reported, handoffs and transitions, and staffing need improvement. The perceived safety culture varied according to work areas and professional roles, with nurse anaesthetists having the highest perception and the surgeons the least. Conclusion Patient safety culture in our operating rooms is adjudged to be weak, with only one of the twelve dimensions of the HSOPSC demonstrating strength. This is notwithstanding its comparative strengths relative to other operating room settings. Introduction Two decades ago, while adopting the resolution on patient safety in healthcare at the 55th World Health Assembly, the World Health Organization (WHO) recognized the need to promote patient safety as a fundamental principle of all health systems and urged support for member states to promote a culture of safety within health care organizations and to encourage research into patient safety [1]. But unsafe care has remained a major source of morbidity and mortality [2], prompting the WHO to launch the first World Patient Safety Day on 17 September 2019. Beyond the ethical issues of patients suffering personal harm while undergoing care, the burden on the health system encompasses additional treatment, prolonged hospital length of stay, disability and deaths. Very profound, too, are the associated phenomena of 'second', 'third' and 'fourth' victims, representing further impact on the involved healthcare professionals, the hospital's reputation, and patients who may subsequently be harmed, respectively [3]. Reports from the Irish National Adverse Events Study (INAES) indicate that about 7% of healthcare adverse events contribute to death, while as much as 70% of these events were considered preventable [4]. Surgical procedures probably account for the majority of healthcare adverse events [5-7]. Recent global estimates suggest that over 7 million people suffer surgical complications annually, with over 1 million deaths [8]. Patient safety culture is a product of individual and group values, attitudes, perceptions and competencies that determine a pattern of behavior and commitment to the safety of patients.
Despite the established link between the level of patient safety culture in healthcare environments and patient outcomes [9-11], safety culture in the operating room (OR) has not received significant attention. Unfortunately, too, the few available studies suggest that poor safety culture in the OR is pervasive. In one multicentre study, poor scores were reported in all 10 dimensions (composites) of patient safety culture evaluated [12]. While not losing sight of the numerous studies on other work areas in the hospital, the variability of safety culture even across units within a hospital makes extrapolations from these other settings untenable [13]. The increasing interest in patient safety over the past two decades has led to the development of several tools for assessing safety culture and climate in healthcare. Among these, the Hospital Survey on Patient Safety Culture (HSOPSC) [14] and the Safety Attitudes Questionnaire (SAQ) [15] are the most utilized. A direct comparison of the SAQ and the HSOPSC, by simultaneous administration of both tools to healthcare workers (HCWs), concluded that the reliability of the two instruments showed marked similarity [16]. However, based on an analysis of several safety culture assessment tools, Flin et al. adjudged the HSOPSC to be the most psychometrically sound [17]. Earlier large-scale studies in healthcare organizations indicate that frontline personnel's perceptions of safety climate were superior to management's perceptions in predicting the risk of adverse outcomes [18,19]. The perception of frontline personnel regarding the safety culture in the OR could therefore present a reliable basis for evaluating and improving the safety of surgical patients. Our objective was to assess the current state of patient safety culture in the OR of our hospital, identifying strengths and areas that require improvement. We also sought to compare our safety culture with OR settings elsewhere. This process is in tandem with the Council of Europe's recommendation on the management of patient safety, to the effect that defining the existing safety culture in the organization is the first stage in developing a safety culture [20]. Methods This is a cross-sectional, descriptive, paper-based survey of a purposive sample of frontline operating room personnel in a regional trauma and burns centre. Study setting The 400-bed referral centre, which was established in 1973, serves a population of about 80 million encompassing the Southeast geopolitical zone of Nigeria, where it is located, as well as the South-south and North-central zones. It is one of the three tertiary hospitals in the city, located within a 10-min drive from the Akanu Ibiam international airport, Enugu. The hospital also provides care to a substantial number of secondary, and even primary, care patients. There are six operating theatres on-site, at different locations. The surgical specialties include trauma, orthopaedics, spine, plastic and reconstructive surgery, and burns care. Our OR personnel typically consist of the resident surgeons (with or without the consultant surgeon), the nurse anaesthetist (with or without the consultant anaesthetist), the perioperative/circulating nurse, the operating room attendant, and sometimes the radiographer. Anaesthesia service in the institution is essentially nurse-based, with one resident doctor and three consultant anaesthetists.
The HSOPSC was designed by the United States Agency for Healthcare Research and Quality (AHRQ) in 2004, for the purpose of measuring patient safety culture in individual health institutions [14]. It is a self-reported tool designed specifically for HCWs, requesting their opinions about the culture of patient safety at their hospitals. It proposes the assessment of 12 dimensions/composite measures pertaining to the climate of patient safety in the hospital setting. The culture of safety is measured from the staff perspective. The HSOPSC consists of 42 items distributed among the 12 dimensions, namely: overall perception of patient safety; teamwork within units; teamwork across units; supervisor/manager expectations and actions promoting safety; organizational learning-continuous improvement; executive management support for patient safety; feedback and communication about error; communication openness; frequency of error reporting; staffing; handoffs and transitions between units and shifts; and non-punitive response to error. The answers to the items are scaled from 1 to 5; answers 1 and 2 were considered negative towards patient safety, 3 neutral, and 4 and 5 positive. Of the 42 items, 17 were negatively framed for psychometric balancing, with the answers reverse-scored prior to recoding into positive, neutral or negative. In effect, whereas "agree" and "strongly agree" are ordinarily positive responses, in the negatively framed items "disagree" and "strongly disagree" represent the positive responses. In sum, higher values always indicate better perceived safety culture. The composite scores were expressed as the mean percentage of positive answers to the items within each dimension/composite. The overall average composite score was determined as the average of the 12 composite scores. In addition to the 42 items, there was also one item each on the respondent's perception of safety quality in their work area and the number of adverse events they had reported in the past 12 months. Six other items sought information regarding the respondent's service background. The English version of the survey instrument, SOPS® Hospital Survey Version 1.0 [14], obtained online, was used for the paper survey. Modification of the instrument A minor modification was deemed necessary in order to facilitate effective communication and comprehension of one item in our cultural background. This was effected in Section F, item 3: "Things fall between the cracks when transferring patients from one unit to another" was changed to "Things escape attention when transferring patients from one unit to another", as the former is an unfamiliar phrase in our environment. Such a minor modification, which minimally impacts the psychometric properties of the instrument, is in compliance with the instrument guideline [14]. Inclusion criteria All operating room frontline personnel (attending surgeons, resident surgeons, nurse anaesthetists and perioperative nurses) who had been in active clinical service in the hospital for a period of not less than six months were eligible to participate in the study. Exclusion criteria Eligible but non-consenting OR personnel were excluded. New employees who had spent less than 6 months in the hospital's employment were deemed ineligible and consequently excluded.
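To make the recoding scheme described above concrete, the following is a minimal sketch of how the reverse-scoring of negatively framed items and the percent-positive calculation could be implemented. It is only an illustration: the study itself used SPSS, and the item codes, toy responses, and choice of pandas are assumptions, not details from the paper.

```python
import pandas as pd

# Hypothetical item codes for illustration only; the real instrument has
# 42 items, 17 of which are negatively framed (e.g., Section F item 3).
ITEMS = ["A1", "A5", "F3"]
NEGATIVELY_FRAMED = {"A5", "F3"}  # assumed subset, not the instrument's actual list

def recode(df: pd.DataFrame) -> pd.DataFrame:
    """Reverse-score negatively framed items so that, after recoding,
    4-5 always count as positive, 3 as neutral, and 1-2 as negative."""
    out = df[ITEMS].copy()
    for item in NEGATIVELY_FRAMED:
        out[item] = 6 - out[item]  # 1<->5, 2<->4, 3 stays 3
    return out

def percent_positive(item_scores: pd.Series) -> float:
    """Percentage of answered responses that are positive (4 or 5)."""
    answered = item_scores.dropna()  # double-marked answers would enter as NaN
    return 100.0 * (answered >= 4).sum() / len(answered)

# Toy responses from three respondents.
raw = pd.DataFrame({"A1": [5, 4, 2], "A5": [1, 2, 5], "F3": [2, 1, 3]})
recoded = recode(raw)
print({item: round(percent_positive(recoded[item]), 1) for item in ITEMS})
```

On this toy data each item comes out at 66.7% positive; with real responses, the composite score for a dimension would then be the mean of the item percentages within that dimension, and the overall average composite score the mean of the 12 composites.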
Sampling method and respondent selection The questionnaires were distributed to the entire population of eligible OR frontline personnel who consented to participate in the survey. The sparse numbers of physician anaesthetists and radiographers precluded them from consideration in the survey, in line with the instrument guideline [14]; this recommendation is in consideration of the need to protect the confidentiality of respondents. On the other hand, the operating room attendants in our setting, who are less well-educated, had considerable difficulty comprehending the instrument items during pretesting and were thus excluded. Study procedure The eligible respondents were invited to participate in the survey on the understanding that their participation was voluntary and that they were at liberty to withdraw their consent at any stage, if they so wished. The paper survey instrument was distributed in person by the research assistant to all the consenting HCWs at their duty posts, for self-administration. The questionnaires were anonymized by leaving no identification code and by distributing them enclosed in brown envelopes. Each respondent was urged not to discuss their responses with other staff, with further assurance that their responses would be kept confidential. In order to enhance capture, the nominal roll obtained for each of the relevant units was used to tick off the respective staff during distribution and collection of the questionnaires. Questionnaire administration lasted until all those who were absent during the initial phases of distribution, on account of shift duty or short periods of leave, were captured. Owing to busy duty schedules, the questionnaires were left with the staff to fill in at their earliest convenience, while the research assistant made repeat visits to further distribute and recover the completed questionnaires. Each of the three groups of HCWs was surveyed sequentially, and the returned surveys were marked with a group identifier to ensure that no false claim of group identity occurred; one group was followed by another, in the sequence nurse anaesthetists, perioperative nurses, and surgeons. The survey was conducted from February 7, 2022 to March 11, 2022. Data management This paper survey did not use individual identifiers. Instead, group identifiers were marked on the returned surveys of each of the three categories of HCWs. Later, all the completed paper surveys were marked with identification numbers to serve as respondent identifiers, but without any information linking the identifiers to individual workers. The respondent and group identifiers were reflected as such in the electronic data file. All data entry was accomplished using the IBM SPSS version 25 statistical package. The obtained data were illustrated using tables and bar charts. The percent positive scores for each item of safety culture, as well as the composite scores, were computed from the scaled responses of the HCWs. For each item or composite measure, percentages greater than 75% are considered strengths, indicating a positive perception of patient safety culture, while those ≤ 50% indicate weak perception of safety culture and require improvement [21]. Comparison of mean scores between the three study groups was done using the one-way analysis of variance (ANOVA) test, while inter-group comparisons were done using the Tukey HSD post hoc test. A difference was considered significant when p < 0.05.
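As a rough illustration of this analysis pipeline, the sketch below aggregates item-level percent-positive scores into dimension composites, applies the 75%/50% classification thresholds, and compares the three professional groups with a one-way ANOVA followed by a Tukey HSD post hoc test. The study itself ran these analyses in SPSS; the Python libraries are stand-ins, and the item-level and per-respondent numbers are invented (chosen only so the two composites reproduce the reported 79.6% and 17%).

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical percent-positive item scores, grouped by dimension.
dimensions = {
    "Teamwork within units": [91.8, 77.0, 74.5, 75.1],
    "Non-punitive response to error": [17.2, 20.1, 13.7],
}

for name, item_scores in dimensions.items():
    composite = np.mean(item_scores)  # composite = mean of item percentages
    if composite > 75:
        label = "strength"
    elif composite <= 50:
        label = "needs improvement"
    else:
        label = "neutral"
    print(f"{name}: {composite:.1f}% ({label})")

# Hypothetical per-respondent composite scores for the three roles.
surgeons = [40, 42, 38, 45, 41]
nurse_anaesthetists = [55, 58, 52, 60, 57]
perioperative_nurses = [48, 50, 46, 52, 49]

f_stat, p_value = stats.f_oneway(surgeons, nurse_anaesthetists, perioperative_nurses)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05

scores = surgeons + nurse_anaesthetists + perioperative_nurses
groups = (["surgeon"] * 5 + ["nurse anaesthetist"] * 5
          + ["perioperative nurse"] * 5)
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))  # pairwise post hoc comparisons
```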
Research Ethics The study protocol for the survey was reviewed and approved by the Research Ethics Committee of the National Orthopaedic Hospital, Enugu (IRB Number S.313/IV/; Protocol Number 2022/1/103). Only consenting eligible HCWs were recruited, having duly signed a written informed consent form. Results Out of the 132 eligible and consenting personnel who were invited to the survey, 122 surveys were returned, yielding a response rate of 92.4% (122/132). Only three eligible OR personnel were not invited: one on account of maternity leave, while two others were on annual leave. None of the returned surveys was excluded from the analysis, as all were duly completed and deemed eligible, save for the few items to which some respondents did not respond. Twenty-six surveys (21%) did not have complete answers to all the items but were utilized in the analysis. The incomplete surveys mostly had only one missing answer, but one survey had as many as 28 missing answers. Where a respondent marked two answers for one item, such an inappropriate response was treated as missing/no response. Complete responses were provided in 96 (79%) of the surveys. One hundred and twenty-two (122) HCWs, comprising consultant surgeons (11), resident surgeons (48), nurse anaesthetists (27) and perioperative nurses (36), participated in the study. Over 93% of the respondents had worked in the hospital for more than a year, while 78.7% had worked in their current unit for more than a year and 95.1% had worked in their current specialty for more than one year. About 11% of the respondents reported working for more than 100 h per week (Table 1). The item with the highest percentage positive response was "People support one another in this unit" (91.8%), while "Staff feel free to question the decisions or actions of those with more authority" received the lowest percentage positive response (10.2%) (Table 2). Only five (A1, A3, A4, A6, B4) out of the 42 items of the HSOPSC instrument were perceived by the OR personnel as areas of strength regarding safety culture (Table 2). The composite with the lowest average percentage positive score was non-punitive response to error, at 17%. This composite, along with communication openness, feedback and communication about error, frequency of events reported, handoffs and transitions, and staffing, was perceived by the HCWs as having a weak safety culture (composite score ≤ 50%) and therefore needing improvement. Out of the 12 composites, teamwork within units had the highest average percentage positive score, at 79.6%, and was the only area of demonstrable strength (composite score > 75%) (Table 3). The derived overall average composite score was 47%, indicating that, overall, the HCWs have a weak perception of the safety culture in the hospital, necessitating improvement. In this study the professional roles of the surgeon, nurse anaesthetist and perioperative nurse correspond with the work areas of surgery, anesthesiology and perioperative nursing, respectively. There were significant differences between the various work areas'/professional groups' perceptions of safety culture in as many as seven out of the twelve composites/dimensions, with the nurse anaesthetists having the highest perception of safety culture and the surgeons the least (Table 4).
A Tukey HSD post hoc test for inter-group comparison further highlighted the relative positions of the various professional roles regarding the respective composites (Table 4). The perceptions of safety culture by the various work areas/professional groups regarding the other five composites were similar. As much as 85.2% of the respondents (104/122) did not report any adverse event in the past 12 months (Table 5). Only 38.2% of the respondents (44/115) regarded the patient safety quality in their own work area as very good or excellent (Table 5). Figure 1 summarizes the percentage scores of the patient safety composites and the overall average composite score in our operating rooms, alongside those from other countries where the HSOPSC instrument was used for safety culture assessment. The comparative surveys were conducted among OR personnel in five Tunisian hospitals [12] and in a single-center survey each in Norway [22] and the United States [23]. The overall average composite score of patient safety culture in our ORs was 47%, compared to 29.5% in the Tunisian ORs, 47% in Norway and 48% in the United States (Fig. 1). Compared to the ORs in Tunisia, Norway and the United States, our ORs had the lowest scores in the dimensions of "non-punitive response to error" (17%) and "communication openness" (27.4%). Discussion This study investigated the safety culture in the operating rooms of a Nigerian referral hospital using the HSOPSC. The overall average composite score was 47%, but dimension scores ranged from 17% for non-punitive response to error to 79.6% for teamwork within units. Very few publications have evaluated patient safety culture in the operating room; among them are a Tunisian multicenter study [12], a Norwegian single-institution survey [22] and an American single-institution survey [23]. A few others utilized survey tools other than the HSOPSC [24-26], but comparisons with these are not tenable owing to differences in the factor components. The high response rate of 92.4% obtained in our study may in part derive from the paper-based mode of the survey, the single-site location and the size of the sample population. It compares well with the 70.8% response rate of the paper-based multicenter survey that targeted 544 OR staff in five Tunisian hospitals [12]. The Norwegian survey, which utilized a mixed distribution method (web and paper modes) in assessing patient safety culture among 575 OR staff, reported a response rate of 62% [22], whereas the online survey that evaluated patient safety culture among 431 OR staff in a United States hospital recorded a response rate of 67% [23]. Paper-based surveys yield higher response rates compared to web-based surveys, making them less prone to non-response bias and more reflective of the sample population [27,28]. The overall average composite score of patient safety culture in our ORs compares well with that of operating rooms in Norway [22] and the United States [23], and exceeds that in Tunisia [12]. Nigeria currently has no national policy on patient safety. However, the implementation of the WHO Surgical Safety Checklist (SSC) in our hospital since 2013 may have impacted the safety culture, despite the obvious infrastructural and socioeconomic constraints in our environment. Kawano et al. had earlier documented the positive effect of WHO SSC implementation by surgical teams on safety attitudes and climate in the hospital setting in Japan [29].
With the implementation of a National Patient Safety Campaign in Norway (2011-2013), which had SSC compliance rates at the hospital level as a quality indicator, a longitudinal cross-sectional study was conducted in a large Norwegian tertiary hospital to evaluate its impact on safety culture by comparing the pre- and post-intervention safety culture perceptions among OR personnel [30]. That study revealed that the introduction of the WHO SSC brought about improvement in all the patient safety culture composites, and that the compliance rate in the use of the SSC correlated positively with improvements in the safety culture composites/dimensions. We observed wide disparity across the patient safety composites, the scores being highest in teamwork within hospital units and lowest in non-punitive response to error. Teamwork within hospital units defines the extent to which the respondents in a unit support each other, treat each other with respect, and work together as a team. This composite was the strongest in our study as well as in the studies conducted in the United States [23,31] and Tunisia [12]. Non-punitive response to error defines the extent to which the respondents feel that their mistakes and event reports are not held against them and that mistakes are not kept in their personnel records. The very low score is indicative of a prevalent culture of blame. Blame, and the fear of blame, have been recognized as constituting a pernicious impediment to patient safety, as they are associated with lack of trust and a poor reporting culture [32]. The poorest perception attributed to this safety culture composite in our study was shared by other studies [12,31], but contrasts sharply with the Norwegian ORs, where it was the strongest composite [22]. The high score of this composite in the Norwegian ORs could be reflective of an enduring "system approach", as against the more prevalent "person approach", to error management [32]. The very low perception regarding non-punitive response to error in our ORs corresponds with an equally low tally for frequency of events reported, both signifying a poor reporting culture. Compared to the ORs in Tunisia, Norway and the United States, our personnel had better perceptions of seven patient safety composites: teamwork within units, teamwork across units, organizational learning-continuous improvement, management support for patient safety, supervisor actions promoting safety, handoffs and transitions, and feedback and communication about error. None of the three studies considered from the previous literature could pride itself on any area of strength with respect to the 12 patient safety dimensions [12,22,23]. Equally dismal were the findings of a recent survey of five cardiovascular surgical centers in the United States, further alluding to pervasive poor safety culture in ORs [33]. The safety culture perception in our ORs is comparable to, and arguably better than, that in the ORs cited in Norway and the United States, despite the huge socioeconomic disparities. It would thus appear that factors beyond the socioeconomic milieu, such as the implementation of the WHO Surgical Safety Checklist (SSC) in our hospital, may have played a positive role in the perception of patient safety culture by the OR personnel. The impact of such protocol implementation on safety attitudes is documented, as stated earlier [29]. Thus, even resource-poor environments can have a patient safety culture comparable to, or better than, that of settings with economic advantages.
In the light of suggestions that variations exist in the perception of safety culture by HCWs with different professional backgrounds [33], we conducted a subanalysis of the responses based on professional roles and work areas. We observed that the various categories of personnel in the OR rated safety culture differently. Our finding is supported by the Norwegian study, wherein anesthesiologists and nurse anaesthetists had higher mean scores than the surgeons and operating theatre nurses [22]. Similarly, the Tunisian study reported that physicians rated the safety culture of operating rooms lower than the paramedical staff (nurses, anaesthetic and surgical technicians, nurses' assistants) in most of the dimensions [12]. The lowest perception of patient safety among the surgeons implies that they were the least optimistic about the existing safety culture. An international survey on safety culture and attitudes among spine professionals had earlier revealed that most of the respondents believe the surgeon is responsible for both the prevention of adverse events and the improvement of the safety culture in the operating room [25]. Such a mindset could prompt a more critical appraisal of patient safety among the surgeons compared to the perioperative nurses and the nurse anaesthetists. Remarkable variation in perception between the different categories of personnel has also been reported by studies on safety climate conducted with the SAQ among OR personnel in Brazil and Sweden [34,35]. As much as 85% of our respondents made no report of adverse events over the past 12 months. The Tunisian study, which presented data on adverse event reporting, likewise declared that 90.2% of its respondents had reported no adverse event in the past 12 months [12]. It is likely that events were underreported in these settings and that several potential patient safety problems may not have been recognized and addressed, posing further danger to patients. The systems approach has been recommended and effectively implemented in error management in high-risk industries like aviation [36]. However, its application in healthcare is still constrained, while the persisting 'culture of blame' imposes wanton administrative, professional and legal liabilities on HCWs for medical errors [37]. This culture arguably contributes to the persistently poor scores in the dimensions of non-punitive response to error and frequency of adverse events reported. Furthermore, the decision of HCWs to report errors is influenced by their proneness to shame and their perception of the organizational attitude towards restoring their self-image [38]. This would suggest that both a non-punitive response to errors and management support for patient safety would enhance error reporting, which is a crucial process in medical error management and patient safety. The 2021 User Comparative Database Report for Version 1.0, obtained from 191,977 hospital staff in 320 hospitals in the United States who were surveyed between December 2017 and October 2020, recorded a much better overall average composite score of 65%, compared to the 47% of our study [31]. But a major flaw in comparing OR patient safety culture reports with the 2021 User Comparative Database Report for Version 1.0 of the United States derives from the fact that the latter surveyed hospital-wide personnel encompassing administrative staff, rehabilitation, medicine, pharmacy, et cetera.
Interestingly, respondents in the work areas of anesthesiology and surgery constituted only 1% and 11% of the surveyed population, respectively, suggesting that the majority of the respondents were non-OR personnel. With such differing characteristics in work area and staff position, the perceptions of the respondents in the latter report could not be adjudged to represent the culture of the OR environment, in view of the known professional and work area-related disparities in safety culture perception. For instance, the work area characteristics of the 2021 User Comparative Database Report revealed that respondents from the rehabilitation section (work area) had the highest average composite score of 72%, while respondents from administration (staff position) had the highest average composite score of 78%. These were much higher than the composite scores attributed to anesthesiology, surgery, attending surgeons, residents and registered nurses, who characterize the OR environment. Moreover, as much as 22% of the respondents in the comparative database had no direct interaction with patients. Several broad-based initiatives have been embarked upon by governments to improve patient safety systemically and specifically. In Sweden, patient safety got a boost in 2011 with the enactment of the Patient Safety Act and the implementation of government-supported financial incentives for patient safety actions in healthcare facilities, including safety culture improvement [39]. In Denmark, too, legislation on patient safety was passed in 2003 and sought to improve patient safety by ensuring that (i) frontline personnel report all adverse events, (ii) hospitals act on the reports, and (iii) the National Board of Health disseminates learning from them, while protecting the personnel from disciplinary investigations and legal sanctions [40]. In alignment with the above initiatives, 'improvement in organizational culture to encourage reporting and avoid blame' received the strongest recommendation among 22 suggested options for enhancing patient safety rated by Swedish patient safety-oriented healthcare professionals, whereas increasing the numbers of physicians, nurses, and hospital beds was rated 12th, 15th and 17th, respectively, and 'increased penalty for personnel who make mistakes' received the least recommendation [39]. The HSOPSC survey, like the other patient safety instruments, measures abstract phenomena, termed composites/dimensions, from self-reported perceptions of safety culture and attitudes. Such models facilitate data reduction by means of orderly simplification of a number of interrelated measures. The use of instruments with sound psychometric properties is thus critical, since the multiple items measured are presumed to represent the fewer underlying constructs. The HSOPSC has been validated in over 62 studies conducted in over 29 countries [41]. However, in spite of its popularity and wide application, its psychometric properties have been challenged, with some researchers advocating revision of some of the instrument's items and composites [22,41]. A revision of the original Hospital Survey on Patient Safety Culture Version 1.0 survey has recently been released by the AHRQ (HSOPS 2.0) [42]. Furthermore, as the survey is based on self-reported perceptions, the potential for response bias cannot be ruled out.
Our assessment of patient safety culture in the OR was not comprehensive, as the very small number of physician anaesthetists precluded them from the study (in line with the instrument guideline), while health attendants were not considered owing to their poor comprehension of the instrument. Nevertheless, the surveyed personnel represent over 85% of the OR staff and could justifiably be deemed representative. Although the English language is the lingua franca in Nigeria, we did not conduct cultural adaptation and further validation of the original English version of the HSOPSC, which was developed within the American cultural environment, to ensure equivalence of meaning for this cross-cultural research. Hence, whereas we did not substantially alter the original validated English version in ways that would contribute to psychometric distortion, the results we obtained may not accurately reflect what they are supposed to measure. Thus, we concede that the use of a previously validated instrument does not necessarily imply validity in another culture or context [43,44]. However, such limitations, which may arise from variations in the psychometric properties of measurement instruments, are a common feature of cross-cultural research, including research conducted with the HSOPSC [45]. It must also be acknowledged that the multiplicity of other factor models proposed for the HSOPSC instrument in different studies, such as the 11-factor [22], 10-factor [46], 9-factor [47] and 8-factor [48] models, complicates the process of comparing outcomes. Our study instituted only a minor modification of item F3, by rewording it as indicated in the methods section; such a minimal change is permissible by the instrument developers [14]. The French validated version of the Hospital Survey on Patient Safety Culture questionnaire used by Mallouli et al. comprised only 10 composites with 45 items [12], as against the 12 composites and 42 items in the original version, which we used. In view of such variations, which are rather common among the different adaptations of the instrument, direct comparison of the results of different surveys demands circumspection. Recommendations It is hoped that the implementation of the relevant interventions that this study has spurred will bring about improvement in the safety culture of our ORs, as has been observed in follow-up studies conducted in Saudi Arabia [49], Japan [29] and Norway [30]. Furthermore, with this benchmark, a follow-up survey to evaluate the outcome of the implemented interventions would be necessary, and is highly recommended. Conclusion So far, despite subtle variations in the versions of the HSOPSC questionnaire used in the different studies, our study appears to be the first to record even one area of strength across the composites of patient safety culture. With a low overall average composite score and as many as half of the composites requiring improvement, our OR safety culture could be adjudged weak, its comparative strengths notwithstanding. The finding is disconcerting owing to the association between a weak patient safety climate and poor patient outcomes. The picture of safety culture emanating from the ORs discussed herein is worrisome and may indeed be a major contributor to the gloomy statistics of surgery-related morbidity and mortality globally.
2022-11-30T14:39:59.676Z
2022-11-29T00:00:00.000
{ "year": 2022, "sha1": "a1b79960fd04959b0096489a7f5bc60225ec3cba", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "a1b79960fd04959b0096489a7f5bc60225ec3cba", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }