Perception of the Relevance of Soil Compaction and Application of Measures to Prevent It among German Farmers

Abstract: Intensive field traffic and high axle loads can lead to soil compaction, with ecological and economic consequences. However, the relevance of this issue among practitioners is largely unknown. Therefore, the aim of this study was to determine the relevance of this issue for farmers in Germany, whether and which mitigation measures are applied to avoid it, and what a (non-)application might depend on. We conducted an online survey among farmers in Germany in winter 2017/2018. For the majority of the respondents, soil compaction is a relevant issue on their own farm, and an even higher share considers this issue important for Germany as a whole. To prevent or avoid soil compaction, 85% of the participants apply agronomic measures, 78% tyre/chassis measures, and 59% planning/management measures. Farm size, tractor power, working full- or part-time, the estimated relevance of soil compaction for Germany, and the estimated yield loss were positively associated with the application of management measures. The insights gained suggest that more effort is needed to sharpen farmers' perceptions regarding soil compaction in order to generate demand-oriented and practice-oriented recommendations for action for various target groups and thus promote the application of soil-conserving measures on a broad scale.

Introduction

Background

Soil compaction of arable soils is caused by intensive field traffic on wet soils under unfavourable weather conditions [1-4]. Soil compaction leads to decreased porosity and a changed pore size distribution and disturbs the water and gas regime of soils [5,6]. It also reduces hydraulic conductivity and increases bulk density, which can cause floods [7], disturbs biological processes in the soil, promotes nitrous oxide (N2O) emissions [8-11], and reduces crop growth [12,13]. Soil degradation caused by compaction is receiving increasing attention from policymakers, as it is considered a major soil threat in Europe [14]. Among suggestions for efficient soil management, preventing soil compaction is one of the key objectives for the future Common Agricultural Policy (CAP) [15] and is seen as one lever to achieve the goals of the European Green Deal [16].

Measures to Prevent or Mitigate Soil Compaction

In Germany, crops such as silage maize and sugar beet are especially associated with high machinery loads during harvest in late summer/autumn, when the weather is rainy and soils have a high moisture content and are therefore susceptible to compaction [17]. The area under silage maize and sugar beet accounts for up to 20-34% and 14-24% of arable land in individual districts, respectively [18]. Additionally, a large amount of liquid manure is applied to the fields in spring, when soils can be even wetter than in autumn. With climate change, drier summers and wetter winters are expected for Germany [19]. This also brings drier soils in summer, but the regional expression of this is associated with considerable uncertainties [20]. Dry conditions in summer could be beneficial for wetter regions in terms of the number of trafficable days [21]. However, this has not been demonstrated for Germany so far. To prevent or mitigate soil compaction, farmers can choose between a variety of mitigation measures, including agronomic, technical, or management measures [22].
Agronomic measures include, for example, the cultivation of cover crops, direct seeding, or reduced and non-turning tillage without ploughing. These measures have a rather indirect effect on the prevention of soil compaction by stimulating soil biota, thereby improving aggregate stability and thus the resilience of soils [2,23-25]. As a further side effect, the number of machine passes is reduced and the formation of a so-called "plough sole" is avoided. They are referred to as indirect measures for the purposes of this paper because they are not primarily applied to prevent soil compaction. The technical measures include tyre variations and configurations such as wide tyres or twin tyres, technical options to adapt the tyre inflation pressure, and chassis options such as rubber tracks or crab steering. These measures increase the contact area or decrease the number of wheelings and thus the associated soil pressure [26]. Information on the functionality, advantages, and disadvantages of these measures, in the form of manufacturers' recommendations, practitioner reports, or articles in agricultural journals, is widely available (e.g., [27-32]). The management measures include separating street and field transport during harvest and manure spreading, and adapting the machine utilisation scope to the trafficable period of the soil. When separating street and field transport, the tyre pressures of the transport vehicles on the field are adjusted to the respective requirements (low tyre pressure for soil protection). For this measure, an additional transport vehicle is needed, which causes additional operational costs. When adapting the machine utilisation scope, the machine is deliberately not planned at 100% utilisation. The 100% utilisation scope of a beet harvester, for example, would be 1000 ha per year over 10 years of utilisation. This way, the highest machine efficiency and thus the lowest machine costs per ha are achieved. If, instead, a utilisation scope of 70% is planned, farmers can react flexibly to weather conditions and are not under pressure to use the machine under any conditions; in this case, the machine costs per ha increase. There is much less information available on these measures, and it is provided rather by official bodies (e.g., [33,34]).
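To make this cost trade-off concrete, the following minimal sketch computes the fixed machine costs per hectare at full and at reduced utilisation scope, using the 1000 ha per year and 10 years from the example above; the purchase price and the restriction to straight-line depreciation are our own illustrative assumptions, not figures from the survey.

```python
# Minimal sketch: fixed machine costs per hectare at different utilisation scopes.
PURCHASE_PRICE_EUR = 600_000      # hypothetical price of a self-propelled beet harvester
USEFUL_LIFE_YEARS = 10            # utilisation period from the example above
FULL_SCOPE_HA_PER_YEAR = 1000     # 100% utilisation scope from the example above

def fixed_cost_per_ha(utilisation_scope: float) -> float:
    """Depreciation per hectare when only a fraction of the full scope is planned."""
    annual_depreciation = PURCHASE_PRICE_EUR / USEFUL_LIFE_YEARS
    annual_area_ha = FULL_SCOPE_HA_PER_YEAR * utilisation_scope
    return annual_depreciation / annual_area_ha

for scope in (1.0, 0.7):
    print(f"utilisation scope {scope:.0%}: {fixed_cost_per_ha(scope):.0f} EUR/ha")
# 100% -> 60 EUR/ha, 70% -> ~86 EUR/ha: the buffer for waiting on trafficable
# soil conditions is paid for through higher fixed machine costs per hectare.
```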
(Pro-)Soil Conservation Behaviour and Decision Making of Farmers

Thorsøe et al. [35] described subsoil compaction as a "wicked" problem. Contrary to tame problems, wicked problems are "ill-defined, ambiguous and associated with strong moral, political and professional issues. Since they are strongly stakeholder dependent, there is often little consensus about what the problem is, let alone how to deal with it. [ . . . ] they are sets of complex, interacting issues evolving in a dynamic social context" [36]. In the context of soil compaction, pragmatic trade-offs, technological barriers, knowledge deficits, and the outsourcing of responsibility are to be mentioned [35]. Furthermore, yield effects, and thus the direct economic consequences, largely depend on the soil type, the soil conditions at the time of wheeling, and the type of machinery [37]. For decisions on sustainable soil management as made by local actors, knowledge of the local soil properties and management is necessary. Moreover, each actor operates in an individual socioeconomic environment, which also needs consideration [38,39].

In the past, what farmers in industrialised countries know about soil compaction, how they perceive it, and what measures they implement to avoid or mitigate it received little attention from a scientific perspective. There are quite a number of studies on different aspects of sustainable land management in developing countries (e.g., [40-44]) but only a few in industrialised regions such as Central Europe. For Central European conditions, Reichardt and Jürgens [45] studied the adoption of precision farming in Germany and found technical challenges (e.g., data handling and interpretation, incompatibility between machines) to be the main barrier to broad adoption. Caffaro and Cavallo [46] found perceived, not further specified, economic barriers to have a negative effect on the application of smart farming technologies in Italy. Farm size, in contrast, had a positive effect on implementation. Tamirat et al. [47] showed for Germany and Denmark that farm size, age, and information/demonstration events significantly influence the decision of farmers to adopt precision agriculture. Regarding the acceptance of conservation measures in Germany, Sattler and Nagel [48] observed that the associated risk, the effectiveness, and the effort needed to implement a certain measure are equally or even more important than economic considerations. For a change in land management practices to avoid soil erosion in the UK, Boardman et al. [49] pointed out the importance of financial incentives as a motivator, in addition to socioeconomic influences. According to Barnes et al. [50], farm size and income had an influence on the adoption of precision agriculture technologies, but so did expectations of economic benefits from adoption and personal attitudes towards information and innovation. In the review of Bartkowski and Bartke [51] on decision making concerning soil management, economic considerations and pro-environmental attitude were found to be studied most often, and studies that reported a significant influence of these variables on decision making predominate. Concerning the effect of information and advisory services, Klerkx and Jansen [52] and Baumgart-Getz et al. [53] pointed out the important role of advisory services in capacity and awareness building for sustainable farming and management among farmers. Within the stakeholder groups from practice and from policy design and implementation, Prager et al. [54] identified advisory services as important players for the promotion of conservation measures. For the case of sustainable soil management in Europe in particular, Ingram and Mills [55] suggest that not all needs of farmers and advisors are met when it comes to pushing sustainable soil management forward.

Aim of This Study

In order to promote measures against soil compaction, e.g., by policy interventions or by information and education, it is of high importance to know how widespread such measures are and on which factors their application depends. With this knowledge, certain measures can be promoted in a targeted manner and the promotion can be designed in a target-group-oriented way. Moreover, knowledge of the perceived relevance of the issue among farmers, as the main decision makers, is of strong importance. From this, conclusions may be drawn about the type of interventions that can promote adoption. If the relevance is assessed as being high but adoption is low, suitable measures are probably lacking or unknown.
If the relevance is assessed as being low, it is possible that the relevance is actually low or that sensitivity to the issue needs sharpening. To the best of our knowledge, no scientifically based information is available on the perception of soil compaction as a relevant problem in Germany. The same applies to the adoption of measures to avoid it, because the technical and management measures described above are not included in any agri-environmental programme or agricultural survey. Thus, the aim of this study was to explore the perception and knowledge of soil compaction, to find out how widespread mitigation measures to avoid soil compaction are, and to identify possible variables that may determine the adoption of measures preventing soil compaction among German farmers.

Materials and Methods

Due to the lack of a complete and accessible contact list of farmers in Germany, we contacted as many farmers as possible to obtain a broad sample. We did this by distributing the invitation to the online survey through numerous channels, including articles in agricultural magazines, press releases of official institutions, interest groups, and magazines and announcements published by farmers' associations. In particular, by contacting agricultural magazines/media and farmers' associations in all Federal States of Germany, we aimed to obtain a regionally balanced sample (see Appendix A, Table A1 for the complete list). In addition, we offered non-cash rewards to increase the motivation for participation. The survey was active from February to April 2017. To conduct the survey, we used the software LimeSurvey.

The questionnaire consisted of 5 sections which addressed variables recognised in the literature to influence pro-environmental behaviour in a broader sense: 1. general information on the farm, 2. crop rotation and soil tillage, 3. perception of and measures applied to prevent soil compaction, 4. technical equipment and process organisation, and 5. use of consulting and information offers. We used five different question types. Single-choice questions were chosen for categories which were mutually exclusive. Multiple-choice questions were asked when a selection of expected answers was known but not mutually exclusive. Open-text/numeric questions were asked when the answers could not be anticipated or when a number was required. For personal assessments, a five-point rating scale was chosen. When a specification of categories was desired, the multiple- and single-choice questions were combined with open-text questions.

In total, the survey was accessed 285 times, of which 124 respondents dropped out before the questions of interest (Section 3). Of the remaining 161 observations, only those which reported practicing arable farming were included in the evaluation presented here, leaving 154 observations for the further analyses. Not all of them were complete; therefore, the number of observations considered for each question varies and is indicated accordingly.
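As a minimal sketch of this sample-selection logic, the filtering could look as follows; the file name and both column names are hypothetical placeholders, not fields of the actual LimeSurvey export.

```python
import pandas as pd

# Hypothetical export; file name and column names are illustrative only.
responses = pd.read_csv("survey_export.csv")                       # 285 accesses in total

# Drop the 124 respondents who quit before the questions of interest (Section 3).
reached_core = responses[responses["completed_section_3"] == True]  # 161 observations remain

# Keep only respondents who reported practicing arable farming.
analysed = reached_core[reached_core["arable_farming"] == "yes"]    # 154 observations remain

# Questionnaires may still be incomplete, so each question is later
# evaluated with its own number of observations (n).
print(len(responses), len(reached_core), len(analysed))
```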
To evaluate the variables influencing the application of measures, we adopted the scheme of Bartkowski and Bartke [51] and allocated the variables queried to the respective groups (Figure 1). In group (1), we included the variables education, function, age, and full-/part-time occupation. For group (2), we captured the variables problem perception and organic/conventional management as an indicator of environmental attitude. For group (3), we captured the variables farm size, share of rented land, machinery, crop rotation, and soil characteristics. For group (4), we recorded the variable use of advisory services, and for group (5), the variables estimated yield loss by soil compaction and farm diversification. It should be noted that the allocation of variables to the respective groups was partly subjective. For example, the variable full-/part-time occupation was allocated to "characteristics of the farmer" because it can influence focus and prioritisation in terms of how much time and money a farmer invests; another scientist could assign this variable to the "economic conditions" (see Appendix B, Table A2 for the questions, question types, and units).
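To illustrate this allocation, the mapping can be written down directly; the group labels follow the five groups named above (and the corresponding result sections), while the variable identifiers are our own shorthand, not survey field names.

```python
# Allocation of the queried variables to the five groups of the scheme of
# Bartkowski and Bartke [51]; identifiers are shorthand, not survey field names.
VARIABLE_GROUPS = {
    "objective characteristics of the farmer": [
        "education", "function", "age", "full_or_part_time",
    ],
    "behavioural characteristics": [
        "problem_perception", "organic_or_conventional_management",
    ],
    "objective characteristics of the farm": [
        "farm_size", "share_of_rented_land", "machinery",
        "crop_rotation", "soil_characteristics",
    ],
    "social-institutional characteristics": [
        "use_of_advisory_services",
    ],
    "economic conditions": [
        "estimated_yield_loss", "farm_diversification",
    ],
}
```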
We distinguished the applied measures, which we asked about as multiple-choice questions, into three groups. The first differentiation was made according to the effect on soil compaction, into direct and indirect effects. The second differentiation was made according to the type of measure. This resulted in a first group of "agronomic" measures with a more indirect effect on soil compaction. The second group consists of measures with a direct effect on soil compaction of the type "tyre/chassis", which are associated with a low planning effort (adjusting the internal tyre pressure), are well known (wide tyres), or are partly standard from the manufacturer (rubber tracks). The third group also consists of measures with a direct effect, but of the type "planning/management"; these are associated with a much greater long-term planning effort (adapting the machine utilisation scope) or a short-term, crop- and operation-specific management with additional machine capacity requirements (separation of field and street transport) (Table 1). For a deeper evaluation of the variables influencing the application of measures, we focused on the direct measures of the group "planning/management". We did so because these measures are less promoted and more complex than those of the group "tyre/chassis", have a somewhat innovative character, and are therefore subject to special consideration within this analysis.

Statistical data from the survey year (2017) were used to contextualise our dataset, but for some characteristics, the most recent data were taken from the Farm Structure Survey of 2016 (FSS 2016). We used descriptive statistics; additionally, for categorical data, the chi-square test at p ≤ 0.05 was used to evaluate significant differences between the observed and expected distributions of the groups "measure applied" and "no measure applied" among the tested variables. For numerical data, the t-test was used to assess whether the differences in the expression of the variables between the group applying direct measures of the type "planning/management" and the group not applying them were significant at p ≤ 0.05. The exact p values are provided at the appropriate places.
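Both tests can be carried out with standard library routines. The following sketch uses scipy.stats; the counts and samples below are invented example data, not our survey results.

```python
import numpy as np
from scipy import stats

# Chi-square test: observed counts of "measure applied" vs "no measure applied"
# within a categorical variable (here: full-time vs part-time; hypothetical counts).
contingency = np.array([[60, 31],    # full-time: applied / not applied
                        [17, 23]])   # part-time: applied / not applied
chi2, p_chi, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square p = {p_chi:.3f}")  # a difference counts as significant if p <= 0.05

# t-test: a numerical variable (e.g., ha of arable land) compared between the groups.
arable_applied = np.random.default_rng(1).normal(233, 190, 80)      # hypothetical sample
arable_not_applied = np.random.default_rng(2).normal(134, 120, 55)  # hypothetical sample
t, p_t = stats.ttest_ind(arable_applied, arable_not_applied)
print(f"t-test p = {p_t:.4f}")
```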
General Description of the Dataset

Out of the 154 observations, the largest proportion of respondents were from Lower Saxony (32%), followed by Bavaria (16%), Baden-Württemberg (8%), and North Rhine-Westphalia (7%) (Table 2). The remaining federal states were each represented with 1-5% of the respondents, except the city states Berlin, Hamburg, and Bremen as well as Saarland and Rhineland-Palatinate, with no respondents. The location was not specified by 20%. A comparison of this distribution with the real distribution of arable farms in Germany as captured by the FSS 2016 indicated that our dataset overrepresented Lower Saxony and underrepresented Bavaria [56]. The remaining federal states were quite well represented. With 86%, the majority of the participants were farm managers, 7% were family-member employees, 1% non-family-member employees, and 5% had another function or did not respond to this question. While the official statistics for Germany showed an employment rate of 48% full-time and 52% part-time (FSS 2016, [57]), the majority in our dataset were running the farm full-time (76%) and the smaller share part-time (22%); 2% gave no answer (Table 3). Thus, the group of full-time farmers was overrepresented in our dataset. The smaller share of participants, 13%, practiced organic farming and the larger share of 85% practiced conventional farming; 2% gave no information on this. For the year 2017, the official statistics reported that 11% of the farms in Germany practiced organic farming [58], which was quite well represented in our dataset (Table 3).

With 35% of the farm managers in our dataset having a university degree, this group was overrepresented compared to the official statistics for arable farms in Germany, with 9% (FSS 2016, [58]) (Table 3). The majority (68%) of the corresponding farms in our dataset had a total area of arable land between 50 and <500 ha; our dataset slightly underrepresented the farm groups below 50 ha and overrepresented the farms ≥50 ha (Table 4).

Table 4. Distribution of participating farmers in our dataset (Germany-wide survey: "Technical soil protection" 2017) and official statistics [58] according to arable land.

The mean area of cultivated arable land was 314 ha (standard deviation, SD = 193 ha), with 45 ha of grassland (SD = 7 ha); the most powerful tractor had a mean power of 182 hp (SD = 76 hp), and the share of rented land was 50% (SD = 30%).

Perception of Soil Compaction

To investigate the perception of soil compaction, we asked the farmers about the relevance of soil compaction for their own farm (n = 152) and for Germany (n = 153). For Germany, six participants answered "cannot judge"; for their own farms, none did so. In general, from "not relevant at all" to "very relevant" on a five-point rating scale, the number of answers increased more strongly for Germany than for the participants' own farms (Figure 2). Whereas 76% of the 152 participants who answered this question perceived soil compaction as "relevant" or "very relevant" (points 4 and 5 on the rating scale) for Germany, just 57% did so for their own farm. Conversely, 8% perceived soil compaction as "not relevant" or "not relevant at all" (points 1 and 2 on the rating scale) for Germany and 27% for their own farm. We cannot exclude the possibility that the stated high relevance of and sensitivity to soil compaction issues is a result of the recruiting procedure.
Therefore, we assume that "innovators" and "early adopters" are somewhat overrepresented. Around 60% rated the relevance higher for Germany than for their own farm and around 40% the other way around (Figure 3). In their study, Thorsøe et al. [35] detected similar patterns for Denmark: 77% of the respondents regarded soil compaction as a "high" or "considerable" risk for Danish farming, but only 39% for their own farm. There seems to be a gap between the individual and the overarching, collective concern. Since soil compaction is a difficult topic with complex underlying processes (a "wicked problem", as described by Thorsøe et al. [35]), one explanation could be that individuals underestimate their exposure as a kind of moral exclusion. Opotow et al. [59] described moral exclusion as a way to avoid the complexity and ambiguity of environmental problems. This moral exclusion leads to an underestimation of environmental threats to one's own land [60,61]. However, the results of our survey may have further explanations. Following the argumentation of Dessart et al. [61], perception is influenced by what others do or say, in other words, by the social system. Consequently, the stronger perception of soil compaction as a problem for Germany than for the participants' own farms can be seen as a result of social norms and expectations. This may in turn be reinforced by the increased media coverage of the issue of soil compaction in agriculture.

As a second indicator of the perception of soil compaction, we asked about the estimated yield loss due to, and the area affected by, soil compaction. This question was only posed to participants who rated the relevance of soil compaction for their own farm as 3 or higher (n = 106). The mean area affected was estimated to be 17% (median 10%) and the corresponding yield loss (n = 105) on the affected area to be 22% (median 20%) (Figure 4).
Above the 75% quantile, the mean area affected was 44%, with a mean estimated yield loss of 26%, higher than the overall mean. Below the 25% quantile, the values were 1% and 17%, respectively. When multiplying the share of affected, compacted area by the corresponding yield loss, the mean estimated "effective" yield loss was 3% (max. = 36%; min. = 0%). These results are in line with findings from Schleswig-Holstein, where farmers estimated 10% of their land to be affected by soil compaction but the estimated yield loss was higher, ranging from 5 to 9% [62]. Scientific research estimating the yield effects of soil compaction is diverse in terms of the investigated soils, crops, weather conditions, and machine configurations, and the results vary over a wide range along these factors. Keller et al. [7] and Chamen et al. [37] gave an overview of numerous individual studies in their reviews and reported yield effects due to soil compaction between −2.5 and −27% (mean = −11%, number of studies cited = 15) and between +12 and −47% (mean = −16%, number of studies cited = 35), respectively.
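The "effective" yield loss used above is simply the product of the two per-respondent estimates. A small sketch with invented answers (not our survey data) shows the computation and why it makes the estimates comparable:

```python
import numpy as np

# Hypothetical per-respondent estimates, as fractions of 1.
affected_share = np.array([0.10, 0.17, 0.44, 0.01, 0.30])          # share of farm area affected
yield_loss_on_affected = np.array([0.20, 0.22, 0.26, 0.17, 0.10])  # yield loss on that area

# Whole-farm ("effective") yield loss per respondent: area share x loss on that area.
effective_loss = affected_share * yield_loss_on_affected
print(f"mean effective yield loss: {effective_loss.mean():.1%}")
# E.g., a 50% loss on 1% of the area (effective 0.5%) is now directly
# comparable with the same loss on 20% of the area (effective 10%).
```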
To gain an insight into how farmers perceive soil compaction, we asked how they recognised that their fields may be affected. Out of 154 participants, 94 perceived soil compaction based on different indicators, which they were asked to name in a free-text question; multiple answers were possible. Of these, 50 participants named one indicator, 35 mentioned two, eight mentioned three, and one mentioned four indicators. Visual compaction phenomena were referred to most often (44 times) (Figure 5). The major statements in this indicator category were waterlogging on the field and visible traffic lanes in the field. Plant physiological indicators, such as growth depressions or restricted root growth, were mentioned 42 times, followed by other indicators which could not be clearly assigned to one of the other categories, such as plough sole or compaction, with 31 mentions. Economic indicators such as yield decrease or yield loss were mentioned 25 times. In the category pests and diseases, with two mentions, an increased abundance of field horsetail and fungal infection were specified. For soil biological indicators, also with two mentions, changes in soil life and fewer earthworms were mentioned.

Indicators can be distinguished into primary ones, which directly indicate the compaction itself, and secondary ones, which rather indicate its indirect effects. The indicators listed up to this point, except the category "others", describe possible secondary effects of soil compaction. Generally, secondary effects are easier to detect and more visible than primary effects [63,64]. The soil physical indicators, such as water storage or the formation of clods, with two mentions, and in situ measurements, such as spade, penetrologger, or soil penetrometer diagnosis, with six mentions, describe the primary effects of soil compaction (shown with hatching in Figure 5). Such in situ measurements can detect changes in bulk density, soil structure, and soil strength as a direct result of the process of soil compaction [63,65,66]. While these indicators are clearly measurable and scientifically based, the previously mentioned indicators of secondary effects are based more on perception and experience. Since this was a free-text question, the assignment of the answers to the respective categories, especially for the secondary effects, is subjective. Nevertheless, these indicators were mentioned clearly more frequently than those of the primary effects. We conclude that farmers either rely more on their perceptions and experience to identify soil compaction, or that easily applicable and comprehensible methods to verify these perceptions are lacking in practice or not known.

Applied Measures

The participants were asked what kind of measures they apply to prevent soil compaction. Multiple answers were possible and 154 participants answered the question. As for the indirect, "agronomic" measures, 85% reported using at least one of them. In total, 94% of the farmers applied at least one direct measure to prevent soil compaction, 78% applied at least one measure of the group "tyre/chassis", and 59% applied at least one measure of the group "planning/management" (Figure 6; for the grouping, see Table 1). The cultivation of cover crops was mentioned most frequently within the group of "agronomic" measures (75%). Within the "tyre/chassis" measures, soil-protecting tyres were named most often (78%). Adjustment of the internal tyre pressure (pressure adjustment with a tyre inflation system or quick exhaust valves for manual pressure control) was stated to be applied by 56% of the participants.
As the only available approximate estimate, Volk [30] put the number of users of tyre inflation systems in Germany at 10,000 in 2018. With 275,392 arable farms in 2016 (FSS [58]), this corresponds to a share of 4%. The adoption rate of quick exhaust valves, which we asked about in the same answer option, is probably much higher, as they are easier to retrofit on the tyre and cheaper; however, no information on this is available. Therefore, we cannot make a statement regarding the representativeness of our sample in this respect. Within the "planning/management" measures, adaptation of the machine utilisation scope was mentioned most often (32%), followed by the separation of street and field transport during manure application (27%) and during harvest (22%). These measures of the "planning/management" group are the ones addressed when talking about measures in the following chapters of this paper. In the evaluations, we focused on the comparison between the group that applied these "planning/management" measures ("measure applied", 59%) and the group that did not ("no measure applied", 41%).

Objective Characteristics of the Farm

Within the objective characteristics of the farm, we considered the variables total arable land, the power of the most powerful tractor, the share of rented land, the share of different crop groups within the crop rotation, the area share of different soil textures (light soils = predominantly sandy substrate; medium soils = predominantly silty/loamy substrate; heavy soils = predominantly clayey substrate), and the number of operations outsourced to contractors. The group of farmers "measure applied" cultivated 233 ha of arable land on average, and their most powerful tractor had a mean power of 204 hp (Figure 7a,b). In the group of farmers "no measure applied", these figures were 134 ha and 158 hp, respectively. The differences between the two groups of farmers were significant for these two variables (ha arable land p = 0.02; hp most powerful tractor p = 0.0001).
In the literature, the influence of farm size, here indicated by the area of arable land, on farmers' participation in environmental measures has been reported to be contradictory [67]. Wuepper et al. [68], for example, concluded that small family farms are not principally more sustainably oriented. Van Vliet et al. [69] stated that environmentally sustainable practices cannot be associated directly with farm size, and Novelli [70] supposed that farm size plays an important role in the decision making of farmers because it affects the opportunity costs that a certain measure entails. It can be argued that larger farms have greater capacity in terms of machines and manpower to implement complex "planning/management" measures.

The share of rented land in percent was slightly, but not significantly (p = 0.12), higher for the group "measure applied" (Figure 7c).
Caswell et al. [71] argued that farmers who lease fields for long periods feel responsible to the landlord or are afraid of being held responsible for possible damages; therefore, renters act the same as or similarly to landowners with regard to soil protection. A similar conclusion was drawn by Leonhardt et al. [72] for Austria, where tenure is seen as a long-term choice and rented land is therefore treated equally in terms of soil protection. For the variables area share of soil textures and share of crops, the differences between the groups "measure applied" and "no measure applied" were small, between 0 and 4% for soils and 0 and 5% for crops, and not significant, except for the share of forage grass (area share of soil textures: p = 0.47 (light soils), 0.26 (medium soils), 0.19 (heavy soils); share of crops: p = 0.36 (root crops), 0.25 (grains), 0.29 (maize), 0.05 (forage grass)) (Figure 8a,b). As soil texture is one of the most relevant factors (besides soil moisture at the time of wheeling and the loads applied) influencing the risk of soil compaction [73-77], we expected a differentiation in the application of measures according to the area share of light, medium, and heavy soil textures.
However, we cannot confirm an effect of the dominant soil texture. In particular, root crop (sugar beet or potato) and (silage) maize harvests involve heavy machinery, with harvest dates in late summer/autumn. In Germany, considerable rainfall often occurs at this time of year, making the soils susceptible to compaction. Therefore, we expected an effect of the crops grown but could not confirm any association.

In total, 130 participants answered the question regarding whether they engage agricultural contractors, and 80% of them do so. For specifications of the operations outsourced, multiple answers were possible. Among those who engage agricultural contractors, harvest was mentioned most often as outsourced (73%), followed by the application of liquid manure (56%), seeding (21%), others (18%, e.g., mulching or the application of solid manure), tilling (10%), pest control (6%), and mineral fertilisation (4%). There was no influence of the number of outsourced operations on the application of measures to prevent soil compaction. The outsourcing of operations is a crucial factor for the soil compaction risk on arable land, since "farmers partly lost control" [35,78] concerning the timing of fieldwork and the machine used and its configuration (e.g., internal tyre pressure). Von Buttlar et al. [62] reported that 91% of the farmers participating in a survey used agricultural contractors or machinery cooperatives, of which 43% stated that soil-protecting technology is "used" or "mostly used"; in 25% of the cases, it is "partly used", and in 33%, no such technology is used or it is not known. Besides this study, no information is available on the use of soil-protecting technologies among agricultural contractors. Since agricultural contractors play such a substantial role in minimising soil compaction on arable land, we suggest investigating in more detail how the topic is integrated in these companies in order to engage these stakeholders in soil conservation as well.

Objective Characteristics of the Farmers

To capture the objective characteristics of the farmers, we queried the highest level of agrarian education, age, their own function on the farm, and whether they run the farm full- or part-time. Within the group "measure applied" (n = 77), 44% were agricultural engineers/Master's degree holders, and within the group "no measure applied" (n = 55), this figure was 27% (Figure 9).

Figure 9. Percentage of agrarian education types in the groups "measure applied" (n = 77) and "no measure applied" (n = 55) (Germany-wide survey: "Technical soil protection" 2017).

The share of master training (in German, "Meisterabschluss") among all education types was 30 and 33% for the groups "measure applied" and "no measure applied", respectively. The share of farmers who were state-certified technicians was 9 and 4%, and the share who had formal agricultural training was 8 and 16%, in the group "measure applied" and in the other group, respectively. The chi-square test indicated no significance (p = 0.08) for the distribution of the degrees, even when aggregating university and non-university degrees before the statistical evaluation.
However, other studies found the level of education to be a critical variable influencing pro-environmental behaviour among farmers [79-81], and scientists are calling for more education, especially in the field of soil protection [82,83]. We suggest that our results do not follow this general pattern because an agricultural degree can be obtained in different ways in Germany: there is the possibility of studying agriculture at university, where (presumably) rather theoretical expertise is taught, or the option of formal vocational training, which is more focused on practical knowledge. Moreover, informal education in the sense of social learning has been reported to play a significant role in strengthening sustainable agriculture [84,85], as sharing information and learning in a group of peers can shift social norms [60]. To date, there are no studies on how the topic of soil compaction is included in the curricula of the different types of study and training in Germany. We consider that this open question needs illumination first in order to strengthen formal education in terms of soil compaction.

We asked the age by ranges (n = 133), with the result that the shares of the respondents within the respective ranges were only slightly shifted between the groups "measure applied" and "no measure applied". No significant (p = 0.82) difference was found for this characteristic, although younger people have been reported to display a higher level of environmental awareness [86]. On the other hand, it could be argued that older farmers apply more soil-conserving measures due to the experience and knowledge gained in their working life [87]. While Knowler and Bradshaw [88] found, in the wider field of conservation agriculture, both positive and insignificant correlations between adoption and experience, we found no significant connection here, assuming that age equals experience.

Further, we compared the groups "measure applied" and "no measure applied" according to the function (farm manager, not the farm manager) of those running the farm.
The largest share in our dataset (88%) were farm managers. Among the farm managers, a larger proportion applied measures than did not; non-farm managers showed the reverse trend, without significance (p = 0.16) for this variable (Table 5).

Table 5. Distribution between the groups "measure applied" and "no measure applied" according to participants' own functions within the farm (n = 151) and whether the farm is run full-time or part-time (n = 151).

A significant (p = 0.01) association between the groups "measure applied" and "no measure applied" and whether the farm is run full- or part-time was found (Table 5). Those who run the farm full-time were more likely to apply measures than those running the farm part-time. Of the 151 participants who answered the two previous questions, around half (48%) were farm managers who run the farm full-time. Murphy et al. [89] found that the more working time farmers spend on the farm, the more likely they are to participate in the Rural Environment Protection Programme.

Behavioural Characteristics

Behavioural characteristics describe, among others, the influence of the perceptions and attitudes of a farmer on decision making [61]. As an indicator of perception, we referred to the estimated relevance of soil compaction in Germany and on the participants' own farms (Figure 2). Those participants who estimated soil compaction as not relevant for Germany (points 1 and 2 on the rating scale) all belonged to the group "measure applied" (Table 6). Of those respondents who rated soil compaction for Germany as relevant (points 4 and 5 on the rating scale), around half applied the measures. The chi-square test suggested a significant (p = 0.001) association between the estimated relevance of soil compaction for Germany and the application of measures. Since the subsample rating the issue as not relevant for Germany was relatively small, this result should not be overinterpreted.
Table 6. Distribution between the groups "measure applied" and "no measure applied" according to the perception of soil compaction (sc) for participants' own farms, for Germany, and according to management.

In both groups, whether soil compaction on the participants' own farms was estimated as relevant or as not relevant, the majority of participants applied measures (59 and 67%), and the difference was not significant (p = 0.38). Even if the perception of environmental risks can influence the application of measures to prevent them [61], there was no unambiguous direction in our evaluation. Moreover, those participants who rated soil compaction as not relevant were somewhat more likely to apply measures to prevent it than the others. There are studies reporting positive effects of individual risk perception on the pro-environmental behaviour of farmers (e.g., [90,91]), no significant effect [92], and even a mismatch between risk perception and risk management strategies [93]. The expression of a perception involves a prominent psychological component, and other studies have already described discrepancies between perception and action [94] similar to those we found here.

As an indicator of an environmentally friendly attitude, we referred to whether the farm is managed conventionally or organically, assuming that organic farmers are more environmentally aware. However, among the conventional farmers, more participants applied measures (63%), and among the organic farmers, who were clearly a smaller subsample here, the majority of participants did not apply measures (60%); this difference was not significant (p = 0.06) (Table 6). This is in line with the study of McCann et al. [94], who found no clear indication that organic farmers have a higher environmental awareness, as they had previously hypothesised. Michel-Guillou and Moser [95] concluded that social variables had a greater influence on pro-environmental behaviour than environmental awareness. In fact, it is difficult to infer that organic farmers are less environmentally friendly from the result that they apply fewer of the measures considered. As McCann et al. [94] noted in their study, organic farmers achieve higher sustainability through a variety of measures in the areas of fertilisation, winter cover crops, and diversity of crop rotations.

Social-Institutional Characteristics

Around 35% (n = 54) of the participants claimed to use advisory services, 51% (n = 79) did not, and 14% (n = 21) did not answer this question. In the group "measure applied", more participants use advisory services; in the group "no measure applied", it is the other way around (Table 7). The differences in the distributions are not significant (p = 0.18).

Table 7. Number of participants who use or do not use advisory services in general and the corresponding numbers within the groups "measure applied" and "no measure applied". Use of advisory services (n = 54): "measure applied" 65% (35), "no measure applied" 35% (19).

The type of advisory service used was also asked about, and multiple answers were possible. Professional associations were mentioned 37 times (Figure 10).
Germany-specific professional associations such as GKB e.V. (society for conservation tillage), Bioland e.V. (association for organic farming in Germany), or DLG (German Agricultural Society) were mentioned most often, namely 37 times. This is followed by private advisory services, with 20 mentions; the chamber of agriculture, with 19 mentions; public authorities ("Offizialberatung" in German), with 18 mentions; and others, with 10 mentions. Within the category "others", the Swiss online tool Terranimo [96] was mentioned, as well as agricultural magazines.

Marx and Jacobs [97] concluded in their overview of official recommendations for action and advisory material concerning soil compaction in Germany that some of the existing recommendations at the national and federal state levels are partly difficult to access or out of date. Therefore, they advocated easier access to recommendations and advisory tools as well as a more target-group-orientated presentation and modern design. In our study, the professional associations were mentioned twice as often as the official state institutions. An alternative explanation is that organisations with an agricultural background are more likely to be seen as a reliable peer group and are therefore consulted more often [60]. However, it should also be noted that the advisory structure in Germany varies from region to region.
In Southern Germany, advice is mainly provided by official state institutions; in the north-west, it is mainly provided by chambers of agriculture; and in the east, private advisory services dominate [98].

Economic Conditions

In our survey, economic conditions were captured by the estimated yield loss and farm diversification. In total, 106 and 105 participants estimated the area affected by and the yield loss due to soil compaction, respectively (see Section 3.2). For comparison purposes, the surveyed yield loss and the affected area were multiplied because, otherwise, an estimated yield loss of 50% on a corresponding area of 1% could not be compared to the same yield loss on an estimated area of 20%. There was a significant difference in the estimated "effective" yield loss (estimated yield loss multiplied by the estimated share of affected compacted area) between the group "measures applied" and the group "no measures applied", with a mean of 6% and 3% yield loss, respectively. Therefore, we assume that the greater the estimated yield loss (and hence the level of one's own risk), the more likely farmers are to apply complex "planning/management" measures. The prerequisite for an appropriate reaction to a perceived risk is understanding and knowledge of possible interventions.
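The multiplication described above can be written as a one-line function. The following minimal sketch is illustrative only and does not reproduce the survey data; it works through the 1% versus 20% example from the text.

```python
# Minimal sketch of the "effective" yield loss indicator described above:
# the estimated yield loss on compacted patches multiplied by the estimated
# share of the farm area affected. Values are illustrative, not survey data.

def effective_yield_loss(yield_loss_pct: float, affected_area_pct: float) -> float:
    """Farm-level yield loss as a percentage of total production."""
    return yield_loss_pct * affected_area_pct / 100.0

# A 50% yield loss on 1% of the area amounts to a 0.5% farm-level loss ...
print(effective_yield_loss(50, 1))   # 0.5
# ... whereas the same 50% loss on 20% of the area amounts to a 10% loss.
print(effective_yield_loss(50, 20))  # 10.0
```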
In order to characterise the diversity of the farms, we asked whether there was any other farm activity besides arable farming. Business diversification can broaden the income base and enhance the viability of a business [99]. Income dependency on arable products can be reduced, highlighting the compelling need to maintain a productive soil through soil conservation measures. In all four groups of farming sectors, a higher percentage applied measures than did not, and there was no significant (p = 0.39) link between farm diversification and the application of measures (Table 8).

Table 8. Number of participants within each farming sector and corresponding numbers within the groups "measure applied" and "no measure applied" (n = 154).

Farm size is also an economic condition. An increased farm size, for which we observed a higher rate of application of management measures (Figure 7a), may increase farm income and also the capability for risk management [100]. The higher income and the greater machinery and human resources on larger farms allow the financial and organisational flexibility that is needed for the "planning/management" measures under consideration. These measures also require a certain amount of strategic thinking, as they are more organisational in nature and less based on technical solutions that are already more established (e.g., wide tyres). An increasing farm size can foster innovation, whereas running the farm part-time, for which we observed a lower rate of application of management measures (Table 5), can hold back innovation [101].

Recommendations and Options for Action

From our results, we derived various options for action that will support and promote soil conservation. They are: (1) an objective assessment of the relevance of soil compaction for farmers, (2) research and development activities to identify soil damage using non-invasive methods, and (3) recommendations for soil protection in agricultural practice. Measures in these three areas support different objectives, address different target groups, and can thus be used in the sense of a modular system.

(1) We recommend the development of methods that allow farmers to conduct a "soil compaction" survey of their soils via low-threshold offers. Regional soil characteristics and the crops grown, but also the use of already existing data, e.g., from field documentation, need to be considered. Some methods are already in place, such as the "Simple soil structure assessment for the farmer" [102] or the "BASIS TERRA BOX" [103] with materials and a method manual for the analysis and evaluation of soil conditions. These are to be refined and communicated more effectively (3, iii). The overall aim is to achieve a better self-assessment of the risk of soil compaction by farmers and thereby to promote the application of soil protection measures (3).

(2) Activities to identify soil damage with non-invasive methods are currently at an early research stage, using close-range sensing via drones and remote sensing with satellite data. While close-range sensing allows short-term and event-related interventions, analyses of satellite remote sensing data are rather evaluations of time series, and the timing of image acquisition cannot be influenced by the researcher. Once these methods are applicable on a large scale, they can support the proposed actions (3), e.g., by identifying areas that are particularly threatened by or vulnerable to soil compaction and therefore deserve support.

(3) In soil protection, three types of support can be distinguished: (i) investment support for technical measures, such as tyre pressure control systems, (ii) area-related support in the context of agri-environmental measures for the application of soil conservation practices, and (iii) expansion of knowledge transfer to prevent soil compaction and disseminate soil conservation measures.

(i) Investment support for the establishment of technical measures aims to increase the equipment available for the application of technical soil conservation measures by the farmer or the contractor. Advantageously, such funding is easy to administer. Disadvantages are the risk that investment support may be taken up even though the investments in soil conservation measures would also have taken place without it, and the limited possibility to control the application of the technical measures. (ii) In the context of agri-environmental measures, the application of specific measures can be made more attractive through area-related support. The aim is to promote the use of soil-conserving measures specifically for critical operations such as manure spreading in spring or the sugar beet harvest. (iii) The expansion of knowledge transfer on soil conservation is aimed at professional farmers and contractors as well as those in training or education. In addition to traditional knowledge transfer activities, peer-to-peer formats should also be promoted. Particularly for education and training, it is important to examine how soil protection is currently addressed and which improvements are conceivable. The expansion of knowledge transfer can in turn promote the appropriate application of soil condition assessment methods by farmers (1) and the acceptance and uptake of possible funding options (i, ii).

Regarding possible target groups, we see a need for action in addressing contractors, farmers in training and further education, as well as part-time farmers. For these target groups, it is necessary to create suitable information opportunities that address their specific needs (e.g., little time for new impulses, narrow time windows for crop management).
Conclusions

Our study is the first record of the adoption of mitigation measures to avoid or reduce soil compaction in Germany, although we assume that a follow-up study with a larger and more representative sample is needed. Farmers sometimes need to take contradictory requirements into account in their decisions (economics, market demands, delivery dates, arable restrictions), of which the avoidance of soil compaction is only one aspect [35]. Thus, the application of mitigation measures to prevent soil compaction seems rather to be an add-on within farm management that is taken up when the farm is large enough to provide the economic flexibility for voluntary measures. We found few significant differences between the group of farmers who apply measures and those who do not. However, it is important to keep in mind that correlation is not causality, that no single factor alone can explain the application or non-application of soil conservation measures, and that there might be socio-psychological components beyond what a quantitative survey can cover [81,104]. Thus, we suggest qualitative follow-up in-depth surveys and interviews on the variables that drive farmers' decisions for or against a measure. Against the background of supporting a transition of agricultural practices towards soil conservation, more educational work is needed. This concerns formal education as well as informal education and advisory services, since they shape the socio-psychological background of farmers.

Table A2. Overview of the analysed groups, underlying variables, applied questions, and question types (translated into English from the original questionnaire) to investigate technical soil protection. For the original version of the questionnaire, see "Data Availability Statement". (Germany-wide survey: "Technical soil protection" 2017).
Regulatory autonomy and regulatory chill in Opinion 1/17

This article analyses the aspect of the Court's reasoning in Opinion 1/17 that focuses on the regulatory autonomy of the Parties to the Comprehensive Economic and Trade Agreement (CETA) to decide on levels of protection of public interests. The European Court of Justice's (ECJ) introduction of regulatory autonomy under EU law coincides with the wider debate around 'regulatory chill' under international investment law. This article finds the ECJ's concept of regulatory autonomy to be narrower than that of the regulatory chill hypothesis put forward by critics of investor-state dispute settlement (ISDS). It further analyses the ECJ's reasoning that the CETA's investment tribunals do not have jurisdiction to call into question the levels of protection sought by the EU. In so doing, it will critically evaluate the certainty of the ECJ's promise that there will be no negative effect on public interest decision-making through CETA's investment chapter. Finally, it will explore the legal consequences of Opinion 1/17 for future awards and investment agreements.

1. Introduction

The lively public debate on investment arbitration in recent years is in part the result of public fears of 'regulatory chill' that may result from investor claims against government public interest action under investment agreements. 1 It is not the only ground for opposing investor-state dispute settlement (ISDS), 2 but concerns over regulatory chill have made the debate more prominent. The debate around regulatory chill and ISDS is often based on a number of well-known examples of actual litigation resulting in a regulatory change on the part of a government. For instance, the government of New Zealand delayed the introduction of its plain packaging legislation for tobacco products for six-and-a-half years until the investment arbitration case initiated by Philip Morris against Australia had been resolved. 3 The government of Indonesia reversed its ban on open-cast mining in several protected forests following the threat of ISDS arbitration. 4 The government of Romania requested that a World Heritage Site nomination be referred back following a claim brought by a Canadian mining company because of delays in permitting procedures surrounding the biggest open-cast gold mine in Europe. 5

Regulatory chill can take various forms and essentially comes down to an effect whereby the government delays, waters down, or otherwise negatively affects public interest decision-making out of fear of investment arbitration litigation. Regulatory chill caused by investment arbitration played a significant part in debates surrounding the negotiations of both the Comprehensive Economic and Trade Agreement (CETA) with Canada and the proposed Transatlantic Trade and Investment Partnership (TTIP) with the United States. On the one hand, academics, civil society organisations, and various political groups have warned that regulatory chill may result from the investment arbitration provisions contained in these agreements. 6 On the other hand, the Commission has argued that these agreements contain sufficient guarantees that public interest decision-making would not be affected. 7

The request for Opinion 1/17 by the Belgian government did not concern a request for a clarification on this debate. The request raised four different points for legal clarification by the European Court of Justice (ECJ), none of which touched upon the issue of regulatory chill, even indirectly. 8
In fact, the scope of Belgium's request was limited to Section F of Chapter Eight of CETA (the Investment Court System, ICS), not the entire investment chapter including the substantive provisions. In that sense, the request was by and large inspired by the ECJ's decision in Opinion 2/13, which dealt with the possible negative effects of the EU's accession to the European Convention on Human Rights (ECHR) on the ECJ's own powers. The Court's case-law on autonomy and external oversight mechanisms under international law had so far focused on a judicial understanding of autonomy based primarily on the powers of the EU's judiciary. Nonetheless, the ECJ decided to weigh in on this public debate in Opinion 1/17, providing the Commission with a helpful formal legal authority in the public debate in Europe. 9

In Opinion 1/17 the Court moved away from that judicial understanding of the autonomy of the EU legal order and added a regulatory understanding of that concept. This regulatory understanding focuses more on the independence of the institutions involved in the EU's regulatory processes. The ECJ thus introduced a new test for the constitutional limits on the EU and the Member States when concluding agreements with dispute settlement provisions in general. It held that

if the Union were to enter into an international agreement capable of having the consequence that the Union - or a Member State in the course of implementing EU law - has to amend or withdraw legislation because of an assessment made by a tribunal standing outside the EU judicial system of the level of protection of a public interest established, in accordance with the EU constitutional framework, by the EU institutions, it would have to be concluded that such an agreement undermines the capacity of the Union to operate autonomously within its unique constitutional framework. 10

However, after scrutinising the relevant substantive investment standards in CETA, the Court concluded that the ICS tribunals would not be in a position to require the EU institutions to change the level of protection of a public interest. The Court concluded that,

by expressly restricting the scope of Sections C and D of Chapter Eight of that agreement . . . the Parties have taken care to ensure that those tribunals have no jurisdiction to call into question the choices democratically made within a Party relating to, inter alia, the level of protection of the public order or public safety, the protection of public morals, the protection of health and life of humans and animals, the preservation of food safety, protection of plants and the environment, welfare at work, product safety, consumer protection or, equally, fundamental rights. 11

On the face of it, the ECJ's Opinion appears positive from the perspective of those concerned that these public interests may be affected by agreements such as CETA. After all, the ECJ suggests that the investment chapter of CETA will not have a negative impact on the level of protection set by any of the Parties to the agreement. However, these guarantees offered by the ECJ do raise questions. It is, after all, up to the ICS tribunals, and not the ECJ, to interpret and apply the investment provisions in CETA and to weigh public interests against the freedom to conduct business. Will these tribunals follow the ECJ's assessment of CETA, and what exactly are the parameters of that assessment?
When are tribunals calling into question the choices democratically made within a Party relating to the level of protection of various public interests, and when are they merely 'confining' themselves to applying the CETA investment provisions without affecting the level of protection of a public interest sought by one of the Parties? What is more, the ECJ focuses solely on the actions of the tribunals themselves, whereas regulatory chill is a more dynamic concept: it concerns government action that can be anticipatory but can also be a reaction to threats and claims by investors, as well as to actual awards by tribunals. In addition, while the Court suggests it is vouching for CETA's investment chapter's guarantee of the autonomy of regulatory action in general, in reality it is only concerned with the regulatory autonomy of the EU institutions, and thus not with the regulatory autonomy of third states or the Member States.

9 The Court apparently decided to take up this issue because Belgium and 'some' of the governments submitting observations had alluded to the fact that the ICS tribunals would be required to weigh public interests against 'the freedom to conduct business'; see para 137 of the Opinion. This was sufficient for the Court to proceed with its analysis of the autonomy of EU decision-making in the public interest, citing the need to 'respond to those doubts'.
10 Opinion 1/17 (CETA) ECLI:EU:C:2019:341, para 150.
11 ibid, para 160.

This article proceeds in the context of one policy area where the issue of regulatory chill and investment arbitration is particularly relevant: climate change mitigation. It will use this context for two reasons. First, regulatory chill over climate change measures has been a particular concern for elected representatives in the EU as well as within Member States. The European Parliament's resolution of 14 October 2015, adopted in the run-up to the Paris Agreement, for instance, calls on the Commission and the Member States

to ensure that any measure adopted by a Party to the Paris Agreement relating to the objective of stabilising greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system, or relating to any of the principles or commitments contained in Articles 3 and 4 of the United Nations Framework Convention on Climate Change, will not be subject to any existing or future treaty of a Party to the extent that it allows for investor-state dispute settlement. 12

Second, as this article will argue below, climate change policies may be susceptible to regulatory chill because, on the one hand, typical measures to promote renewables such as feed-in tariffs are vulnerable to challenge under trade and investment agreements and, on the other, investment arbitration presents an opportunity for the fossil fuel industry to stall government action harming its investments.

This article will proceed as follows. The second section will analyse the relationship between the ECJ's definition of regulatory autonomy and 'regulatory chill'. It will explore how 'regulatory chill' is understood in the context of international investment law and then compare it with the ECJ's definition of regulatory autonomy in relation to the ICS in CETA. The third section will analyse the ECJ's application of its test of regulatory autonomy to the investment chapter in CETA. It will describe the ECJ's test and then comment on various aspects of the ECJ's analysis and conclusions.
The fourth section will explore the possible legal consequences of CETA tribunals exceeding their jurisdiction by calling into question the levels of protection of public interests set by the EU, as understood by the ECJ. The final section will conclude.

2. Regulatory autonomy in Opinion 1/17 and regulatory chill

2.1. The regulatory chill hypothesis and ISDS

Tienhaara has developed a useful understanding and dichotomy of the concept of regulatory chill in the context of investment arbitration. According to her, investment arbitration has no direct impact on public interest regulation because governments cannot be forced by investment tribunals to roll back regulations that have been put in place. However, there may be significant indirect effects. Regulatory chill is then understood as the phenomenon whereby 'governments will fail to enact or enforce bona fide regulatory measures (or modify measures to such an extent that their original intent is undermined or their effectiveness is severely diminished) as a result of concerns about ISDS'. 13 These concerns relate both to the substantial financial risks involved in investment arbitration cases and to the difficulty of predicting the outcome of a given case. 14 Importantly, this definition of regulatory chill covers delays in regulatory action as well as the modification or abandonment of a particular course of regulatory action.

Furthermore, when analysing the concept of regulatory chill, it is important to keep in mind when regulatory chill occurs, what causes it in the context of ISDS, and which factors may influence decision-makers to negatively change the course of regulatory action. In relation to timing, regulatory chill is predominantly anticipatory or may be the result of a settlement between the investor and the government subject to a claim. Thus, a government may, at the very early stage of regulatory action when rules are being developed, decide to opt for one course of action over another for fear of any potential claims it may face in the future. At the other end of this spectrum, the government decides either to settle a case with an investor or to take regulatory action in order to limit the amount of damages awarded to the investor. For instance, in the Vattenfall I case, Germany and Vattenfall reached an amicable settlement as a result of commitments by the Hamburg government to lower the environmental restrictions imposed on the Moorburg coal-fired power plant in Hamburg, which had made the project 'uneconomical'. 15 In relation to the latter, the CETA text is an example of an agreement that explicitly provides governments with a financial incentive to take regulatory action to limit the amount of damages awarded to the investor. Article 8.39(3) CETA provides that for 'the calculation of monetary damages, the Tribunal shall also reduce the damages to take into account any restitution of property or repeal or modification of the measure'. More unusual is the hypothetical situation whereby a government takes regulatory action after it has been ordered to pay damages. In other words, regulatory chill happens as the result of an award, not because of the claim itself, the threat of a claim, or because a regulatory measure is designed in anticipation of such a claim.

12 European Parliament resolution of 14 October 2015 on Towards a new international climate agreement in Paris (2015/2112(INI)) 80.
13 Tienhaara (n 3) 233.
14 ibid.
While such regulatory action may be too late to avoid paying damages to the investor obtaining the award, it may prevent similar claims from arising in the future. 16

On the second point (what causes regulatory chill in the context of ISDS), regulatory chill may be caused by a claim, an award, or the terms of the investment agreement itself. These causes may be specific and direct, in the sense that they respond to a particular claim by an investor against the regulator itself, or more indirect, a response to a claim or award against another country.

Lastly, the factors that may be relevant in causing regulatory chill are diverse and in essence revolve around a risk assessment by the regulator in relation to ISDS. Structural aspects of the investment agreement itself may determine the likelihood of regulatory chill, such as the procedural ease with which a claim can be brought. Relevant provisions include those that require an investor to exhaust domestic remedies first, that limit the ability to bring claims on behalf of shell companies, or that allow for counterclaims against investors. Substantive provisions that may limit the risk of regulatory chill are carve-out provisions or limitations on the rights of investors in an investment agreement. Factors connected to the regulator may be the willingness of a regulator to comply with the standards in an investment agreement weighed against the degree of importance attached to a particular regulatory measure. For instance, a regulator may be more inclined not to pursue a particular course of regulatory action if it is not particularly attached to obtaining a regulatory goal but does attach great importance to compliance with investment agreements. Factors connected to the investor may be the credibility of a particular claim or threat of a claim, or the amount of damages demanded. Lastly, factors connected to a particular policy field or type of regulatory action may increase the risk of regulatory chill. Some policies are simply more difficult to square with what investment agreements protect. An obvious example is the UK Labour Party's idea to renationalise the UK's electricity network in order to decarbonise the economy. 17 The feasibility of this plan may to a large extent depend on how and under what conditions investors will be compensated, something that most investment agreements provide for.

What is more, two structural features of climate change policies in particular make such policies susceptible to regulatory chill. The first is that climate change policies are often discriminatory in nature. Government action promoting renewables consists of various forms of support for domestic renewables and for that reason is vulnerable to challenge before the World Trade Organization (WTO), EU courts, and investment arbitration tribunals. The international framework to combat climate change takes a bottom-up approach focusing heavily on domestic efforts to mitigate climate change. The Paris Agreement, for instance, stipulates that in order to achieve the Paris goals of staying below two degrees of global warming, a Party 'shall prepare, communicate and maintain successive nationally determined contributions that it intends to achieve. Parties shall pursue domestic mitigation measures, with the aim of achieving the objectives of such contributions.' 18 Government efforts have in large part focused on promoting the production of renewables within their own territories in order to achieve international climate change goals.
Such measures allow governments to be more directly in control of greenhouse gas (GHG) emission reductions within their own territory than more diffuse consumer-oriented policies would. EU efforts are subdivided into efforts by Member States that individually have mitigation targets contributing to the EU's overall target to reduce GHGs. According to the preamble of the EU's renewables Directive,

Member States have different renewable energy potentials and operate different support schemes at national level. The majority of Member States apply support schemes that grant benefits solely to energy from renewable sources that is produced on their territory. For the proper functioning of national support schemes, it is vital that Member States continue to be able to control the effect and costs of their national support schemes in accordance with their different potentials. 19

Governments, including those of EU Member States, employ a variety of mechanisms through which domestic renewables production is supported. These vary from quota obligations and green certificates requiring electricity suppliers to purchase a certain quantity of domestic renewables, to price-based mechanisms such as feed-in tariffs that guarantee renewables producers a certain price for their supply. In addition, governments (including in the EU) resort to local content requirements in promoting renewables policy. 20

Second, most investment agreements protect investors against significant and sudden changes in the regulatory framework that harm their investments. 21 It is precisely this type of regulatory action that is currently needed in order to achieve the goals of the Paris Agreement, and such regulatory action may harm investments in the fossil fuel industry; measures such as the revocation or restriction of exploitation permits, restrictions on the sale or use of fossil fuels, or fuel quality standards may equally be susceptible to challenge. An example is the announcement by the German company Uniper that it was preparing a billion-euro claim under the Energy Charter Treaty (ECT) against the Netherlands over the decision to ban the use of coal in power plants by 2030. Uniper had committed to constructing a coal-fired power plant in the Netherlands that became operational in 2016 and now argues that the ban on the use of coal amounts to indirect expropriation without compensation as required by the ECT. Uniper has stated that it will file the claim as soon as the Netherlands adopts the coal use ban in Parliament.

As Tienhaara has argued, investment agreements have been designed primarily to protect the status quo. Conversely, compliance with the objectives of the Paris Agreement will require radical change:

a future in which governments have met the collective goal of keeping below the 2°C guardrail is a future without fossil fuels. Civil society and governments at all levels will have to fight for this future, regardless of whether any of the recently negotiated regional trade agreements ever actually come into force. However, providing fossil fuel corporations with ISDS under these agreements is akin to handing your opponent extra weapons and ammunition before stepping onto the battlefield. Fossil fuel corporations will always have sufficient incentive to bring ISDS cases because they are fighting for their survival.
For as long as there is any ambiguity in the substantive provisions of investment agreements - allowing cases to play out over several years, cost millions, and leave governments uncertain about outcomes - there will be policy delays. In a rapidly warming world, we simply cannot afford these delays. 22

As of 2019, the International Centre for Settlement of Investment Disputes (ICSID), a major investment arbitration institution of the World Bank, had already recorded that 24% of its registered cases involve the oil, gas and mining sector. 23 There is good reason to expect that the fossil fuel industry may resort to investment arbitration in the future. Global fossil fuel investments - generally long-term upstream extraction projects - are still expanding, totalling US$935 billion in 2018 alone, while government action necessary to achieve the goals of Paris may result in many of these investments becoming stranded. 24 Research suggests that no less than 80% of current global coal reserves, a third of oil reserves and half of the world's gas reserves should not be exploited from 2010 to 2050. 25 Investors in fossil fuel extraction thus have a clear financial incentive to turn to tools such as investment arbitration to secure their investments. Several other exacerbating factors - climate change policies posing an existential threat to the business model of the fossil fuel industry, past experience and knowledge of investment arbitration, the financial benefits of delays to the introduction of climate change policies alone, and the lack of any clear disadvantages of using investment arbitration for the fossil fuel industry - all add to a heightened risk of claims targeting climate change policies. 26

Tienhaara suggests that three forms of regulatory chill can be distinguished: internalisation chill, threat chill and cross-border chill. Internalisation chill is the effect whereby decision-makers take into account the potential of investment disputes before drafting policy, pre-empting disputes in a more general way, and thereby 'prioritizing the avoidance of such disputes over the development of efficient regulation in the public interest'. 27 Internalisation chill is very difficult to measure, and evidence of this type of chill is at best mixed. 28 However, regulatory policies that screen public interest measures for their potential impacts on trade and investment before such measures are adopted could produce this particular chill effect. Within the EU, the Commission's regulatory policy requires officials to take account of the impact of any regulatory proposal on international trade and investment flows and on agreements to which the EU is party, among various other issues. 29 In the context of international trade law, a prominent example of an initiative that has so far not materialised because of fears over its legality under international trade law is the French idea of a carbon border tax. Resisted so far within the EU out of concerns over compatibility with the EU's trade commitments, the new President of the Commission, Ursula von der Leyen, has promised the European Parliament that it will be introduced, with the caveat that it would need to be 'fully compliant with WTO rules'. 30

Threat chill 'concerns the chilling of specific regulatory measures that have been proposed by governments following an investor's threat to arbitrate'. 31 This type of chill is the most familiar form of chill and does not depend on prior knowledge of investment agreements by government officials.
In fact, a lack of prior knowledge may exacerbate the chilling effect because government officials may lack the ability to properly assess the viability of the threat. The use of investment claims as a threat is well documented. When such threats are made, government officials, bound by constraints in time and resources, will have to assess the viability of such threats. In other words, there is an element of risk analysis involved on the part of the government as to whether to pursue a particular measure. Lastly, cross-border chill is where a government fails to enact or enforce a measure, or modifies a measure that is contemplated and easily transferrable in several jurisdictions, because of an investment arbitration claim against another country. A clear example of this form of chill is New Zealand's decision to delay its plain packaging legislation until the investment arbitration case initiated by Philip Morris against Australia had been resolved. 32

2.2. Regulatory autonomy in Opinion 1/17

The Court's position in Opinion 1/17 on the autonomy of the regulatory process starts with the observation that Belgium and some of the governments involved in the proceedings had stated that the CETA Tribunal 'might . . . weigh the interest constituted by the freedom to conduct business . . . against public interests' set out in EU law. 33 The Court then proceeds by seeking to answer the question whether situations in which tribunals would give rulings on acts of secondary EU law would adversely affect the exclusive jurisdiction of the Court over the definitive interpretation of EU law. The Court notes in that regard that such situations 'are likely to occur often' and that the definition of the concept of 'investment' in CETA 'is particularly broad' and may 'concern measures in any area that relates, within the Union, to business activity and the use of moveable or immovable property, securities, intellectual property rights, claims to money and any other type of investment'. The Court also notes that the EU would not be in a position to object to the jurisdiction of the tribunals in question. Moreover, in terms of the types of acts that may be brought before the tribunal, the Court states that CETA does not 'preclude that measure from being one of general application or from implementing an act of general application'. 34

The Court's reasoning then takes a notable turn away from the question of the interpretation of EU law after it finds that the tribunals in question can only award damages and cannot annul or order changes to domestic legislation. In paragraphs 148 to 151, the Court finds that the jurisdiction of those tribunals would adversely affect the autonomy of the EU legal order if it were structured in such a way that those tribunals might, in the course of making findings on restrictions on the freedom to conduct business challenged within a claim, call into question the level of protection of a public interest that led to the introduction of such restrictions by the Union with respect to all operators who invest in the commercial or industrial sector at issue of the internal market, rather than confine themselves to determining whether the treatment of an investor or a covered investment is vitiated by a defect mentioned in Section C or D of Chapter Eight of the CETA.
If the CETA Tribunal and Appellate Tribunal were to have jurisdiction to issue awards finding that the treatment of a Canadian investor is incompatible with the CETA because of the level of protection of a public interest established by the EU institutions, this could create a situation where, in order to avoid being repeatedly compelled by the CETA Tribunal to pay damages to the claimant investor, the achievement of that level of protection needs to be abandoned by the Union. If the Union were to enter into an international agreement capable of having the consequence that the Union - or a Member State in the course of implementing EU law - has to amend or withdraw legislation because of an assessment made by a tribunal standing outside the EU judicial system of the level of protection of a public interest established, in accordance with the EU constitutional framework, by the EU institutions, it would have to be concluded that such an agreement undermines the capacity of the Union to operate autonomously within its unique constitutional framework. It must be emphasised, in that regard, that EU legislation is adopted by the EU legislature following the democratic process defined in the EU and FEU Treaties, and that that legislation is deemed, by virtue of the principles of conferral of powers, subsidiarity and proportionality laid down in Article 5 TEU [Treaty on European Union], to be both appropriate and necessary to achieve a legitimate objective of the Union. In accordance with Article 19 TEU, it is the task of the Courts of the European Union to ensure review of the compatibility of the level of protection of public interests established by such legislation with, inter alia, the EU and FEU Treaties, the Charter and the general principles of EU law. 35

32 See n 3.
33 Opinion 1/17 (CETA) ECLI:EU:C:2019:341, para 137.
34 ibid, para 143.

2.3. Comparing regulatory chill and regulatory autonomy

A first notable aspect of the ECJ's test is the ECJ's recognition that claims for damages by investors may have an impact on the public decision-making process. The ECJ's test does not consider the autonomy of the regulatory process in the EU to be affected only if external tribunals had jurisdiction to invalidate EU rules; it considers indirect effects to be relevant as well. Similar to the understanding of regulatory chill described above, the ECJ does not understand an adverse effect on the autonomy of the EU legal order to come solely from a power to directly force the repeal or amendment of a particular measure. Instead, repeated awards of damages may result in the same effect and can therefore also adversely affect the autonomy of the EU legal order.

A second notable aspect is that the ECJ's test centres around amending or withdrawing legislation because of the level of protection set by the EU. The ECJ's test thus appears not to be concerned with any delays in regulatory action that may result from ICS litigation. Similarly, the ECJ's test is backward-looking in the sense that it does not appear to concern anticipatory action of governments in relation to public interest decision-making as a result of ISDS. Governments may simply decide not to introduce legislation because of ISDS. The Court, however, refers to abandonment of a particular level of protection in order to avoid being 'repeatedly' compelled to pay damages.
This suggests a higher threshold before the autonomy of the EU legal order is adversely affected than the EU abandoning the achievement of a level of protection simply as a result of being compelled to pay damages once. In fact, the ECJ's test appears to centre around a very narrow set of factual circumstances in which a government has enacted legislation first, a foreign investor subsequently obtains an award that calls into question the level of protection set by the EU, and then, as a result of this award, the government is 'repeatedly' forced to pay damages because of subsequent litigation by that same investor. Only in this situation, according to the ECJ, is the EU forced to abandon its level of protection. It is hard to imagine when and how such a situation would occur. A level of protection may of course also be abandoned simply in order to avoid paying damages a single time. Indeed, the text of CETA expressly provides for this possibility. Article 8.39(3) CETA provides: 'For the calculation of monetary damages, the Tribunal shall also reduce the damages to take into account any restitution of property or repeal or modification of the measure.'

The ECJ's test further concerns only those structural aspects of the jurisdiction of tribunals that relate to the substantive standards of the agreement. It does not concern procedural aspects, such as the procedural ease with which a claim may be brought (e.g. the potential existence of a clause requiring the exhaustion of domestic remedies, a clause on frivolous claims, or third-party funding), or the size of claims or the way damages may be calculated. The ECJ's focus is on whether or not tribunals would be calling into question the levels of protection set by the EU rather than confining themselves to finding a breach or not of the investor rights contained in the agreement.

Lastly, the ECJ's test is applicable to EU regulatory action and to Member State regulatory action in so far as Member States are implementing EU law. It is thus not concerned with any adverse effects on the regulatory process that may occur in third countries party to an agreement with the EU. The ECJ's test is thus strictly internal and contains no reference to Treaty provisions that suggest that the EU has as its task to uphold and promote its democratic values in its relations with the wider world. 36 While the ECJ's test may certainly indirectly contribute to less intrusive investment standards being adopted between the EU and third countries, Opinion 1/17 would have little legal value in restraining tribunals deciding on the regulatory action of those third countries, for instance in the course of annulment proceedings of an award before a court in an EU Member State.

2.4. The ECJ's own case-law on non-contractual liability

The ECJ's test on preserving the regulatory autonomy of the EU legal order under EU external relations law differs from the ECJ's own case-law on the non-contractual liability of the EU institutions under Article 340 Treaty on the Functioning of the European Union (TFEU). While the ECJ in its case-law on non-contractual liability is also wary of the deterrent effect of damages claims by individuals on the EU's decision-making process, it appears considerably more protective of the EU decision-making process in that context. What is more, that standard on non-contractual liability may have undergone some erosion as a result of Opinion 1/17. Currently, the Court of Justice employs a high standard in allowing claims for damages for EU discretionary acts. 37
The ECJ considers that EU institutions should have a 'wide discretion' in implementing EU policy and therefore takes a 'strict' approach towards liability. 38 This means that the applicant is required to demonstrate not only a breach of a superior rule of law which is intended to give rights to individuals, but also that this breach is flagrant, meaning that the EU institution manifestly and gravely disregarded the limits on its discretion. 39 Factors that are relevant in this regard are the intent, justifiability and clarity of the rule breached. 40 The reason for this high standard is that the 'exercise of the legislative function must not be hindered by the prospect of actions for damages whenever the general interest of the Community requires legislative measures to be adopted which may adversely affect individual interests'. 41 Accordingly, it is very difficult to claim damages under Article 340 TFEU. 42

To illustrate this very strict approach, it is perhaps worthwhile to compare the success rates of claimants in proceedings under Article 340 TFEU and under investment agreements containing ISDS. Even though the sheer number of 'superior rules of law' is considerably larger than the three or four investor rights generally contained in investment agreements, the success rate for claimants under the latter provisions is significantly higher. The ECJ's deferential case-law towards the EU institutions in damages claims has resulted in only 23 out of 530 damages claims brought against the EU institutions being successful. This is a success rate of 4.3%. Eight of these cases involved disputes over milk quotas, and the majority of the cases involved matters that fell within the Common Agricultural Policy. With the notable exception of the Schneider Electric case, the amounts of compensation were insignificant. Investors, by contrast, have been almost seven times as successful before ISDS tribunals. According to statistics of the United Nations Conference on Trade and Development (UNCTAD), of the 602 cases concluded so far, 173 resulted in an award in favour of the investor. This is a success rate of 28.7%. A 2018 UNCTAD study found that

on average, successful claimants were awarded about 40 per cent of the amounts they claimed. In cases decided in favour of the investor, the average amount claimed was $1.3 billion and the median $118 million. The average amount awarded was $504 million and the median $20 million. These amounts do not include interest or legal costs, and some of the awarded sums may have been subject to set-aside or annulment proceedings. 43
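For transparency, the comparison just made reduces to simple ratios; the following lines restate the figures cited above, including the 'almost seven times' multiple, in arithmetic form.

```latex
% Ratios underlying the success-rate comparison (figures as cited in the text):
\[
\frac{23}{530} \approx 4.3\%, \qquad
\frac{173}{602} \approx 28.7\%, \qquad
\frac{28.7\%}{4.3\%} \approx 6.7
\]
```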
The ECJ's rationale in preserving regulatory autonomy in Opinion 1/17 is different from the rationale in preserving regulatory autonomy for the EU institutions in the context of damages claims under Article 340 TFEU. First of all, under Article 340 TFEU there is no reference to claims being made 'repeatedly' before there may be an effect on the decision-making process. Second, the ECJ's reasoning under Article 340 TFEU is prospective in nature, whereas the test in Opinion 1/17 is retroactive. The ECJ focuses on the withdrawal and amendment of legislation in Opinion 1/17, whereas under Article 340 TFEU the ECJ is concerned with the 'exercise of the legislative function', which must not be hindered by the 'prospect' of actions for damages. In other words, the ECJ has sought under Article 340 TFEU to develop a test so strict that the legislator does not need to worry about damages actions when introducing legislation, whereas in Opinion 1/17 the ECJ considers damages claims problematic for the legislative function only if repeated successful damages claims result in amendments or even withdrawal of already existing rules. Third, Opinion 1/17 does not require the 'defect' in the treatment accorded by the EU institutions to be 'flagrant' or 'sufficiently serious' in order to preserve any 'wide discretion' of the EU institutions, as is the case under Article 340 TFEU. Rather, the ECJ considers the autonomy of the EU legal order preserved if the tribunals in question merely apply the CETA Agreement. Thus, whereas under Article 340 TFEU the ECJ requires a breach to be a flagrant breach in order to protect the legislative function of the EU institutions, no such additional requirement is necessary under the ECJ's test in Opinion 1/17. Rather, the ECJ looks at the substantive provisions themselves in order to determine whether the tribunals in question would have the jurisdiction to call into question the level of protection set by the EU.

37 Whether the safeguards and procedural changes introduced with CETA will result in a success rate similar to that of damages claims within the EU remains to be seen. While the standards in CETA are generally more circumscribed than some of the broadly worded agreements from the early 1990s, CETA maintains the basic tenets of international investment agreements, and it contains the same rights and the same standards for calculating the level of monetary awards as the more modern investment agreements concluded by, for instance, the United States and Canada.

Moreover, that standard on non-contractual liability may have undergone some erosion as a result of Opinion 1/17. Under Article 340 TFEU it is not possible to claim damages for acts that are lawful under EU law. 44 However, in Opinion 1/17 the Court attaches no such preconditions to damages claims in the context of the autonomy of the EU legal order. This widens the scope for damages claims against the EU institutions, as acts that are lawful under EU law, and as such are not susceptible to damages claims under Article 340 TFEU, may nonetheless result in damages claims by individuals under international agreements to which the EU is party.

3. An analysis of the ECJ's application of its own test of regulatory autonomy to CETA

3.1. The ECJ's assessment of CETA Chapter Eight Sections C and D

After setting out the general conditions for the jurisdiction of international tribunals to adjudicate upon public interest decision-making by EU institutions, the Court proceeds by applying the test to the CETA provisions. First, the Court refers to one of the general exceptions contained in Article 28.3 of CETA, namely paragraph 2 of that article. This provision is similarly worded to General Agreement on Tariffs and Trade (GATT) Article XX and Article 36 TFEU and does not include the protection of the environment or combating climate change as an exception. It only applies to Section C of CETA's investment chapter (three provisions), which includes the national treatment and most-favoured-nation standards.
Article 28.3 provides that, subject to the requirement that such measures are not applied in a manner which would constitute a means of arbitrary or unjustifiable discrimination between the Parties where like conditions prevail, or a disguised restriction on trade in services, nothing in the agreement should be construed to prevent the adoption or enforcement by a Party of measures necessary for the listed public interests. From this provision, the Court infers that 'the CETA Tribunal has no jurisdiction to declare incompatible with the CETA the level of protection of a public interest established by the EU measures specified in [Article 28.3.2 CETA] and, on that basis, to order the Union to pay damages'. 45

The Court then proceeds to cite Articles 8.9.1 and 8.9.2 in Section D of CETA and Points 1(d) and 2 of the Joint Interpretative Instrument on CETA. Article 8.9.1 'reaffirms' for the purposes of that section the Parties' 'right to regulate' in the public interest. Article 8.9.2 states that 'the mere fact that a Party regulates, including through a modification to its laws, in a manner which negatively affects an investment or interferes with an investor's expectations, including its expectations of profits, does not amount to a breach of an obligation under this Section'. 46 The relevant point of the Joint Interpretative Instrument provides that CETA will not lower public interest standards, that investors must respect domestic requirements, and that it preserves the ability of the EU and its Member States and Canada to adopt and apply their own laws and regulations that regulate economic activity in the public interest. The Court concludes from these provisions that 'the discretionary powers of the CETA Tribunal and Appellate Tribunal do not extend to permitting them to call into question the level of protection of public interest determined by the Union following a democratic process'. 47

The Court finds affirmation of this conclusion in the definition of indirect expropriation contained in Annex 8-A (a definition modelled after the US model investment agreement) and the provisions of the 'fair and equitable treatment' standard in Article 8.10. The Court interprets Article 8.10.2 to contain an exhaustive list of situations covered by the standard, as opposed to an open-ended list. The Court infers that those two provisions reflect that the Parties to CETA have concentrated on situations where there is abusive treatment, manifest arbitrariness and targeted discrimination, which reveals, once again, that the required level of protection of a public interest, as established following a democratic process, is not subject to the jurisdiction conferred on the envisaged tribunals to determine whether treatment accorded by a Party to an investor or a covered investment is 'fair and equitable'. 48
The Court then concludes its reasoning by stating that

it is apparent from all those provisions, contained in the CETA, that, by expressly restricting the scope of Sections C and D of Chapter Eight of that agreement, which are the only sections that can be relied upon in claims before the envisaged tribunals by means of Section F of that chapter, the Parties have taken care to ensure that those tribunals have no jurisdiction to call into question the choices democratically made within a Party relating to, inter alia, the level of protection of public order or public safety, the protection of public morals, the protection of health and life of humans and animals, the preservation of food safety, protection of plants and the environment, welfare at work, product safety, consumer protection or, equally, fundamental rights. 49

3.2. CETA's investment provisions will in practice not be interpreted by the ECJ, but by ICS tribunals, government officials, lawyers and investors

The ECJ's analysis of CETA focuses exclusively on those provisions the Commission has sought to introduce into CETA to accommodate concerns over regulatory chill. Thus the ECJ does not look at the actual substantive rights given to investors, but rather at some of the specific language introduced to constrain expansive interpretations of those rights. The ECJ looks at the general exception clause contained in CETA, the provisions of the article on 'investment and regulatory measures' that was introduced, as part of the ICS reform package, after the initialling of CETA, the so-called Joint Interpretative Instrument introduced to alleviate concerns in German social-democratic circles in particular in the run-up to the signature of CETA, and parts of the definitions of indirect expropriation and fair and equitable treatment as they were already present in the initialled text of CETA in 2014.

In so doing, the ECJ introduces a caveat in its reasoning that, upon closer examination, does not significantly help reduce the risk of regulatory chill through CETA's ICS. In paragraphs 152 to 160 the Court creates a dichotomy between two types of measures: on the one hand, measures that are the result of 'choices democratically made' that relate to the 'level of protection' of an open list of public interests and, on the other, measures that constitute 'abusive treatment, manifest arbitrariness and targeted discrimination' or a means of 'arbitrary or unjustifiable discrimination between the Parties where like conditions prevail, or a disguised restriction on trade between the Parties'. According to the Court, the CETA tribunals have no jurisdiction to call into question the former, but do have jurisdiction to declare the latter incompatible with CETA and on that basis award damages. This dividing line is of course not unusual in international and European economic law and has been the subject of many disputes in the past decades. 50 Within the WTO, for instance, the legality under the GATT of many public interest measures hinges on whether or not they are compatible with the chapeau of Article XX. 51 However, the Court's guidance as to when measures fall into one category and when into the other is limited and consists in large part of simply citing the actual text of the CETA provisions.
The Court does not dwell on how exactly this demarcation line is to be drawn, nor question by whom this demarcation line is drawn and on what basis, but simply concludes that measures that are the result of choices democratically made that relate to the level of protection of an open list of public interests do not fall within this category. However, drawing this demarcation line is far from self-evident. A body making such a determination can take a deferential stance on choices made by a Party to pursue public interests or a more intrusive stance. The former approach would consider even directly discriminatory measures permissible as long as they can be linked to the pursuit of a public interest, whereas the latter would not. That this demarcation line is not easily drawn between public interest measures a particular adjudicative body needs to label as legitimate, and measures that breach a discrimination provision in an international agreement, is already evident from the ECJ's own internal market case-law. 52

Take, for instance, the contrasting approaches in Advocate General Bot's Opinions in Ålands Vindkraft and Essent and the ECJ's judgments in those cases. 53 The cases concerned the compatibility of Swedish and Belgian support schemes for domestic renewable electricity production with EU rules on the free movement of goods between Member States. Article 34 TFEU prohibits quantitative restrictions on imports and all measures having equivalent effect. However, Article 36 TFEU provides a limited list of public interest exceptions on which Member States can rely to justify any restrictions on the free movement of goods, provided that such measures shall not 'constitute a means of arbitrary discrimination or a disguised restriction on trade between Member States'. The Swedish and Belgian support schemes were based on green electricity certificates that support domestic renewable energy production only. The schemes essentially require electricity suppliers to surrender each year to the authorities a quota of green electricity certificates. These certificates can be obtained by producing green electricity within that Member State or alternatively by purchasing them from domestic green electricity suppliers. Electricity suppliers that imported green electricity from other Member States, however, could either not obtain certificates in that country for this electricity or could not use certificates issued by another Member State.

For Advocate General Bot in Essent, EU free movement law 'preclude[s] such rules, which hinder in a discriminatory way trade between Member States . . . without being justified by imperative requirements relating to environmental protection'. For the Advocate General, 'the national rules at issue, prohibiting as they do guarantees of origin from other countries from being taken into account, do not and cannot have environmental protection as their objective'. 54 The Advocate General came to a similar conclusion in Ålands Vindkraft. For the Advocate General a directly discriminatory measure may be justified on grounds of environmental protection 'provided, however, that it undergoes a particularly rigorous proportionality test, on [sic] which I have referred to as "reinforced"'. 55 The Advocate General applied this proportionality test of suitability and necessity because he could not see how imports of green electricity from other Member States could undermine environmental protection in the importing Member State.
In other words, because the main objective invoked by the Member States was environmental protection and promoting the use of renewable energy sources, the measures at issue did not seem suitable to achieve that objective, since the support schemes in question exclude from their application renewable electricity generated in another Member State. In reaching this conclusion, the Advocate General placed considerable emphasis on the importance of trade liberalisation and creating an internal market for green electricity in the EU, and through comparative advantage a 'more rational location of production'. 56

The approach of the ECJ in both cases was less ideological and considerably more deferential towards the Member States and the EU institutions as to the methods of achieving the goals of climate change mitigation and environmental protection. The Court found that promoting the use of renewable energy sources for the production of electricity was in principle capable of justifying barriers to the free movement of goods. 57 This was so because such promotion contributed to the protection of the environment as it contributes to the reduction of GHG emissions. 58 The increase in renewables production was, according to the Court, 'one of the important components of the package of measures needed to reduce greenhouse gas emissions' and to comply with international agreements to which the EU was party. 59 The Court also noted that this increase in renewables production is 'designed to protect the health and life of humans, animals and plants, which are among the public interest grounds listed in Article 36 TFEU'. The Court also pointed out that Article 194(1)(c) TFEU states that the development of renewables is one of the objectives that guides EU energy policy. 60

The Court then proceeded to a lengthy analysis of the support scheme's compliance with the principle of proportionality. Much of the analysis of the Court emphasised the policy approach taken by the EU legislator, which required the EU to achieve its targets of renewables production for the overall energy mix (20% in 2020) through national production targets. In order for this approach to be successful, Member States needed a sufficient level of control over renewables production within their own territories. 61 The Court noted that 'a territorial limitation may in itself be regarded as necessary' in order to promote the increased use of renewable energy in the production of electricity. 62 The choice in particular to focus on renewable energy production, rather than on consumption, was logical for the Court because 'the green nature of electricity relates only to its method of production and that, accordingly, it is primarily at the production stage that the environmental objectives in terms of reduction of greenhouse gases can actually be pursued'. 63 By contrast, this objective becomes more difficult to pursue at the consumption stage, given that it is difficult to determine the specific origin of production. The Court found it 'essential, in order to ensure the proper functioning of the national support schemes, that Member States be able to "control the effect and costs of their national support schemes according to their different potentials", while maintaining investor confidence'. 64 The Court's deferential stance may be explained by the fact that the construction of the EU's internal market is not an end in itself but a means to an end (an ever closer Union).
The EU, after all, seeks to achieve a plurality of objectives, of which climate change mitigation is becoming more and more prominent. Indeed, the ECJ explicitly refers to this treaty objective in its reasoning. Nevertheless, the ECJ does not consider the objectives and the context of CETA relevant in assessing whether or not the tribunals interpreting CETA will be able to properly make the demarcation between legitimate and illegitimate measures. This is somewhat surprising given previous case-law of the ECJ where it appeared to be well aware of the crucial difference that the context and objectives of international agreements can make in terms of interpreting fairly similarly worded text. In Opinion 1/91 the Court held:

The fact that the provisions of the agreement and the corresponding Community provisions are identically worded does not mean that they must necessarily be interpreted identically. An international treaty is to be interpreted not only on the basis of its wording, but also in the light of its objectives. Article 31 of the Vienna Convention of 23 May 1969 on the law of treaties stipulates in this respect that a treaty is to be interpreted in good faith in accordance with the ordinary meaning to be given to its terms in their context and in the light of its object and purpose. 65

The Court then proceeded to compare the objectives of the EU Treaties with those of the European Economic Area (EEA) Agreement and came to the conclusion that both the context and the objectives were fundamentally different. Whereas the EEA Agreement sought to simply remove trade barriers, the EU Treaties had the objective of making 'concrete progress towards European unity' and therefore the free movement provisions, 'far from being an end in themselves, are only means for attaining those objectives'. 66

An analysis of the objectives and the context of CETA is, however, completely absent from the ECJ's reasoning on regulatory autonomy in Opinion 1/17. CETA does not have as its objective the mitigation of climate change or the promotion of peace, democracy, full employment, environmental protection, or any of the other public interests mentioned by the ECJ in its analysis of regulatory autonomy in Opinion 1/17. CETA is simply a trade and investment agreement and has as its objective the liberalisation of trade between the EU and Canada and the protection of foreign investment. It contains no provisions that suggest that the objectives of the agreement go beyond economic liberalisation. At most, the sustainable development chapters simply seek to ensure that trade and investment liberalisation takes place in compliance with already existing environmental and social international obligations to which the Parties have committed themselves. 67 Public interests feature only as exceptions to the overall objective of trade liberalisation and investment protection. In fact, climate change mitigation is not even mentioned as such an exception and is only explicitly mentioned in the sustainable development chapters as an area where the Parties should facilitate trade and investment liberalisation for goods and services that are of relevance for climate change mitigation and where the Parties should collaborate on trade-related aspects of the climate change regime. 68 The latter provision could even be read as a discouragement to take unilateral measures in the absence of international agreement. Overall, the ECJ's guidance is of rather limited value in preventing regulatory chill.
The main problem is that the ECJ does little in actually interpreting the provisions themselves. It merely restates the text of CETA and concludes from these provisions that CETA's investment tribunals will not call into question the level of protection of measures that relate to public interests. In particular, the Court does not elaborate to any significant extent on when measures should be considered to fall within the 'arbitrary' box or, alternatively, when they are legitimate public interest measures. Governments and investors will still be faced with the question whether an (envisaged) measure is pursuing a legitimate public interest or is arbitrary. This may relate to a host of measures contributing to climate change mitigation: from measures promoting domestic renewables to sudden reversals in policies on coal mines, coal-fired power plants, or oil and gas infrastructure, as long as such measures can be framed as manifestly excessive, arbitrary, going against specific legitimate expectations, or discriminatory. In that sense, the ECJ's ruling merely distils a high level of confidence in the ICS tribunals making the 'right' judgment calls rather than providing additional assurances against regulatory chill.

Nonetheless, there are two aspects of the Court's expectations of how CETA should be interpreted that provide some interpretative guidance. First, the ECJ applies a relatively loose test for the causal relationship between the measure and the public interest itself by using the words 'relate to'. A 'democratic choice' made within a Party must only 'relate to' the level of protection of a particular public interest in order for that measure to fall outside the jurisdiction of the CETA tribunals, instead of being 'necessary' or 'essential' to achieve the desired level of protection. This appears to be a relatively generous interpretation from a public interest point of view, at least if one compares it to the text and interpretation of Article XX of the GATT. As such, if the line of the ECJ is followed before ICS tribunals, this particular point will make it easier to argue that a particular measure is not in violation of CETA and to prevent the three forms of regulatory chill identified above. Second, the ECJ makes clear that in its view the list of examples of breaches of the fair and equitable treatment standard is exhaustive, rather than open ended.

Investment disputes pertaining to the level of protection of a public interest after Opinion 1/17

Opinion 1/17 raises several legal questions regarding future investment disputes that pertain to the level of protection of a public interest. Of course, there is the question of what happens if a CETA award appears to breach the limited parameters set by the ECJ. But beyond this question, one may also wonder what will happen to investment disputes brought under Member State investment agreements with third countries (extra-EU bilateral investment treaties, or BITs). Are such agreements potentially incompatible with EU law if they do not contain the same formal 'right to regulate' provisions as CETA? A final question is what will happen to the EU's efforts to negotiate a Multilateral Investment Court, now that the ECJ has linked the establishment of tribunals to the substantive provisions of the investment agreement. The Commission's current mandate is purely procedural in nature.
Disputes before ICS tribunals

Given the limited interpretation of CETA itself by the ECJ, the risk of ICS tribunals interpreting CETA in a way that would contravene the ECJ's understanding of CETA is rather low. ICS tribunals will not declare that they are calling into question the level of protection sought by the EU or a Member State implementing EU law, but will simply classify measures as breaching the standards contained in CETA. While one therefore might argue that an ICS award de facto calls into question the level of protection sought by the EU, it would be quite easy to argue that an ICS tribunal simply confined itself to determining whether the treatment of the investor was vitiated by a defect mentioned in the CETA investment chapter. On the other hand, an ICS tribunal may expressly contravene the ECJ's actual interpretation of CETA, for instance by finding the examples of a breach of the fair and equitable treatment standard non-exhaustive or by applying a higher public interest threshold. ICS tribunals are not bound under international law by the ECJ's interpretation of CETA, and the past practice of investment tribunals in intra-EU disputes shows little deference towards the ECJ. Not a single investment tribunal set up under an intra-EU bilateral investment agreement has so far declined jurisdiction following the ECJ's Achmea judgment. 69

Nonetheless, even if an ICS tribunal were to contravene the ECJ's interpretation of CETA, this would not necessarily mean that CETA would adversely affect the autonomy of the EU legal order. The ECJ may still require a showing that the level of protection set by the EU for a particular public interest has actually been abandoned as a result of repeated damages claims. What is more, any such award must be issued against the EU or a Member State implementing EU law, and thus Opinion 1/17 cannot be relied upon to challenge an award issued against a third country. It may be possible, however, to either seek annulment or challenge any enforcement of an award against the EU before courts in the EU, as the ECJ could view such an award as resulting in the ICS overstepping its jurisdictional boundaries. The possibilities for such litigation are also rather remote, given the enforcement regime favourable to investors in CETA. 70

A different route to preventing regulatory chill may be the issuing of binding notes of interpretation of CETA in relation to climate change or other public interest measures. CETA provides that:

where serious concerns arise as regards matters of interpretation that may affect investment, the Committee on Services and Investment may, . . . recommend to the CETA Joint Committee the adoption of interpretations of this Agreement. An interpretation adopted by the CETA Joint Committee shall be binding on the Tribunal established under this Section. The CETA Joint Committee may decide that an interpretation shall have binding effect from a specific date.

Such a binding note could, for instance, state that measures relating to a non-exhaustive list of public interest measures shall not constitute a breach of Sections C and D of Chapter Eight of CETA. The Parties could also agree that all measures implementing Paris commitments shall not constitute such a breach. This would be a stronger assurance against jurisdictional overreach than simply having a court of one of the Parties giving a unilateral interpretation of the agreement.
On the other hand, the difficulty with this approach is that it may face the same level of opposition and creative interpretation by the investment arbitration industry as under the North American Free Trade Agreement's (NAFTA) Free Trade Commission's Notes of Interpretation of Certain Chapter 11 Provisions. 71 Strong wording may be considered a de facto amendment of CETA, and weaker wording opens the door for alternative interpretations.

In addition, the Parties to CETA could agree to complete the roster of tribunal members with individuals who are likely to take a very deferential view of regulatory power and a strict interpretation of investor rights. This selection and appointment process for the ICS and the future Multilateral Investment Court is currently, and not entirely surprisingly, one of the main areas of interest of the investment arbitration community. 72 There is, however, little evidence to suggest that the Parties to CETA are committed to such an outcome, both in the CETA text and in terms of commitments by the European Commission and the Council in particular. Nor did the ECJ's analysis in Opinion 1/17 go beyond a formal vetting of independence requirements in CETA. 73

The CETA prescribes both the procedure for the selection of these tribunal members as well as the qualifications tribunal members must have. In terms of procedure, the CETA Joint Committee is responsible for taking the decision to appoint the 15 Members of the Tribunal. This decision is taken by 'mutual consent' by the CETA Joint Committee, which consists of representatives of the EU and Canada. The EU will in principle be represented by the Commissioner responsible for trade and will likely need a mandate from the Council to take a position within the Joint Committee. 74 If such a decision cannot be taken, it will fall upon the ICSID Secretary-General to appoint tribunal members for a particular case. Thus, in case of disagreement, a major investment arbitration institution will be responsible for the appointment. The nomination and selection process itself is not specified by CETA. The Council and the Commission, however, have indicated that 'candidate European judges [sic] will be nominated by the Member States, which will also participate in the assessment of candidates'. 75 Moreover, both institutions are committed to a process whereby the 'richness of European legal traditions' is reflected. This means that from the perspective of the EU five out of 15 tribunal members will be nominated by the EU Member States, but it is not clear how the other ten tribunal members will be nominated. In any event, it is clear that the government of the other Party, in the case of CETA Canada, will have to agree to any nominations from the EU side. This same process is likely to be repeated with other countries committed to having the ICS in an agreement with the EU, such as Vietnam.

Under CETA it is 'desirable' that tribunal members 'have expertise, in particular, in international investment law, in international trade law and the resolution of disputes arising under international investment or international trade agreements'. This prescription makes it likely that the tribunal members appointed will come from the very same legal community that has inspired public opposition to the system of investment arbitration. The Commission and the Council have in this sense merely emphasised that they seek tribunal members on the basis of 'the highest degree of competence' and 'impartiality' of prospective tribunal members.
Nonetheless, even within the world of investment arbitration there are differences in approach between investment arbitrators. An interesting empirical study conducted by Van Harten found one individual to be the leading contributor to restrictive interpretation of investment agreements and a small group of arbitrators to be the leading contributors to expansive interpretation of investment agreements in the period 1990 to May 2010. 76 Notably, Gabrielle Kaufmann-Kohler was among the arbitrators favouring an expansive interpretation of investment agreements, as were several arbitrators from EU Member States. While the study is not strictly concerned with regulatory chill as such, it does provide some guidance and inspiration as to how to select and appoint arbitrators with a more favourable view of regulatory autonomy than other arbitrators within the investment law community.

Extra-EU BITs

Opinion 1/17 answers the legal question whether and under what conditions an investment agreement concluded between the EU and a third state is compatible with EU law. A year earlier, the ECJ also had the opportunity to clarify the compatibility of investment arbitration provisions in investment agreements between Member States. In Achmea, and in contrast to Opinion 1/17, the ECJ found such provisions incompatible with EU law. 77 However, it did not find these provisions incompatible with EU law because investment claims would undermine the capacity of EU institutions to operate autonomously. The ECJ's reasoning was more straightforward, taking issue with the ability of such investment tribunals to resolve disputes which may involve questions of EU law.

The outstanding question after Opinion 1/17 and Achmea is the extent to which investment agreements concluded between the Member States and third countries are compatible with EU law. This more general question will no doubt attract considerable scholarship, but for the purposes of this article the more pertinent question is the relevance of the concept of regulatory autonomy specifically for the compatibility of such agreements with EU law. In other words, is it possible that the ECJ's reasoning in paragraphs 137 to 161 could result in the ECJ declaring both existing and future investment agreements by Member States incompatible with EU law? It is certainly not completely inconceivable that the ECJ may be faced with such a question. Such agreements have already been the subject of infringement cases brought by the Commission, and Article 9(1)(a) of the current Grandfathering Regulation 1219/2012 specifically requires the Commission to only authorise negotiations of a new investment agreement between a Member State and a third country if that agreement would not 'be in conflict with Union law'. 78 An alternative route to the ECJ may come in the form of a preliminary reference from a Member State court, either in the course of annulment or enforcement proceedings of awards against a Member State or in some other less orthodox manner. The vast majority of these agreements do not contain the exact same language for their substantive provisions as CETA, nor do they contain the provisions the ECJ referred to in Opinion 1/17 that seek to curtail an expansive interpretation of CETA's investor rights.
Without these formal guarantees, one could argue that the scope of investor rights in those agreements is not expressly restricted and that therefore a Member State has not ensured that the tribunals in question have no jurisdiction to call into question the choices democratically made in relation to public interests. It is important to keep in mind, though, that the focus of the Court in Opinion 1/17 is very much on the capacity of the Union, and not the Member States, to operate autonomously within its 'unique constitutional framework'. For the Court, the autonomy of the EU's legal order means that the EU institutions must be free to determine the level of protection of a public interest. Only where repeated awards by an investment tribunal would lead to an abandonment of that level of protection of a public interest by the EU institutions would there be an adverse effect on the EU legal order. Thus it is likely that the Court would deem it necessary to consider whether the Member State in question had been implementing EU law in some way and, because of the award, had to abandon the level of protection sought by the EU institutions.

Member States do play an essential role in achieving the levels of protection set by EU institutions, even if they are not responsible for setting the EU's levels of protection. Member States implement EU law and are responsible for ensuring its full effectiveness. It is not inconceivable that there are links between investment disputes brought against Member States and levels of protection set by the EU institutions. Consider, for instance, the claim Uniper is preparing against the Netherlands under the ECT in respect of the government's decision to restrict the use of coal. This restriction 'relates' to the level of protection against climate change set by the EU and its Member States under the Paris Agreement and in secondary EU climate change legislation. However, if the ECJ insists that the potential future award itself must subsequently result in an abandonment of the level of protection by the Netherlands as set by the EU, there may be room to argue that the award or the agreement is still compatible with the Treaties. The ECJ's initial test in Opinion 1/17 therefore does not answer the question whether an award against a Member State or an extra-EU investment agreement itself can be successfully challenged, but merely raises the question.

The Multilateral Investment Court

On 20 March 2018 the Council authorised the Commission to negotiate a convention establishing a multilateral court for the settlement of investment disputes, commonly referred to as the Multilateral Investment Court (MIC). 79 The goal of this convention would be to replace the current ad hoc system of dispute settlement in international investment law with a permanent body to settle investment disputes. The convention's goal is therefore to replace existing investment arbitration mechanisms in international investment agreements as well as to provide for a body to settle investment disputes for future agreements by both the EU and the Member States. The EU expects these negotiations to be carried out in the context of the UNCITRAL discussions on possible reforms of ISDS. The mandate of the Commission is entirely procedural in nature. The EU aims to establish an institution that is permanent and that contains an appeal mechanism. Furthermore, the agreement should provide rules that would guarantee the independence of the judges of the MIC and include rules on the transparency of proceedings.
The EU also aims to make the MIC accessible and effective for businesses by including in the mandate provisions on supporting small and medium-sized enterprises (SMEs) and considering it 'vital' that the agreement contains an effective enforcement mechanism. The only rather limited aspect of the mandate addressing civil society concerns over regulatory chill is the inclusion of 'appropriate procedural safeguards, including provisions against frivolous claims'. The provision against frivolous claims in CETA Article 8.32 was not part of the Court's analysis of regulatory autonomy in Opinion 1/17. 80 This provision allows respondents to file a reasoned objection against a claim before a CETA Tribunal that it is manifestly without legal merit. The tribunal is then required to assess this objection before proceeding with the case itself. CETA does not provide a definition of what a claim manifestly without legal merit consists of, but an expansive interpretation of the provision could restrict the usefulness of CETA for investors and at least partially alleviate concerns over regulatory chill.

Opinion 1/17 does pose a challenge for the establishment of the MIC because of the link the ECJ has made between the substance and the procedure of investment agreements. The ECJ found that by expressly restricting the scope of the substantive provisions of CETA (e.g. the fair and equitable treatment, national treatment, and expropriation standards) 'the Parties have taken care to ensure that those tribunals have no jurisdiction to call into question the choices democratically made within a Party relating to, inter alia, the level of protection' of various public interests. Therefore, there was no adverse effect on the autonomy of the EU legal order. However, the Commission's mandate for the MIC does not contain any instructions to (re)negotiate the substantive provisions of investment agreements. In principle, negotiating an agreement that does not contain any substantive provisions at all does not pose a challenge for the regulatory autonomy of the EU. After all, without substantive provisions no claims can be brought before the MIC and therefore there will be no awards that may lead the EU institutions to abandon their level of protection of a particular public interest. However, the mandate does require the MIC to be linked to existing agreements of the EU. Point 6 of the mandate states:

The Convention should allow the Union to bring agreements to which the Union is or will be a party to under the jurisdiction of the multilateral court. Consequently, the Union should be in a position to become a Party to the Convention and the provisions of the Convention should be drafted in a way which allows their effective use by the European Union. 81

For the EU this means that in practice the ECT would need to be redrafted in terms of substance if the MIC were to have jurisdiction to hear disputes against the EU arising from the ECT. The ECT is currently the only agreement to which the EU is party that contains ISDS.

79 Council Negotiating directives for a Convention establishing a multilateral court for the settlement of investment disputes of 20 March 2018 <http://data.consilium.europa.eu/doc/document/ST-12981-2017-ADD-1-DCL-1/en/pdf> accessed 11 October 2019.

80 This provision that allows defending Parties to argue that a claim is manifestly without legal merit can be found in Section F of Chapter Eight of CETA, whereas the Court's analysis on regulatory autonomy is confined to Sections C and D of Chapter Eight.
The ECT's substantive provisions do not contain the provisions restricting the scope of investor rights the ECJ referred to in Opinion 1/17. 82 The Commission's mandate to renegotiate the ECT suggests that the EU is seeking to accommodate this issue. 83

Conclusion

Opinion 1/17 is a vindication of the Commission's stance that CETA will have no effect on the ability of governments to regulate in the public interest. The ECJ refers to the clauses introduced in CETA to restrict the scope of investor rights in finding that CETA will have no adverse effect on the autonomy of the EU's institutional framework to set the level of protection of a non-exhaustive list of public interests. Whether the ICS's and the EU's current approach to investment standards will actually preserve in practice the autonomy of the EU institutions and that of third states is, however, not up to the ECJ. Government authorities and tribunal members will have to grapple with this question when faced with potential regulatory action that may constrain the freedom of foreign investors to conduct business. Especially where this impact may become severe, such as in the case of climate change, the EU's current approach will be put to the test.

If the Paris goals of holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C above pre-industrial levels are to be achieved, significant and increasingly radical regulatory changes are needed, as the annual amounts of GHG emissions increase rather than decrease. At the June 2019 G20 meeting UN Secretary-General Guterres called for 'a much stronger commitment' by G20 leaders in order to 'rescue the planet'. 84 A few months later, at the UN Climate Action Summit, Guterres explicitly called for regulatory action against the fossil fuel industry. 85 At the same time, market forces indicate no significant changes to the current global energy system. Global investment in fossil fuels in 2018 increased, whereas investment in renewables dropped. In 2018, US$304 billion was invested in renewables, whereas a total of US$935 billion was invested in fossil fuels. The International Energy Agency states in its World Energy Investment 2019 report that:

there are few signs in the data of a major reallocation of capital required to bring investment in line with the Paris Agreement and other sustainable development goals. Even as costs fall in some areas, investment activity in low-carbon supply and demand is stalling, in part due to insufficient policy focus to address persistent risks. 86

The European Commission's President Ursula von der Leyen has nonetheless promised Europe a 'Green Deal', which will include 'the first European Climate Law to enshrine the 2050 climate neutrality target into law' and 40% emissions reductions by 2030. Significant regulatory changes will be necessary to achieve such a transition, and such changes will involve distributional choices and consequences. Opinion 1/17 suggests that CETA will have no bearing on such choices. For the sake of future generations, hopefully the ECJ will be proven right.

Declarations and conflict of interests

The author has worked in a previous capacity for ClientEarth, a non-profit environmental law organisation dedicated to the protection of the environment. In this capacity, the author advocated with Member States and the European Parliament for a request for an Opinion on the compatibility of CETA's Investment Court System with the EU Treaties.
86 International Energy Agency, World Energy Investment (May 2019) 6.
THE SYSTEM OF MARKETING PLANNING AT THE ENTERPRISE AND ITS FORMATION PRINCIPLES

The article deals with the nature of marketing planning at the enterprise and its features. The authors study the methodical approach to marketing planning, taking into account the situation at the domestic market. The authors offer a systematization of the formation principles of the marketing planning system at the enterprise.

Introduction

Today there are many methodological approaches to the definition of marketing planning at the enterprise. Summarizing the foreign and domestic literature in the field of marketing planning, it is possible to identify four main areas in which it is examined: from the position of its content, in terms of decision-making, as a process of development and application of strategy, and as a part of further planning. At the same time, more attention should be paid to the study of this system, taking into account the situation at the domestic market and the principles of its formation.

Analysis of the Marketing Planning System

Marketing planning is a managerial process of creating and maintaining a fit between the goals of the enterprise and its potential in the processes of market activity. It exists in order to develop a clear program of action that allows one to control the speed, consistency and results of changes with the aim of obtaining the desired results within a certain period. The opposite of marketing planning is improvisation, which is based on unscheduled, spontaneous, intuitive decisions.

Marketing planning should be considered the process by which business leaders predict the future and take the necessary steps to achieve that future. They should examine the activities of the company to determine its goals and their changes, the resources needed to achieve them, and the policies on acquisition and use of these resources.

On the other hand, marketing planning as a process is a sequence of interdependent measures for the solution of existing problems. Accordingly, one can distinguish the following stages of marketing planning:

-defining the purpose of planning (which in future allows one to determine the functions of planning and the criteria of decision-making, and to organize a system of control);

-problem analysis (defining the existing and desired end situations as well as the basic problems of implementation and planning);

-search for alternatives (defining the available opportunities to solve the existing problems);

-forecasting (forming ideas on the future development of the most important indicators of marketing activities and the expected behavior of competitors and consumers);

-estimation (selection of the alternatives adequate to the set goal);

-decision and adoption of the planned tasks (developing a clear marketing plan: recommended or mandatory for execution).

The main tasks of marketing planning are the following:

-bringing the potential of the company into accordance with the requirements of consumers at the selected target markets;

-optimal integration of all types and directions of the company's marketing activity;

-the definition and justification of the list of marketing activities;

-focusing the marketing actions on who will perform them, where, how and when.

Depending on their duration (the period for which they are developed), marketing plans are divided into short-term (annual), medium-term (1-3 years) and long-term ones (3 years or more) (Table 1).
Table 1. Characteristics of marketing plans depending on their duration.

Depending on their extent, marketing plans are divided into product plans (for individual products and product groups of the company), plans for the entire range of products of the enterprise, and plans forming a part (section) of the general economic plan. According to their development, they are divided into plans developed on the "bottom-up" principle (on the basis of the information provided to the workers of the marketing department by the other departments of the enterprise) and on the "top-down" principle (planned activities, managed and controlled centrally).

According to their object, marketing plans are divided into corporate (general corporate), divisional (individual units), business (individual activities) and product plans (specific groups or types of products). As for their content, they are divided into strategic (looking for new opportunities and goods production), tactical (creating conditions for already known opportunities and products) and operational ones (implementation of specific opportunities). According to their subject, they are formed as target plans (the definition of common strategic, tactical and operational goals and constraints), object plans (personnel planning, information, advertising, finance, etc.), software plans (creation of prerequisites for the implementation of remedies) and procedure plans (planning of specific actions, for example, sales of products).

The system of marketing planning exists to define the main objectives of the company and is focused on the definition of the planned end results, based on the tools and methods to achieve the goals and provide the necessary resources. Moreover, it is the process of making marketing management decisions. The result is a set of actions and decisions of management that leads to the development of marketing strategies to achieve the company's goals.

In the process of marketing planning formation the following should be done:

-approve the organizational structure relative to the strategic development;

-identify the favourable and threatening external factors for the business;

-prepare a project plan to estimate the advantages and disadvantages of the enterprise;

-approve the main line of development, against which various strategies can be tested;

-monitor the trends that may prove to be vital for the process of products' sale at the market;

-develop short-term solutions in the framework of the strategic plan of marketing activities.

The major functions of marketing planning in a market economy are the modelling of future changes in the marketing environment from the point of view of the desired state and the coordination of all system elements in order to achieve this desired state.

Summarizing the scientific data, it is possible to define the following tasks of marketing planning:

-formation of the organization's goals and activities;

-objective identification of the complex trends of management;

-analysis of the enterprise potential and determination of the level of resources;

-identification of alternative ways of development;

-identification of problems requiring strategic decision-making;

-prediction of possible changes in the external environment and adjustment to them by developing an optimal strategy of the enterprise.

Summarizing the goals, objects and functions, we can conclude that marketing planning is a specific kind of practical activity of the enterprise and of the organizational system.
Taking into account the above, the system of marketing planning should be understood as a system of governance in which strategic decisions are made in sequence, based on the data of the information systems, within exactly specified subsystems of organizational support and management. The system of marketing planning is considered to be a complex which includes several subsystems:

-the system of plans;

-the planning process;

-the subsystem of decision-making;

-the subsystem of organizational support;

-the subsystem of strategic planning control.

A modern approach to marketing planning involves the interrelationship of planning with the other planning functions and the necessity of examining these linkages when designing a planning system, an information system and other support systems.

Summarizing the results of the theoretical research, we reveal the organizational basis of marketing planning at the enterprise in the strategic planning system. The strategic planning system at the enterprise is a set of organizational and economic methods and techniques aimed at solving the problems of the enterprise's adaptation to the external environment.

In the system of strategic marketing planning it is proposed to allocate the following six subsystems: information support; forecasting options for the development of external events; the evaluation and selection of strategic decisions; the organizational and methodic maintenance of the planning process; the system of plans of the company; and the evaluation of the effectiveness of the strategic plans' implementation. The result is the proposed system of marketing planning at the enterprise (Fig. 1).

One of the basic elements of marketing planning is the information subsystem. In this subsystem one should focus on the data obtained in the analysis of the external environment, which affect all aspects of the company (the status and prospects of the market, product and sector, the performance of competitors, main suppliers, etc.).

The forecasting subsystem is intended to describe the possible variants of the development of the external situation and to construct alternative scenarios in case the conditions of the enterprise's activity change. The building of a realistic forecast is ensured by the completeness and accuracy of the data received from the information systems.

Strategic marketing planning is inextricably linked to the evaluation and selection of strategic decisions. It is in this subsystem that the goals and strategies of the enterprise's development are formed, the possibility of their implementation is evaluated, problems are identified and the optimal development strategy is chosen.

The organizational and methodic maintenance of the marketing planning process is a formation element of the strategic planning procedure. It contains: the definition of the state and the allocation of functions between the personnel involved in the planning process; the development and approval of the forms of planning documents; the definition of the planning period; and the scheduling of the planned works.
The system of marketing plans is the most important subsystem, because it is the result of the strategic planning process. The necessity of a system of interrelated plans follows from the fact that strategic planning is very complex and requires a comprehensive methodological approach. A single plan, or a simple hierarchy of multiple plans, could not ensure the realization of the objectives of a large engineering enterprise. It should instead be a linked system of interlocking plans that reflects all aspects of the problems confronting the enterprise.

The evaluation system of the strategic marketing plans' implementation is necessary to monitor the achievement of the planned objectives or the desired state. In this subsystem we should display the criteria of strategic planning efficiency and provide measures to regulate deviations in the planning process with the aim of obtaining the intended results.

The proposed approach to the composition and content of the elements of marketing planning at the company gives a more complete coverage of all the subsystems that ensure the effectiveness of its functioning.

To summarize, it can be noted that the main purpose of marketing planning is to identify the optimal variant of all the possible alternative ways of the enterprise's development in the future. It is an organizational and economic system providing a continuous decision-making process, in which:

-the goals and objectives of the company are set and specified in time;

-the strategies to achieve them are defined;

-the detailed plans, reflecting the different aspects of economic activity, are developed.

The process of marketing planning is based on several principles, i.e., the rules that should be followed in its implementation. All the principles are divided into the following three groups:

-the universal ones, which include marketing orientation, consistency, complexity, the continuity of the process, being scientific, normative and situational, having the administrative-behavioural approach, etc.;

-the general ones, which include the creation of a single planning system, continuity, flexibility, integration and coordination of plans, providing feedback on the planning system, economic balance and validity of plans to ensure the achievement of objectives, etc.;

-the particular ones, covering the consistency of the strategic and operational (tactical) planning, the ranking of the strategic planning objects according to their importance, the consistency of the plan with the parameters of the environment, variations of the plan, the adequacy of the planned performance, risk estimation, etc.

Plans should be adjusted in accordance with the changing internal and external conditions of the enterprise or should be developed anew. In accordance with this principle, planning is not considered to be an isolated act but a constantly iterative process. The principle of continuity requires all plans to be developed on the basis of perspectives, because they are the basis for the preparation of plans in the future. Subsequent plans should be based on the previous ones, considering the results of their performance. Continuous planning allows one to implement the principle of flexibility, implying the possibility of constant adjustments to earlier decisions or of reviewing them at any time according to circumstances. Plans are coordinated between divisions (horizontally) and integrated at different levels (vertically).
The focus of marketing plans is aimed at the rational use of the enterprise's resources to increase production efficiency and maximize profits.

The principle of the leading links and the priority of their implementation means that the company always selects the leading links on whose realization its business success depends and strives to implement them first. The choice of the leading links should be based on a thorough analysis of the state of the enterprise's affairs, and it is conducted only by experienced managers. A distinctive feature is the integrated approach, being an attempt to integrate the planning and management processes.

Specific principles of the formation of the marketing planning system and their respective tasks include the consistency of strategic and current planning, the ranking of the strategic planning objects according to their importance, the consistency of the plan with the parameters of the environment, variations of the plan, the adequacy of the targets, and risk estimation.

It should be borne in mind that one of the most important principles of marketing planning is the creation of a single planning system that provides the link between the strategic and tactical (current) planning and the continuity of the planning process.

Strategic marketing planning should be considered a process of creation and practical implementation of the programme of the enterprise's activity. Its purpose is to ensure the effective allocation of resources to reach the target market. Two approaches to the allocation of resources are known in this respect: the implementation of the function of return on sales (determined by the ratio of costs and results of marketing actions), and the tactical direction of marketing activities in a specific market situation.

The most important section of the enterprise's tactical plan is the production programme, or the plan of production and sales of goods, which affects the rhythm of supply. It defines the necessary volume of production during the planning period, corresponding to the nomenclature, assortment and quality requirements of the sales plan. The production programme sets the tasks on commissioning new production capacities, the need for raw material resources, staffing and transport. This section of the plan is closely linked with the plan of labour and wages, the plan of production costs, profit and profitability, and the financial plan.

To select rational ways of enterprise planning, it is necessary to anticipate situations in order to influence them, directing the enterprise's economic activity at the achievement of its goals, and to take into account the risk of possible deviations from the planned indicators. But as in any business or technological process, there is always the financial aspect that can influence the functioning of the enterprise, and it should be examined from the point of view of system analysis. The production itself, the resource supply of the production process, the choice of technologies, the realization of production taking into account the market conditions, and the planning of the financial and economic activities are the interrelated elements of the economic production and social system.
Statistics indicate a favourable growth trend of production in industry. In this regard, it is especially important to develop and implement a marketing strategy that makes it possible to use new opportunities and to reduce the uncertainty of the external environment. For Ukrainian enterprises, the main uncertainties are: competition from foreign countries, the rupture of production and economic ties, the loss of existing distribution channels, a significant reduction of raw material, fuel and energy opportunities, the decrease in production capacity, the slow turnover of capital, the lack of available financing and lending schemes, and the increasing adverse impact of social and political processes on society.

To summarize, we can conclude that the main distinctive feature of the present time is a constant increase of uncertainty in the external environment. This, as has already been mentioned, became the cause of strategic planning, marketing and management at enterprises. To take into account all the external factors and to achieve the necessary changes in organization and management, it is possible to formulate the main requirements for the system of strategic marketing planning:

1. The need for continuous collection, analysis, processing and classification of data which bring changes into all spheres of activity.

2. The creation of a mission, objectives and strategies of activities in accordance with the requirements of the external environment.

3. The establishment of rapid response systems for the detection of changes, in order to prevent negative impacts or, on the contrary, to use the opportunities.

As has been mentioned above, strategic marketing planning in its development has passed through several stages, varying significantly in content and form due to the changes in the above-stated conditions of production and sales. Considering the main trends in the development of strategic planning in international practice, let us characterize the causes of changes in the methodological approaches to the strategic planning of the enterprise.

It is known that in the beginning of the 1980s the interest in strategic planning decreased significantly. This occurred because the largest American firms, keen on strategic planning, were driven out of the leading positions of the world market by entrepreneurs from other countries who were more flexible in decision-making. In these conditions, in order to improve their competitiveness, the companies began to restructure, reduce costs, improve quality, reduce staff and re-equip.

The analysis of the reasons for strategic planning's failures at this stage was highlighted in the economic literature in the following way:

1. The imbalance of power and influence between the line managers and the planning departments (planning services almost completely took over the operational strategies).

2. A poorly developed mechanism for the practical implementation of strategic decisions (there was no organizational support).

3. Executives at different levels did not have proper professionalism in business (hence the reduction in the services of strategic marketing planning).

4. Current activities (crisis situations, etc.) reduced the attention to the implementation of strategic marketing planning.

5. The importance of the conditions and prerequisites of planning, which ensure the success of the implementation of plans, was not taken into account (or was not estimated sufficiently).
6. The relationship and the place of strategic objectives in the overall system of forecasts were not emphasized.

7. Marketing policy was poorly defined.

8. The level of uncertainty of decisions was not taken into account during planning.

9. Some managers could not assess the situation and identify the critical and limiting factors in making decisions.

10. The staff was psychologically and professionally unprepared.

In the beginning of the 1990s an increasing interest in strategic planning appeared. At that time, the managers of companies, consultants and the teaching staff of business schools considered the strategic problems of enterprise development to be the priorities of management and thought they would retain their paramount significance for the next five years.

The strategic planning of marketing at the present stage of economic development differs significantly from the previously accepted forms. The modern followers of strategic planning abandoned the semi-abstract, over-formalized models that differed from reality. They propose to transfer the functions of strategic marketing planning, previously focused exclusively on the top management, to the middle managers leading specific areas of production, where a special planning group is formed, usually consisting of young people with creativity and of experienced staff seeking to defend the gains achieved earlier. Moreover, in order to bring the process of marketing planning closer to the realities of today's market, it is recommended that professionals in the field of strategic planning be involved in the formation of the development strategies of major consumers and suppliers. This approach to the strategic planning of marketing is revolutionary, because previously the planning was limited only to the top managers and the most qualified professionals.

The expert planners of the new generation, particularly A. Slywotzky, the founder of a consulting firm, J. Ollila, the General Manager of Nokia Group, Lewis Platt, the Chairman of Hewlett-Packard Company, and others stress the need to take bold decisions in the development strategy of the company, not only to adapt to changes but also to anticipate them. In their opinion, the strategic approach has nothing to do with such a narrow task (compared to the basic one) as the increase of the occupied market share or of the current income.

The penetration of the ideas of strategic marketing planning into the economy of Ukraine took place in the 1980s, and the development stages of the strategic work in our country differ significantly from those described above. Four periods can be distinguished in the development of the strategic work: the administrative period, the period of conditional self-reliance, the adaptational period, and the period of orientation towards the external marketing strategy.

Our analysis of the strategic planning practice in the economy has allowed us to establish the main stages that reflect the characteristics of development and characterize the trend of change in the priority tasks to be solved in the scheduling and shaping of marketing planning at the enterprise.

Conclusions

It is established that the main purpose of marketing planning is to identify the optimal variant of all the possible alternative ways of the enterprise's development in the future. It is an organizational and economic system providing a continuous decision-making process.
One of the most important principles of marketing planning is the creation of a single planning system that provides the link between strategic and tactical (current) planning, i.e., a continuous planning process. It is proved that the basis of marketing planning at the enterprise is a system of marketing strategic planning, which is a set of organizational and economic methods and techniques aimed at solving the problems of the enterprise's adaptation to the external environment.

Fig. 1. The proposed system of marketing planning at the enterprise.
2018-12-27T13:54:48.539Z
2017-03-04T00:00:00.000
{ "year": 2017, "sha1": "93d8c65697ee164a35bb2a6e21f880a931b753f4", "oa_license": "CCBY", "oa_url": "https://nuife.org/index.php/pnap/article/download/166/165", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "93d8c65697ee164a35bb2a6e21f880a931b753f4", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
52263523
pes2o/s2orc
v3-fos-license
Purification and Characterization of a Novel CGTase from Alkaliphilic Bacillus flexus Isolated from Lonar Lake, India

Production and purification of cyclodextrin glycosyltransferase (CGTase) from alkaliphilic Bacillus flexus isolated from Lonar Lake, India, was investigated in the present study. Production was carried out using a medium containing starch, yeast extract, peptone and MgSO4·7H2O. The crude enzyme was collected by centrifugation and partially purified by the ammonium sulphate precipitation method. The partially purified enzyme was further purified by phenyl sepharose column chromatography. The enzyme obtained had a molecular weight of 77.58 kDa, as confirmed by SDS-PAGE and mass spectroscopy.

Introduction
Alkaliphilic microorganisms have attracted much interest in the past few decades because of their ability to produce extracellular enzymes that are active and stable at high pH values (Atanasova et al., 2009; Antranikian et al., 2005). The main natural habitats of alkaliphiles are alkaline environments. Naturally occurring alkaline environments, such as carbonate springs, alkaline soils, and soda lakes, are characterized by their high basic pH values (pH 8.0-11.0) due to the presence of high concentrations of sodium carbonate salts formed by evaporative concentration (Horikoshi, 1999; Van den Burg, 2003). Soda lakes are widely distributed all over the world; however, as a result of their inaccessibility, few have been explored from a microbiological point of view (Grant et al., 2000).

Cyclodextrin glycosyl transferase (CGTase, EC 2.4.1.19) is an important industrial enzyme, unique in its ability to convert starch and related glycans into non-reducing, cyclic malto-oligosaccharides called cyclodextrins (CDs) via a cyclization reaction, an intramolecular transglycosylation reaction (Biwer et al., 2002). Moreover, it is an important hydrolytic enzyme that carries out reversible intermolecular coupling and disproportionation of malto-oligosaccharides (Biwer, 2002; Savergave et al., 2008). CDs are non-reducing cyclic structures consisting mainly of 6, 7 or 8 glucose residues joined by α-(1,4) linkages, for α-, β- and γ-cyclodextrin, respectively. Among the three types of cyclodextrins, β-CD is of the highest interest due to the size of its non-polar cavity, which is suitable for encapsulating several guest molecules, and its low solubility in water, which facilitates its separation from the reaction mixture. Moreover, β-CD inclusion complexes are easily prepared and more stable (Otero-Espinar et al., 2010; Astray et al., 2009). CGTase is produced by species of Bacillus, Brevibacterium, Clostridium, Klebsiella and Micrococcus.
The production of CGTase became attractive only when alkaliphilic Bacillus species were introduced as production organisms (Biwer et al., 2002; Gawande et al., 1999). The various unit operations used in downstream processing to obtain pure protein constitute a large part of the production cost. Many reports suggest purification strategies based on adsorption of CGTase onto starch, which have the drawback that CGTase reacts with the starch, so an additional step to remove cyclodextrin is required (Leaver et al., 1987). In the present work, attempts were made to purify CGTase produced by alkaliphilic Bacillus flexus using phenyl sepharose column chromatography. The purity of the enzyme was then confirmed by SDS-PAGE and mass spectroscopy through determination of its molecular weight.

Materials and Methods
Strain: The strain used for production and purification of CGTase is an isolate from Lonar Lake, termed BI 56A, which was identified by 16S rRNA sequencing and, according to phylogenetic analysis, designated Bacillus flexus; the 16S rRNA sequence was submitted to NCBI GenBank under Accession No. JX419382 (Heydrickx et al., 2004). The organism, i.e. Bacillus flexus, was inoculated into 510 ml of inoculum medium, which was the same as the production medium, and enriched. After enrichment, 10% of the inoculum was transferred to 100 ml of production medium and incubated at 37 °C in a rotary shaker incubator for 24 h. At the end of the incubation period, the fermentation medium was centrifuged at 8000 rpm for 10 minutes at 4 °C. The supernatant collected was assayed for CGTase activity and used as crude enzyme for further study (Horikoshi et al., 1984).

Cyclization activity of CGTase enzyme
The cyclization activity of the CGTase enzyme sample from the Bacillus flexus isolate was determined by the phenolphthalein method (Goel and Nene, 1995). To 1.25 ml of 4.0% soluble starch, 0.25 ml of purified CGTase was added. The reaction mixture was incubated for 30 min at 60 °C. The reaction was stopped by boiling for 5 min, and 1.0 ml of the reaction mixture was incubated with 4.0 ml of phenolphthalein solution. The decrease in phenolphthalein absorption at 550 nm reflected the amount of CD in the reaction, which was quantified from a calibration curve; a numerical sketch of this unit calculation is given at the end of this article. One unit of activity was defined as the amount of enzyme able to produce 1 µmol of β-CD per minute under the assay conditions. CGTase activity on agar plates was monitored by pouring a mixture of methyl orange and phenolphthalein onto Horikoshi medium (Park et al., 1989) or LB plates in the presence of 1% soluble starch.

Purification of Enzyme
The crude enzyme was further purified by the ammonium sulphate saturation method, followed by phenyl sepharose column chromatography. The crude enzyme was subjected to ammonium sulphate precipitation: saturated ammonium sulphate solutions of different concentrations (30%, 50%, and 70%) were prepared and mixed with the mixture obtained earlier. The proteins present in the crude enzyme were allowed to precipitate by keeping the mixture in cold conditions. The precipitates were then centrifuged at 5000 rpm for 20 min; the pellets were collected, dissolved in phosphate buffer of pH 7.0, and used for further studies.

Phenyl Sepharose Chromatography
The partially purified enzyme mixture obtained after ammonium sulphate precipitation was used for a binding study, which was carried out at different ammonium sulphate concentrations from 0.8 M to 1.2 M.
A chromatography column (15 × 100 mm) was packed with phenyl sepharose and equilibrated with 25 mM Tris-HCl buffer, pH 7.0, containing 0.8 M to 1.2 M (NH4)2SO4, respectively. One ml of concentrated, partially purified enzyme mixture was supplemented with 1 M (NH4)2SO4 and loaded onto the equilibrated phenyl sepharose column. Elution was carried out by stepwise decrease in the ionic strength of (NH4)2SO4, ranging from 0.25 M to 0 M. Fractions were collected and analysed for CGTase activity and protein concentration.

SDS-PAGE
After the partial purification and fraction collection, the fraction showing the highest CGTase activity and protein concentration was selected, and the homogeneity of the enzyme in the eluted fraction was checked by SDS-PAGE on a vertical slab gel electrophoresis unit using a 7.5% acrylamide gel at a constant current of 30 mA for 2 h. The gel (8 cm × 12 cm) was run according to the method of Laemmli (Laemmli, 1970). SDS-PAGE was performed in order to check the homogeneity of the enzyme and to determine its molecular weight under denaturing conditions.

Mass Spectroscopy Analysis
The molecular weight and purity were confirmed by mass spectroscopy analysis. The fractions were sent to the Department of Proteomics, National Chemical Laboratory (NCL), Pune, for mass spectroscopy analysis, and the samples were analysed (Nomoto et al., 1986).

Results and Discussion
The purpose of this work was the development of a simple and effective process for purifying CGTase directly from culture broth. Microorganisms for screening were grown on media of identical composition. The CGTase-producing strain was selected according to the highest cyclization activity provided (per 1 ml of culture broth). Most bacterial strains are known to produce other amylolytic enzymes besides CGTases (Volkova et al., 2000). Thus, measurement of cyclizing and dextrinizing activity was conducted during the purification process.

Purification of the CGTase
Usually, gel filtration is used for CD removal from its affinity complex with CGTase (Larsen et al., 1998; Bovetto et al., 1992). However, the pretreatment of the applied material, for instance concentration and careful calculation, makes this step difficult. CGTase from alkaliphilic Bacillus flexus was purified by the ammonium sulphate saturation method followed by phenyl sepharose column chromatography; CGTase was found to elute at 0.085 M ammonium sulphate from the phenyl sepharose column. Different separation procedures have previously been applied for obtaining purified CGTases, and in most cases three or four purification steps were applied, including ultrafiltration, gel filtration, starch adsorption and ion exchange chromatography, or ammonium sulfate precipitation and two steps of ion exchange chromatography (Abdelnasser et al., 2012). Yim et al. purified CGTase using DEAE Sephadex A-50 followed by DEAE Sepharose CL-6B (Yim et al., 1997).

Estimation of Molecular Weight of the Enzyme
Some of the physical and chemical properties of the purified CGTase were identified. The purified enzyme obtained from the phenyl sepharose column showed a single band by SDS-PAGE, which confirms the homogeneity of the enzyme (Fig. 1). The molecular weight of CGTase was estimated as 77.58 kDa, which was confirmed by mass spectroscopy (Fig. 2). Most of the reported CGTases are monomeric in nature, with molecular weights between 60 and 110 kDa.
However, CGTases with lower molecular weights have also been reported, such as 33 kDa from Bacillus coagulans and 56 kDa from Bacillus sphaericus strain 41 (Abdelnasser et al., 2012). In conclusion, in this study we report the purification and characterization of CGTase from alkaliphilic Bacillus flexus isolated from Lonar Lake, India. Enzyme purification to homogeneity was achieved by phenyl sepharose column chromatography. Starch adsorption chromatography is one of the popular methods for the initial capture of CGTase, but it demands gel filtration for the separation of the CD formed during elution of the enzyme from the column. Though the rate of purification is slow, we have purified the enzyme successfully without formation of CD in the column. The CGTase reported in this study has a molecular weight of 77.58 kDa, as confirmed by SDS-PAGE and mass spectroscopy. Thus, we suggest that CGTase purified with the proposed chromatographic scheme is of benefit in comparison with crude enzyme in starch hydrolysis processes.
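As referenced in the Materials and Methods, a minimal numerical sketch of the unit calculation for the phenolphthalein assay follows. The calibration slope and the absorbance readings are hypothetical placeholders, not values from this study; a real calculation would use one's own β-CD standard curve.

```python
# Sketch of the CGTase cyclization-activity calculation (phenolphthalein assay).
# The calibration slope/intercept and absorbance readings below are hypothetical
# placeholders; real values come from a beta-CD standard curve at 550 nm.

def beta_cd_umol(delta_a550, slope=0.25, intercept=0.0):
    """Convert the decrease in phenolphthalein absorbance at 550 nm into
    micromoles of beta-CD via an assumed linear calibration curve."""
    return (delta_a550 - intercept) / slope

def activity_units_per_ml(a550_blank, a550_sample, incubation_min=30.0,
                          enzyme_volume_ml=0.25):
    """One unit = 1 umol beta-CD formed per minute under assay conditions."""
    delta_a = a550_blank - a550_sample        # drop in absorbance = CD formed
    umol_cd = beta_cd_umol(delta_a)
    return umol_cd / incubation_min / enzyme_volume_ml

print(activity_units_per_ml(a550_blank=0.90, a550_sample=0.55))
```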
2019-04-08T13:06:19.749Z
2016-07-15T00:00:00.000
{ "year": 2016, "sha1": "5f12a36f6e50830fdb86209cbe9ae06321629532", "oa_license": null, "oa_url": "https://www.ijcmas.com/5-7-2016/A.%20Shinde%20Vinod%20and%20S.M.%20More.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "38c70003e42c46090e18ae27ae0f01be523fdd84", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
118605747
pes2o/s2orc
v3-fos-license
Tentative detection of ethylene glycol toward W51/e2 and G34.3+0.2 How complex organic - and potentially prebiotic - molecules are formed in regions of low- and high-mass star formation remains a central question in astrochemistry. In particular, with just a few sources studied in detail, it is unclear what role environment plays in complex molecule formation. In this light, a comparison of relative abundances of related species between sources might be useful to explain observed differences. We seek to measure the relative abundance between three important complex organic molecules, ethylene glycol ((CH$_2$OH)$_2$), glycolaldehyde (CH$_2$OHCHO) and methyl formate (HCOOCH$_3$), toward high-mass protostars and thereby provide additional constraints on their formation pathways. We use IRAM 30-m single-dish observations of the three species toward two high-mass star-forming regions - W51/e2 and G34.3+0.2 - and report a tentative detection of (CH$_2$OH)$_2$ toward both sources. Assuming that (CH$_2$OH)$_2$, CH$_2$OHCHO and HCOOCH$_3$ spatially coexist, relative abundance ratios, HCOOCH$_3$/(CH$_2$OH)$_2$, of 31 and 35 are derived for G34.3+0.2 and W51/e2, respectively. CH$_2$OHCHO is not detected, but the data provide lower limits to the HCOOCH$_3$/CH$_2$OHCHO abundance ratios of $\ge$193 for G34.3+0.2 and $\ge$550 for W51/e2. A comparison of these results to measurements from various sources in the literature indicates that the source luminosities may be correlated with the HCOOCH$_3$/(CH$_2$OH)$_2$ and HCOOCH$_3$/CH$_2$OHCHO ratios. This apparent correlation may be a consequence of the relative timescales each source spends in different temperature ranges during its evolution. Furthermore, we obtain lower limits to the ratio of (CH$_2$OH)$_2$/CH$_2$OHCHO for G34.3+0.2 ($\ge$6) and W51/e2 ($\ge$16). This result confirms that a high (CH$_2$OH)$_2$/CH$_2$OHCHO abundance ratio is not a specific property of comets, as previously speculated.

Introduction
A central question in astrochemistry is how, where, and when complex organic molecules form. Although the total number of detected molecules in the interstellar medium (ISM) continuously increases (e.g. Herbst & van Dishoeck 2009; see also http://www.astro.uni-koeln.de/cdms/molecules), the answers to the question above still remain unclear. To date there is no consensus on how complex organic molecules form in dense regions of the ISM, despite the increasing number of detections. One promising suggestion was that warm gas-phase chemistry, following evaporation of simple ices, could be a primary formation pathway (e.g. Millar et al. 1991; Charnley et al. 1992). However, more recent laboratory experiments and chemical modeling have shown that this mechanism is not effective enough to account for the observed abundances (e.g. Geppert et al. 2006). An alternative formation mechanism involves UV-induced radicals. Garrod et al. (2008) propose that radicals, during the warm-up phase, can migrate on the grain surfaces and form complex species which are then released into the gas phase at higher temperatures. The initial ice composition and the amount of UV radiation have also proved to play an important part in the formation process (Öberg et al. 2009). In order to determine the formation pathways of COMs, it is useful to determine abundance ratios between different related species, as these ratios, in comparison with the predicted abundance ratios from chemical models, can provide constraints on the formation processes.
Indeed, variations in the abundance profiles can reflect the physical and chemical conditions that are occurring. It is therefore important to observe these species in different environments. Some of the simplest species in this context include the oxygen-bearing complex organic molecules associated with glycolaldehyde, CH2OHCHO, including its isomer methyl formate (HCOOCH3 or CH3OCHO) and the reduced alcohol version of CH2OHCHO, ethylene glycol, (CH2OH)2 (also commonly known as anti-freeze). By constraining the relative abundances of these species in different environments, the hope is to be able to explore, e.g., the importance of initial chemical conditions, temperature and irradiation in their formation, for comparison with laboratory experiments (e.g. Öberg et al. 2009) and as input for sophisticated chemical models (e.g. Garrod et al. 2008). For example, in their laboratory experiments Öberg et al. (2009) show that the relative abundances of HCOOCH3 to CH2OHCHO and (CH2OH)2 are strongly dependent on both the ice temperature and the exact ice composition in terms of the relative amounts of CO and CH3OH.

Observations
The observations were performed with the IRAM 30-m telescope at Pico Veleta, Spain, on December 13 and 14, 2012 for W51/e2 and G34.3+0.2, respectively. The coordinates of the phase tracking center used for the two sources were α(J2000) = 19h23m43.9s, δ(J2000) = +14°30′34.8″ for W51/e2 and α(J2000) = 18h53m18.6s, δ(J2000) = +01°14′58.0″ for G34.3+0.2. The observations were performed in position-switching mode using [−600″, 0″] as reference for the OFF positions. The spectral setup was chosen to target (CH2OH)2 and CH2OHCHO. The EMIR receiver was used in dual-band polarisation (E090/E230) in connection with i) the 200 kHz Fourier transform spectrometer (FTS) back-end in the frequency ranges 99.70-106.3 GHz and 238.2-246.0 GHz for the E090 and E230 bands, respectively, and ii) the WILMA back-end in the frequency ranges 99.88-103.6 GHz and 238.4-242.1 GHz for the E090 and E230 bands, respectively. Data reduction was performed using the Continuum and Line Analysis Single-dish Software (CLASS; http://www.iram.fr/IRAMFR/GILDAS). The resulting FTS spectra were smoothed to a spectral resolution of 1.15 km s⁻¹ for the E090 band and 1.22 km s⁻¹ for the E230 band, while the WILMA spectra were smoothed to 5.88 km s⁻¹ and 2.49 km s⁻¹ for the E090 and E230 bands, respectively. Spectra resulting from the FTS back-end presented a standing-wave pattern, and the WILMA observations were therefore used as a sanity check for the FTS data and to confirm the detections made in the FTS observations for the lines where the frequency coverage of the two sets of observations matches. The standing wave present in the FTS observations was removed using a fast Fourier transform in the data reduction. Throughout this paper, the intensity is given as the main-beam brightness temperature (Tmb), which is defined as Tmb = (Feff/Beff) Ta, where Ta is the antenna temperature, Feff is the forward efficiency and Beff is the beam efficiency. The values used were Feff = 0.94 and 0.92 and Beff = 0.78 and 0.58 for 1 mm and 3 mm, respectively. The half-power beam sizes are ∼10″ and ∼24″ for the observations at 1 mm and 3 mm, respectively. The reduced data-sets are available for download at http://youngstars.nbi.dk/projects/HighMass_organics/index.html.
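As an illustration of the conversion just defined, here is a minimal sketch using the efficiencies quoted above; the example antenna temperature is a hypothetical value, not a measurement from this paper.

```python
# Main-beam brightness temperature from antenna temperature: Tmb = (Feff/Beff)*Ta.
# Efficiency values as quoted in the text for the two observing bands.
EFFICIENCIES = {
    "1mm": {"Feff": 0.94, "Beff": 0.78},
    "3mm": {"Feff": 0.92, "Beff": 0.58},
}

def t_mb(t_a_kelvin: float, band: str) -> float:
    """Convert antenna temperature (K) to main-beam brightness temperature (K)."""
    e = EFFICIENCIES[band]
    return e["Feff"] / e["Beff"] * t_a_kelvin

print(t_mb(0.10, "3mm"))  # hypothetical Ta = 0.10 K -> Tmb ~ 0.16 K
```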
Analysis and Results
Spectra from both sources show a rich forest of lines characteristic of high-mass sources. A total of 21 and 19 lines have peak temperatures above 5 K for W51/e2 and G34.3+0.2, respectively; the strongest line in both spectra is the CS 5-4 transition at 244935 MHz. It is important to note that a remnant oscillation remains in the baseline even after the removal of the standing wave in the spectra. This increases the RMS noise of the spectra, which complicates the analysis of faint lines. Therefore, in addition to the global baseline subtraction, an additional local zero- or first-order baseline subtraction was performed before making Gaussian fits of the (CH2OH)2 lines. This additional baseline subtraction was applied in a range of ±100 km s⁻¹ around each line. As for the data reduction, CLASS was used for the additional baseline subtraction as well as for the Gaussian profile fits to the detected lines. The resulting RMS of the local baseline is ∼60 mK for the 1 mm observations and ∼7 mK for the 3 mm observations. In order to ensure proper line identification, we have checked the observed lines against entries in the Splatalogue database for astronomical spectroscopy. In addition, we have made a reference model in which we produced synthetic spectra for the line emission of common species (CH3OCH3, HCOOH, CH3CHO, CH3OH, C2H5OH and CH3CN) in order to visually exclude lines that are blended with any of these molecules. Table 1 lists spectroscopic parameters for the (CH2OH)2 transitions that can be excited in the observed frequency range. Transitions with log(Aul) < −5 for the 3 mm observations, and with log(Aul) < −4 and Eup > 300 K for the 1 mm observations, have been excluded from the table, as these transitions are predicted from the synthetic spectra to have peak intensities ≲ 0.02 K, i.e., not detectable. The spectroscopic data for (CH2OH)2 come from Christen & Müller (2003) and Christen et al. (1995) and are available from the CDMS database (Müller et al. 2001, 2005), while the spectroscopic data for HCOOCH3 and CH2OHCHO are from Ilyushin et al. (2009) and Carroll et al. (2010), respectively, available from the JPL database (Pickett et al. 1998). For the analysis, the (CH2OH)2 lines that are reasonably well separated and have a peak temperature above 3σ were selected. For W51/e2 we have included the lines at ∼100333 and ∼242656 MHz although they are partially blended, as it is possible to distinguish the (CH2OH)2 peak from the other peaks and perform a multiple Gaussian fit which includes all relevant peaks.

Table 2: Transitions of (CH2OH)2 observed toward W51/e2 and G34.3+0.2 with the IRAM 30-m telescope. (Notes: the table lists spectroscopic parameters and observed quantities from Gaussian fits to the detected (CH2OH)2 lines: integrated line intensity (∫Tmb dv), line position (VLSR), line width (Δv) and peak temperature (peak Tmb). The error on ∫Tmb dv is estimated at ∼30%, as determined by the observational uncertainty, while the errors on VLSR and Δv are ∼1.0 and ∼2.0 km s⁻¹, respectively. The line intensities have been added together for the hyperfine structure transitions, denoted by / in the quantum numbers.)

Figures 1 and 2 show a zoom-in of an area of ∼200 km s⁻¹, which corresponds to ∼160 MHz in the 1 mm observations and ∼70 MHz in the 3 mm observations, around each detected line in both sources after the local baseline subtraction.
Superimposed onto the observed spectra are the synthetic spectra of (CH2OH)2 as well as of the other species investigated here, to demonstrate that the chosen (CH2OH)2 lines are not blended. The fits to the entire observed spectra for both sources are shown in Figures B.1-B.8 in the appendix. Table A.1 in the appendix lists the estimated column densities of the molecules in the reference model. Table 2 lists the spectroscopic parameters and observed quantities from the fits of the (CH2OH)2 lines: the integrated line intensities (∫Tmb dv), position (VLSR), line width (Δv) and peak temperature (peak Tmb). A total of 35 and 52 well-separated HCOOCH3 lines have been detected above the 3σ level in the 1 mm and 3 mm data for W51/e2 and G34.3+0.2, respectively. For (CH2OH)2, 4 and 2 lines, toward W51/e2 and G34.3+0.2 respectively, are detected in the 1 mm observations, while only one line is detected at 3 mm toward W51/e2. Many other (CH2OH)2 lines as well as some potential CH2OHCHO lines are present in both data-sets, but they are either blended with other species or too faint to be properly detected above the 3σ limit.

Modeling
Assuming LTE and optically thin emission, the rotational diagram method can be used to determine the column density, Ntot, and the rotational temperature, Trot (Goldsmith & Langer 1999). The approach used in this work follows the formalism described by Goldsmith & Langer (1999) with the following assumptions:
1. (CH2OH)2, CH2OHCHO and HCOOCH3 are emitting from the same region and thus have the same source size, θsource.
2. (CH2OH)2, CH2OHCHO and HCOOCH3 are in LTE, which implies that the excitation temperature, Tex, is equal to the kinetic temperature, Tkin, for all three species. Using the rotational diagram method one obtains the rotational temperature, which in LTE satisfies Tkin = Trot = Tex.
3. The observations at 1 mm and 3 mm trace the same gas.
According to our assumptions, the source size and Trot derived from the HCOOCH3 analysis can be used to derive the column density of (CH2OH)2 and to set an upper limit on the column density of CH2OHCHO. A possible caveat is that the large beam size of our observations compared to the source sizes does not guarantee that the measured lines arise from species present in the same gas. Specifically, we obtain similar mean values of VLSR for (CH2OH)2 and HCOOCH3 in both sources (56.4±0.5 km s⁻¹ and 56.0±0.8 km s⁻¹ in W51/e2, along with 57.7±0.5 km s⁻¹ and 58.4±1.2 km s⁻¹ in G34.3+0.2, for (CH2OH)2 and HCOOCH3 respectively), so here the assumption appears reasonable. The same formalism of Goldsmith & Langer (1999) used to create a rotational diagram can also be used to generate a synthetic spectrum of the line emission of a specific molecule, using the source size, θsource, the rotational temperature, Trot, the line width, Δv, and the column density, Ntot, as input parameters. It is possible to correct for the optical depth of the line emission in case it deviates from being completely optically thin (Goldsmith & Langer 1999). A synthetic spectrum generated with input parameters derived from the rotational diagram therefore serves as a check for the self-consistency of the result. We use adopted source sizes from the literature and determine the rotational temperatures and column densities in the following analysis.
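To make the rotational-diagram step concrete, the following is a minimal numerical sketch of the method under the optically thin LTE assumptions above. The line list, partition function value and derived numbers are illustrative placeholders, not the measured values used in this paper, and beam-dilution and opacity corrections are omitted for brevity.

```python
import numpy as np

# Rotational-diagram sketch (Goldsmith & Langer 1999), assuming optically thin
# LTE emission: ln(N_u/g_u) = ln(N_tot/Q) - E_up/T_rot, with E_up in Kelvin.
k_B, h, c = 1.380649e-16, 6.62607015e-27, 2.99792458e10   # CGS units

def upper_level_column(w_int, freq_hz, a_ul):
    """N_u (cm^-2) from the integrated intensity W = Int T_mb dv (K cm/s)."""
    return 8.0 * np.pi * k_B * freq_hz**2 * w_int / (h * c**3 * a_ul)

# Hypothetical line list: (W [K km/s], freq [GHz], A_ul [s^-1], E_up [K], g_up)
lines = [(2.1, 100.3, 1.0e-5,  60.0, 21),
         (1.5, 242.7, 8.0e-5, 150.0, 45),
         (0.9, 244.3, 6.0e-5, 210.0, 49)]

x = np.array([e for *_, e, _ in lines])                    # upper-level energies
y = np.array([np.log(upper_level_column(w * 1e5, f * 1e9, a) / g)
              for w, f, a, _, g in lines])
slope, intercept = np.polyfit(x, y, 1)                     # straight-line fit
t_rot = -1.0 / slope                                       # K
q_rot = 1e4   # partition function at t_rot: placeholder, from CDMS/JPL tables
print(f"T_rot = {t_rot:.0f} K, N_tot = {q_rot * np.exp(intercept):.2e} cm^-2")
```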
The observed widths of the detected lines were determined from the Gaussian fits. For HCOOCH3 in W51/e2 we obtain mean values of Δv = 6.4 ± 0.8 km s⁻¹ in the 1 mm data and Δv = 6.1 ± 1.1 km s⁻¹ in the 3 mm data. In the same source, the mean line width of the (CH2OH)2 lines is 6.3 ± 0.9 km s⁻¹ in the 1 mm data, and the only detected value is 5.5 km s⁻¹ in the 3 mm data. For G34.3+0.2 we find Δv = 6.5 ± 1.5 km s⁻¹ in the 1 mm data and Δv = 7.3 ± 1.3 km s⁻¹ in the 3 mm data for HCOOCH3, and Δv = 4.5 ± 0.5 km s⁻¹ for the (CH2OH)2 lines in the 1 mm data. In the analysis below, a fixed line width of 6.0 km s⁻¹ was chosen as the input parameter in the synthetic spectra for all species.

HCOOCH3
In the rotational diagram analysis for W51/e2, a source size of 2.4″ (Zhang et al. 1998) is applied, resulting in a rotational temperature of Trot = 120 K and a column density of 1.1 × 10¹⁸ cm⁻². We estimate a 40 K uncertainty on the rotational temperature, which contributes a ∼20% error to the column density. This, combined with an observational uncertainty of ∼30%, returns an estimated overall error of 30-40% for the column density. For G34.3+0.2 a source size of 7.6″ (Remijan et al. 2003) is applied, resulting in Ntot = 5.8 × 10¹⁶ cm⁻² and Trot = 140 K for HCOOCH3. As in the case of W51/e2, the estimated uncertainty is 30-40% for the column density and 40 K for the temperature. Results from both sources are given in Table 3, and we have checked these results by using the parameters derived from the rotational diagram.

(Table 3 notes: the source sizes, rotational temperatures, line widths and column densities used as input parameters when creating synthetic spectra. As described in the text, the estimated uncertainties are ±40 K for Trot and ±30-40% for the column density.)

(CH2OH)2
Numerous (CH2OH)2 lines are blended with other species or have line intensities below 3σ. A total of two and five reasonably well-separated lines above the 3σ limit toward G34.3+0.2 and W51/e2, respectively, were chosen for this analysis (see Tables 1 and 2 for the spectroscopic data and Figures 1 and 2 for the zoom-ins of the spectra around the lines). With only 2-5 unblended lines, the assignment of (CH2OH)2 cannot be considered a firm detection. However, even if our tentative detection of (CH2OH)2 is not confirmed, this measurement represents a useful upper limit for the column density of (CH2OH)2, which can still provide important information when compared to the HCOOCH3 detection. When only a few data points are available, a statistically reliable result for the rotational temperature cannot be obtained from the traditional use of the rotational diagram method. Thus, we assume that (CH2OH)2 and HCOOCH3 emit at the same Tex, which in LTE is equal to Trot. The uncertainty of the column density is, as for HCOOCH3, estimated at 30-40%. Using θsource = 7.6″ and Trot = 140 K, a column density of 1.9 × 10¹⁵ cm⁻² is derived for (CH2OH)2 toward G34.3+0.2. For W51/e2, a source size of 2.4″ and Trot = 120 K return Ntot = 3.1 × 10¹⁶ cm⁻² for (CH2OH)2. These results, listed in Table 3, are used as input parameters for synthetic spectra in order to perform a sanity check. We checked each line in the synthetic spectra against the observed spectra for each source, and they all provide a reasonable match, at least within the estimated uncertainty. Figures 1 and 2 show the comparison of the synthetic spectra against the observed spectra.
It is evident from Figure 1 that the synthetic spectrum in the 1 mm data toward W51/e2 slightly overproduces the observed spectrum, while it reproduces the observed line in the 3 mm data. As this line has a lower Eup, this could indicate that a lower excitation temperature would give a better fit. Indeed, a fixed Trot = 70 K in the rotational diagram returns the same column density, Ntot = 3.1 × 10¹⁶ cm⁻².

CH2OHCHO
Several lines in the data might be assigned to CH2OHCHO. However, they are either blended with the emission of other molecules or too faint (i.e. below the 3σ limit), thus we cannot claim a detection. Nevertheless, we are able to obtain an upper limit for the column density. Following the assumptions stated in Section 3.1, synthetic spectra were generated using θsource, Trot and Δv from Table 3, allowing Ntot to vary. More specifically, the upper limit on the column density is determined by increasing its value until the synthetic spectra would overproduce the observed intensities at the locations of the CH2OHCHO lines in the observed spectra. The upper limit on the column density is ≤ 3 × 10¹⁴ cm⁻² toward G34.3+0.2 and ≤ 2 × 10¹⁵ cm⁻² toward W51/e2. The uncertainty of the column density is, as for HCOOCH3, estimated at 30-40%.

Relative abundances
Following the three assumptions listed in Section 3.1, it is possible to calculate the relative abundances of the species at the rotational temperature and source size found for each source. The following abundance ratios have been computed: HCOOCH3/(CH2OH)2, HCOOCH3/CH2OHCHO and (CH2OH)2/CH2OHCHO. As CH2OHCHO is not detected, it is only possible to set an upper limit on its column density; the HCOOCH3/CH2OHCHO and (CH2OH)2/CH2OHCHO ratios are therefore lower limits. All three relative abundance ratios for W51/e2 and G34.3+0.2 are listed in Table 4, along with previous measurements toward high-mass star-forming regions, a hot core, an intermediate-mass protostar, low-mass protostars, molecular clouds toward the Galactic Centre, and comets.
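The quoted ratios follow directly from the column densities derived above; a minimal sketch of the computation, using the values given in the text (the CH2OHCHO entries are upper limits, so the ratios against them are lower limits):

```python
# Column densities (cm^-2) derived in the text; CH2OHCHO values are upper
# limits, hence the corresponding abundance ratios are lower limits.
n = {
    "W51/e2":    {"HCOOCH3": 1.1e18, "(CH2OH)2": 3.1e16, "CH2OHCHO": 2e15},
    "G34.3+0.2": {"HCOOCH3": 5.8e16, "(CH2OH)2": 1.9e15, "CH2OHCHO": 3e14},
}
for src, c in n.items():
    print(src,
          f"HCOOCH3/(CH2OH)2 = {c['HCOOCH3'] / c['(CH2OH)2']:.0f},",
          f"HCOOCH3/CH2OHCHO >= {c['HCOOCH3'] / c['CH2OHCHO']:.0f},",
          f"(CH2OH)2/CH2OHCHO >= {c['(CH2OH)2'] / c['CH2OHCHO']:.0f}")
# -> W51/e2: 35, >=550, >=16; G34.3+0.2: 31, >=193, >=6, as quoted in the text.
```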
When investigating the conditions leading to differences in observed abundance ratios, it is important to take both the formation and the destruction processes of the molecules into consideration. Garrod et al. (2008) combine a gas-grain chemical network with a physical model to test whether the chemistry can reproduce the observed abundances of previously detected organic molecules under physical conditions characteristic of star-forming regions. The physical model used in Garrod et al. (2008) is based on Viti et al. (2004) and consists of an isothermal collapse followed by a warm-up phase with temperatures from 10 K to 200 K, assuming absolute timescales for the warm-up phase of 1×10⁶, 2×10⁵ and 5×10⁴ yr, representing low-, intermediate- and high-mass star formation respectively. However, Aikawa et al. (2008) argue that the relation should be reversed, as the warm-up timescale should be relative and should depend on the ratio of the size of the warm region to the infall speed instead of the overall speed of star formation. Either way, Garrod et al. (2008) and Aikawa et al. (2008) agree that the timescales, whether absolute or relative, spent in the different temperature ranges are important for the chemistry.

For all the sources in Table 4, HCOOCH3 is more abundant than CH2OHCHO, from a factor of > 550 in W51/e2 to a factor of > 2 in comet Hale-Bopp. According to Garrod et al. (2008), HCOOCH3 and CH2OHCHO have similar formation pathways, which are based on the addition of HCO and CH3O or CH2OH at 30-40 K. Garrod et al. (2008) find that the production rates of CH3O and CH2OH are the same, which leads to similar abundances for HCOOCH3 and CH2OHCHO. Intuitively this makes sense, as the two molecules are isomers, but it contradicts observations, as HCOOCH3 is observed to be much more abundant than CH2OHCHO in all the sources reported so far. As suggested by Garrod et al. (2008), this discrepancy could be due to differences in the CH3O/CH2OH branching ratio. According to Öberg et al. (2009), HCOOCH3 forms at a lower temperature than CH2OHCHO, which can also have an impact on the resulting ratio of the two species. Another explanation for the large HCOOCH3/CH2OHCHO ratio lies in the assumptions regarding thermal evaporation for these two species. Garrod et al. (2008) suggest that CH2OHCHO remains on the grains until it co-desorbs with water (at ∼110 K), while HCOOCH3 evaporates at 70-80 K. This leaves CH2OHCHO to be destroyed by OH radicals on the grains at higher temperatures, prior to evaporation. As Öberg et al. (2009) conclude, the HCOOCH3/CH2OHCHO ratio does not depend greatly on the initial ice composition, so the observed variations will most likely be linked to the different temperature conditions of the different sources.

In Table 4 the sources are listed in order of decreasing HCOOCH3/CH2OHCHO ratio, which in turn also roughly corresponds to a descending order of luminosity, also listed in the table. The top plot in Figure 3 shows a schematic bar-plot of HCOOCH3/CH2OHCHO against Lbol. On the x-axis the sources in Table 4 have been plotted in descending order from left to right, but not to scale. As illustrated by Figure 3 (top), a rough correlation between HCOOCH3/CH2OHCHO and Lbol exists: more luminous sources show a high HCOOCH3/CH2OHCHO ratio, while low-luminosity sources show a low HCOOCH3/CH2OHCHO ratio, with intermediate values in between. The apparent correlation between HCOOCH3/CH2OHCHO and source luminosity supports the hypothesis that the HCOOCH3/CH2OHCHO ratio depends mainly on the temperature, which in turn depends on the luminosity of the source. Even if the chemistry and the temperature profiles in all the sources were quite similar, the timescales spent in the different temperature ranges (low, intermediate and high temperature) are not. A higher HCOOCH3/CH2OHCHO ratio in more luminous sources could simply be a consequence of i) more luminous sources having experienced a longer timescale at temperatures which are either more favorable to the formation of HCOOCH3 and/or to the destruction of CH2OHCHO, or ii) less luminous sources experiencing a shorter timescale at high temperature, which would conserve a greater fraction of CH2OHCHO than in their more luminous counterparts. One should of course keep in mind that the sources discussed here have temperature profiles which cover a range of temperatures falling off with radius, and that hot core regions are permeated by shocks which also produce a range of temperatures. For the Galactic Centre molecular clouds from the study by Requena-Torres et al. (2008) no luminosities are listed: these are warm, low-density clouds with no sign of star formation.
However, judging by the HCOOCH3/CH2OHCHO and HCOOCH3/(CH2OH)2 ratios, they appear to be closer to the low-mass protostars than to the hot cores and high-mass star-forming regions. In the bottom plot in Figure 3, the HCOOCH3/(CH2OH)2 abundance ratio also shows a decrease with Lbol similar to that of HCOOCH3/CH2OHCHO. If the same temperature-timescale argument is applied to this correlation, one would expect an opposite trend, as (CH2OH)2 is formed at high temperatures. A possible explanation for this seeming contradiction is that less luminous sources might experience high temperatures over timescales that are just long enough for (CH2OH)2 to form, but not long enough for (CH2OH)2 to be destroyed again.

Fig. 3: Schematic bar-plot of HCOOCH3/CH2OHCHO (top) and HCOOCH3/(CH2OH)2 (bottom) against Lbol. The sources from Table 4 have been plotted in descending order of luminosity from left to right on the x-axis, but not to scale, and the sources have been grouped into high-mass sources (blue), an intermediate-mass protostar (red) and low-mass protostars (green). The white bars for four of the high-mass sources are upper/lower limits, as indicated by the direction of the arrow. The luminosity in the top plot spans from Sgr B2(N) with Lbol = 10⁷ L☉ to IRAS NGC 1333 4A with Lbol = 7.7 L☉, while it spans from W51/e2 with Lbol = 4.7 × 10⁶ L☉ to IRAS NGC 1333 4A with Lbol = 20 L☉ in the bottom plot.

Another possible explanation is that the initial ice composition affects the abundance ratio. While the HCOOCH3/CH2OHCHO ratio is independent of the initial ice composition, the formation of (CH2OH)2 is strongly correlated with the CH3OH:CO composition of the ice (Öberg et al. 2009). Öberg et al. (2009) show that pure CH3OH ice strongly enhances the (CH2OH)2 abundance as compared to ice mixes containing CO. This contradicts the predictions by Garrod et al. (2008), where the (CH2OH)2 abundance actually drops by 1-2 orders of magnitude when the initial CH3OH ice composition in their model is reduced by a factor of 10. However, to date there is no observational evidence that the ice contents of the grains vary in a statistically significant way from high-mass protostars to low-mass protostars (Öberg et al. 2011). Until recently, a high (CH2OH)2/CH2OHCHO ratio was speculated to be a specific property of comets (Biver et al. 2014). But as Coutens et al. (2015) have reported a value of ∼5 for a low-mass protostar, and we, together with Brouillet et al. (2015), report lower limits of > 6-16 for high-mass sources, a (CH2OH)2/CH2OHCHO ratio larger than 3 can be expected to be observed in other sources as well.

Conclusions
In summary, we have tentatively detected (CH2OH)2 in G34.3+0.2 for the first time, and our tentative detection in W51/e2 confirms the previous marginal detection by Kalenskii & Johansson (2010). In addition, we derive upper limits for the column density of CH2OHCHO emission in both sources. From these data we determine the relative abundances of (CH2OH)2, HCOOCH3 and CH2OHCHO. The relative abundances of these species are compared to measurements from the literature covering a wide range of source environments and luminosities. The data show what appears to be a correlation between source luminosity and HCOOCH3/(CH2OH)2 as well as HCOOCH3/CH2OHCHO. This apparent correlation is proposed to be a consequence of the relative timescales each source spends in different temperature ranges during its evolution.
Using the upper limit for the column density of CH2OHCHO gives lower limits for (CH2OH)2/CH2OHCHO of > 16 and > 6 for W51/e2 and G34.3+0.2, respectively. These results, together with a lower limit of > 12 for Orion-KL (Brouillet et al. 2015) and (CH2OH)2/CH2OHCHO = 5 for IRAS NGC 1333 2A (Coutens et al. 2015), show that one can expect to find high (CH2OH)2/CH2OHCHO abundance ratios in multiple types of environments. Additional systematic surveys of these and other relevant molecules in additional sources, as well as further modeling and laboratory work, are needed in order to fully constrain the formation pathways of complex molecules. In particular, the Atacama Large Millimeter/submillimeter Array (ALMA) shows great potential for revealing the formation processes with its high sensitivity and resolution, making it possible to map the relative spatial distributions of these species.
2015-07-13T16:01:48.000Z
2015-07-13T00:00:00.000
{ "year": 2015, "sha1": "d374b35329511c5f13dde9ca907f7eb9330c2267", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2015/10/aa26220-15.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "d374b35329511c5f13dde9ca907f7eb9330c2267", "s2fieldsofstudy": [ "Environmental Science", "Physics", "Chemistry" ], "extfieldsofstudy": [ "Physics" ] }
54502936
pes2o/s2orc
v3-fos-license
Social inequalities in lawsuits for drugs The aim of this study was to characterize the lawsuits requesting drugs considering the economic profile of their petitioners. All lawsuits (1,378) accepted against Goiânia, GO from 2003 to 2007 were analyzed. Petitioners' demographic characteristics, reported diseases, requested drugs, origin of healthcare service, and lawsuit agent were described. Complainants' addresses were georeferenced and distributed into 4 regional groups classified in accordance with the population's average income. Dwellers of wealthier regions filed court actions requesting drugs more frequently, with an average rate of 1.7 lawsuits/1,000 inhabitants versus 0.55/1,000 in the poorest region. Lawsuit costs were 4-fold higher in wealthier regions compared with the poorest region. Chronic diseases were involved in most lawsuits, whereas acute and low-complexity diseases predominated among complainants living in poorer regions. Thus, social differences were reflected in the granting of health rights.

INTRODUCTION
Access to health care services in Brazil is free and universal and grants full assistance to its users, including pharmaceutical assistance. The Unified Health System (UHS, or Sistema Único de Saúde - SUS in Portuguese) is organized into a hierarchical and regionalized network which provides assistance through its own health units or through affiliated private institutions (Brazil, 1988, 1990). However, in the last few years an increasing number of citizens have brought court actions to gain access to drugs prescribed for their treatments, and this litigation has become a management and financial problem for the national and local health systems (Messeder, Osorio-de-Castro, Luiza, 2005). This phenomenon has been coined health "judicialization". Some authors hold that lawsuits have become instruments of inequality in access and of irrationality in the use of public resources when they order the provision of drugs which are not standard under the public health system, or when they contravene SUS and Pharmaceutical Assistance regulations (Vieira, Zucchi, 2007; Silva, Terrazas, 2009). On the other hand, others recognize legitimacy in the control of public policy by the Judicial Power insofar as this broadens the democratic debate to include the adoption of counter-majoritarian positions (Appio, 2007; Freire, 2003). The conflicting positions in the literature reveal a lack of consensus on the issue.

Lawsuits have requested a myriad of drug types, most of which were included in the official lists of drugs for public distribution; others had standard therapeutic alternatives, while a small proportion represented assistance gaps or new technologies that had not yet been incorporated by the SUS (Messeder, Osorio-de-Castro, Luiza, 2005; Vieira, Zucchi, 2007; Pereira et al., 2003; Romero, 2008; Bonfim, 2008; Leite et al., 2009; Chieff, Barata, 2009; Borges, Ugá, 2010). Whether opposing the policies of public health or otherwise, the judicial demand for drugs may reflect the conditions of access to health services by citizens.

The pattern of access to health services is strongly influenced by people's social conditions and by the availability of access to complementary private health care (Travassos, Oliveira, Viacava, 2006; Silva et al., 2000). Delay in the provision of services, excessive referral of patients to other services, shortage of doctors and deficiencies in the facilities are some aspects reported by users as shortcomings in the use of the public health system (Schwartz et al., 2010).
Therefore, geographical and social differences are important determinants of health and of access to health services. Analysis, for example, of the utilization rate of health services among different regions and cities, and also on an intra-city basis, reveals the geographical dimension of the differences in local financing capability, individual access, and the size and complexity of the service networks available (Travassos, 1997).

In the case of drugs, access may be free, but citizens have to join the SUS and accept its whole set of technical-organizational regulations, which include respecting its hierarchy of actions, regionalization and standard procedures. Those citizens not accepting these conditions may opt for the supplementary private system, whose expenses are covered by the users through insurance or health plans. In these cases, the assistance offered is stipulated under the contractual terms and conditions.

The judicial route, as previously discussed, has become a means of gaining access to drugs, but it is unclear whether litigation is an egalitarian solution, or the extent to which social and geographical differences influence its characteristics. Therefore, the objective of this study was to characterize the demand for drugs through the judicial route in regions with different incomes.

METHODOLOGY
A cross-sectional, descriptive study was carried out at the School of Pharmacy - Federal University of Goiás, previously approved by the Ethics Committee of the institution. The study involved the analysis of court actions demanding drugs that were accepted and granted in the city of Goiânia-GO, Brazil.

The study comprised all court actions demanding drugs brought against the city of Goiânia-GO that were accepted and received by the Unified Health System manager from January 2005 to December 2007. Lawsuits which requested at least one drug and resulted in legal proceedings at the Municipal Nonessential Medication Pharmacy, the unit responsible for purchasing, storing and dispensing drugs, were included in the study. Lawsuits that did not request drugs or that lacked a full address for geoprocessing were excluded.

Data were collected using a standard form gathering information on the subject of the lawsuits, the petitioners and the drugs involved, in addition to the health system that had issued the prescription. The variables of interest were: (a) court actions: number of lawsuits, year granted, agent guiding the action, type of court action; (b) petitioner: petitioner number, main and associated diseases reported, gender, age, place of residence, income; (c) origin of prescriptions; and (d) drugs: cost, availability of the pharmacological therapy, and therapeutic alternatives.

Due to the lack of information concerning the cost of each drug provided by the Municipal Health Secretariat of Goiânia, it was necessary to assign prices to them in accordance with the following strategy: for each pharmaceutical specialty requested, the reference drug was adopted according to the definitions in Law No. 9787/99 (Brazil, 1999), and the value assigned to each provided item was drawn from the consumer price list consulted in the Brasíndice Pharmaceutical Guide, editions from 2003 to 2007, considering a 17% taxation rate plus a 30% discount on the average profit of retail sales. For drugs with floating prices and/or imported drugs, the prevailing market prices were used, and the deflation index was applied for the period in question to arrive at the hypothetical prices in the year of delivery.
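One possible reading of this pricing rule can be sketched numerically. The interpretation below (taking the Brasíndice consumer price from the 17% tax column, applying the 30% discount to it, and deflating to the year of delivery) is an assumption for illustration only, as are all numbers shown.

```python
# Hypothetical sketch of the price-assignment rule described above. Assumed
# interpretation: the Brasindice consumer price is read from the 17% tax
# column, a 30% discount is applied, and a cumulative deflation index maps
# the reference-year price onto the year of delivery.

def assigned_price(brasindice_price_17pct_tax, discount=0.30, deflator=1.0):
    """Price assigned to a dispensed item (all inputs are illustrative)."""
    return brasindice_price_17pct_tax * (1.0 - discount) / deflator

# e.g. a listed consumer price of R$ 120.00 with 5% cumulative deflation
print(f"R$ {assigned_price(120.00, deflator=1.05):.2f}")
```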
Pharmacological therapy availability was analyzed by comparing the requested drugs, identified by their Brazilian Common Denomination (BCD), against the drug lists of the Municipal List of Essential Drugs (Relação Municipal de Medicamentos Essenciais - REMUME in Portuguese) and of the High Cost and Exceptional Circumstance Dispensing Drugs Program, now called the Specialized Component of Pharmaceutical Assistance (SCPA).

For the analysis of therapeutic alternative availability in the public health service, the requested drugs were compared with the official lists that make up the three components defined in the Pharmaceutical Assistance Policy: the basic component, the strategic component and the exceptionally dispensed drugs component. Those drugs not contained in any of the lists were classified based on the Anatomical Therapeutic Chemical (ATC) Classification System, proposed by the World Health Organization (WHO) Collaborating Centre for Drug Statistics Methodology, in Oslo, Norway, and the International Working Group for Drug Statistics, located in Geneva, Switzerland, which defines the directives for applying this methodology (World Health Organization, 2010). Drugs considered to have a therapeutic alternative were those requested through court actions that belonged to the same therapeutic subgroup (3rd level in the ATC classification) as others present in the lists of the public health system; a code sketch of this matching rule is given at the end of this section.

In order to characterize the drugs demanded through the judicial route in different city regions and to determine the public coverage of pharmacological therapy, the court actions were processed and the related information present in the collection forms was consolidated using Microsoft Office Excel 2007. At this stage, 129 lawsuits were excluded for having incomplete data, giving a total of 1,249 lawsuits analyzed.

Subsequently, the lawsuits were described and distributed by geoprocessing among the 63 Territorial Basic Units (TBUs) of Goiânia. These TBUs each correspond to a neighborhood or a set of neighborhoods, delimited by physical barriers, such as the transport system, rivers and minor watercourses, which split up parts of the urban space that have a significant level of homogeneity. The population data and the average income of the breadwinners (in Reais) of each TBU were gathered from the 2000 census data (Brasil, 2003). The average income was then converted into minimum wages at a reference amount of R$ 151.00, as defined in Law No. 9971 of 05/18/2000.

After geoprocessing, the court actions were grouped, in accordance with the estimated average income of the family breadwinner for the respective TBU, into four groups as follows:
• Region type I: court actions filed in regions where the average income is from 0 to 3 (inclusive) minimum wages;
• Region type II: court actions filed in regions where the average income is higher than 3 and up to 6 (inclusive) minimum wages;
• Region type III: court actions filed in regions where the average income is higher than 6 and up to 9 (inclusive) minimum wages;
• Region type IV: court actions filed in regions where the average income is greater than 9 minimum wages.

The following sources were used: (a) for court action data, the suits filed at the Nonessential Drugs Pharmacy of the Municipal Health Secretariat; (b) for census data, the IBGE (Brasil, 2003); and (c) for drug costs, the Brasíndice Pharmaceutical Guide, 2005 to 2007.
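A minimal sketch of the therapeutic-subgroup matching rule referenced above, using the fact that the 3rd ATC level corresponds to the first four characters of an ATC code; the codes in the public list below are illustrative placeholders, not the actual REMUME/SCPA contents.

```python
# Two drugs share a therapeutic subgroup when the first four characters of
# their ATC codes (the 3rd level) coincide, e.g. "A10B" for blood-glucose-
# lowering drugs. The codes below are illustrative examples only.

PUBLIC_LIST_ATC = {"A10BA02", "N02BE01", "J01CA04"}   # hypothetical list codes

def has_therapeutic_alternative(requested_atc: str) -> bool:
    """True if any listed drug falls in the same 3rd-level ATC subgroup."""
    subgroup = requested_atc[:4]
    return any(code.startswith(subgroup) for code in PUBLIC_LIST_ATC)

print(has_therapeutic_alternative("A10BB09"))   # same subgroup as A10BA02 -> True
print(has_therapeutic_alternative("L01XE07"))   # no 3rd-level match -> False
```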
The financial importance of each drug in the court action costs was analyzed for the two income extremes (Regions I and IV) using Pareto analysis, also known as the ABC curve, which allows the drugs to be ranked from most to least important, making it easier to visualize and identify those which need most attention (Rosa, Gomes, 2001). The drugs were thus classified into three groups (A, B and C) according to their relative weight in the court action costs, as sketched below. After calculating the items' relative costs, they were arranged in decreasing order, and groups A, B and C were then defined. Group A, of highest financial importance, corresponded to those drugs that, when added together, represented 75% of costs. Drugs in group C, although corresponding to the largest number of items, represented only a low proportion of costs, at about 10%. Group B represented the intermediate cases between groups A and C. For the curve calculation, monetary values from 2007 were used as the base.

After characterization and distribution of the demands for drugs through lawsuits in the different regions of the city (TBUs), these were categorized into four groups according to the population's median income and examined to determine whether the lawsuit characteristics varied according to income. The analyzed variables were then compared across the income groups. It should be noted that for these analyses the sample units corresponded to the TBUs, i.e., when an average is reported, this number represents the average of all TBUs that met the characteristics of the group. The Kruskal-Wallis test was applied, since the data did not generally follow a normal distribution. The following definitions were used to identify the groups: 0-3 MW, monthly income of 0 to 3 minimum wages; 3-6 MW, average income of 3 to 6 minimum wages; 6-9 MW, average income of 6 to 9 minimum wages; and Over 9 MW, income higher than 9 minimum wages. A level of significance of 0.05 was adopted for the tests.
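As referenced above, a minimal sketch of the ABC classification follows; the drug names, costs and the exact handling of items that straddle a cutoff are illustrative assumptions.

```python
# Sketch of the ABC (Pareto) classification described above: drugs are sorted
# by cost, and the cumulative share of total cost assigns each to group A
# (first ~75% of costs), B (intermediate) or C (last ~10%). Names and costs
# below are illustrative, not data from the study.

def abc_classify(costs, a_cut=0.75, c_cut=0.90):
    total = sum(costs.values())
    cumulative, groups = 0.0, {}
    for drug, cost in sorted(costs.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += cost / total
        groups[drug] = ("A" if cumulative <= a_cut
                        else "B" if cumulative <= c_cut else "C")
    return groups

print(abc_classify({"insulin glargine": 500.0, "imatinib": 300.0,
                    "omeprazole": 120.0, "amoxicillin": 50.0,
                    "insulin lispro": 30.0}))
```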
RESULTS
The survey carried out at the Nonessential Drugs Pharmacy of Goiânia found 1,344 petitioning citizens who filed a total of 1,378 lawsuits demanding different drugs from January 2003 to December 2007.

The number of lawsuits filed per year rose from 2003 (96) to 2006 (481); in 29 cases the date granted was not found. In 2007, the number of new writs fell by 46.7%, totaling 225, but the reasons for this decrease have not yet been investigated (Figure 1). Even though the number of new suits fell in 2007, a commensurate decrease in the number of treatments in the same year was not observed, owing to the cumulative character of court actions.

Older adults (over 60 years old) and children (0 to 10 years old) represented 47.1% of the total petitioners. The most frequent health problems in court actions among children were gastroesophageal reflux disease and lactose intolerance. The most frequent cause in the 11-15 years age group, which accounted for 22.1% of the lawsuits, was Type 1 Diabetes.

Insulin was the main request of petitioners who were Type 1 Diabetes Mellitus sufferers, corresponding to R$ 1,242,009.39 (23.24%) of the overall costs of judicially demanded drugs (R$ 5,344,274.50). Insulin analogues represented 90.91% of the insulin units provided: Glargine corresponded to 44.20% of the requests, Lispro to 25.82%, Aspart to 27.76% and Detemir to 2.22%.

The vast majority of lawsuits were writs of mandamus (94.4% of cases), mainly brought by State Public Prosecutors, while the remainder were in the hands of private law offices. The prescriptions which led to lawsuits were issued under the private health system in 56.4% of cases and under the public system in 18.1%, whereas in 25.5% of cases it was not possible to identify the prescription's origin.

Petitioners requested 482 different drugs, of which 93 (19.3%) were available in the REMUME, 48 (10%) in the High Cost and Specialized Component of Pharmaceutical Assistance (SCPA), and 8 (1.7%) on both lists. Thus, 333 (69.3%) of the requested drugs were not present in any official list of drugs for public distribution; however, 139 (41.7%) of this group had at least one therapeutic alternative available (Figure 2).

Of the 1,378 lawsuits included in this analysis, 129 were excluded from the geoprocessing because the address was incomplete. The characteristics of judicial demand varied among the intra-city regions where breadwinner incomes differed, as depicted in Table I. The number of lawsuits increased as income rose, whereby the regions with the highest income (Type IV Regions) had three times more lawsuits than the Type I Regions representing the lowest income.

Lawsuit costs also presented a similar profile. The average cost of suits brought by petitioners living in the regions with the highest income (Type IV Regions) was higher than that of suits from the regions with the lowest income (Type I Regions), reaching costs four times higher.

There were no significant differences among the regions regarding the frequency at which nonstandard drugs were requested or the proportion of those which represented assistance gaps. Although the frequency at which petitioners requested nonstandard drugs was similar across the regions, it was evident that individuals with higher incomes requested proportionally more nonstandard drugs than those with lower incomes.

The ABC curve of drugs demanded in the regions with the highest and the lowest incomes (Regions IV and I, respectively) offers some information that helps elucidate the differences between these regions. Group A of the ABC curve is depicted in Table II. The results showed that bacterial infections were the main therapeutic need indicated as the reason for requesting drugs in Type I Regions, while in Type IV Regions Type 1 Diabetes Mellitus was the leading reason. Except for cancer, which represented high costs in both regions, the therapeutic indications in Type I Regions comprised a mix of illnesses of minor complexity and consequently represented lower costs in comparison with those in Type IV Regions.
DISCUSSION

The study results showed that the average cost of lawsuits varied among the regional groups. The average cost of suits increased with higher average income, being four-fold greater in the highest income group compared with the lowest. Some findings may help explain this behavior. One important fact is that the petitioners living in the wealthiest regions most frequently requested drugs that represent new technologies for the treatment of chronic diseases. By contrast, residents of regions with lower incomes requested treatments for acute diseases, such as bacterial infections, which, although requiring high-cost drugs, require only short-term treatment. Some authors have called attention to the inequity caused by so-called "Health Judicialization" (Vieira, Zucchi, 2007; Terrazas, 2008). This study served to verify that individuals living in regions with different social characteristics in the same city had different needs concerning the guarantee of access to pharmacological treatment as part of their right to health care. These differences between socially disparate regions may indicate inequality of access to drugs through this route. However, there are no satisfactory data to affirm that these differences reflect inequity, considering inequity as unfair, unnecessary and avoidable differences (Whitehead, 1992). This ecological approach may be useful in planning actions and health policies, since it allows tendencies and profiles in urban spaces to be highlighted; however, it should be used with caution, and drawing conclusions at an individual level from these findings should be avoided. Both standard and non-standard drugs were analyzed. SUS prescribers were expected to recommend drugs freely available within the public network, since they know the lists of standard drugs. However, it was observed that nonstandard drugs were also present in prescriptions for SUS patients. Some drugs, such as those for glaucoma, alcoholism and gastroesophageal reflux treatment, among others, represented assistance gaps and should have been included in at least one list of public distribution, thereby enlarging and increasing pharmaceutical assistance coverage. However, the majority of requested drugs were standard or had free therapeutic alternatives available under the SUS, a pattern other authors had previously observed (Messeder, Osorio-de-Castro, Luiza, 2005; Vieira, Zucchi, 2007; Borges, 2005). The request for drugs for conditions whose treatment is already provided free under the SUS may be explained by three reasons. The first is that the petitioners from the private health system, the majority of cases in this study, may have demanded drugs through the justice system because they were unaware of the availability of the drugs in the public system; these users and their respective attending physicians or medical practice are out of step with the clinical protocols and the programmatic actions defined by the SUS. The second reason may be that, even though these drugs are present in official lists of drugs for free distribution, the petitioners may have faced difficulties accessing them. This refers to access in its broader sense, i.e., it is not only a question of the medication being available; it must also be accessible at the opportune moment, when the patient (user) is made to believe and feel welcome (Penchansky, Thomas, 1981).
The third reason involves new technologies that have not been incorporated into the public health system as an alternative to those already existing. Vieira and Zucchi (2007) identified qualitative deficiencies in treatments offered by the SUS for some diseases, indicating a need for review of the existing policies, with possible inclusion of new technologies. However, this situation must be evaluated cautiously, since the existence of a new technology does not necessarily imply that it is better than those currently available (Rascati, 2010). The SUS must be critical and responsible when assimilating technologies. Recently, the Pharmacy and Therapeutics Committee of the city reviewed its REMUME (Goiânia, 2010) and incorporated a variety of drugs which were the subject of frequent judicial requests, among them insulin analogues and eye drops for glaucoma, indicating the influence that judicial demands have on the policy of access to drugs. The incorporation of these new drugs into the REMUME may indicate a concern by public powers to meet the demands of SUS users. However, we observed variations in the lawsuit profiles in terms of needs among the different social segments. Consequently, medication access policies should be sensitive to these disparities. One of these variations is the request for insulin analogues, which was a major request among the wealthier segments of the population, who in turn are not frequent SUS users. Akin to diabetes, the main diseases reported in the suits were of a chronic nature and included metabolic, cardiovascular, central nervous system and gastric disorders, as reported in previous studies (Vieira, Zucchi, 2007; Silva, Terrazas, 2009; Appio, 2007; Freire, 2003), explaining the cumulative nature of suits involving medication provision and the consequent elevated expenditure with this component. The allocation of a significant part of the resources invested in citizens' health to an assistance-only component may jeopardize the sustainability of the health care pattern developed. This is because, according to Ferraz (2008), when resources are lower than the expenditure necessary to cater for needs, two different outcomes can be expected as a consequence of this opportunity cost: (a) assistance for all will be rendered unviable, i.e., it will be necessary to reduce the number of beneficiaries; or (b) adjustment and compatibility between resources and expenditure will result in reduced quality of healthcare assistance. Given that since 1988 the exclusion of users has been an unconstitutional act, the more probable and plausible alternative is a reduction in the quality of the assistance offered. The imbalance between the constitutional proposal of universalization, the integrality of assistance and its financing causes instability in the national health care system (Dain, 2007). The costing of drugs requested through lawsuits is a challenge to the SUS, and its sustainability is at risk while this growth outpaces the resources available for health care. A notable finding in this study was the participation of the Public Ministry (State Public Prosecutors, MP) as the agent of the court actions in the majority of cases, unlike previous studies. Vieira and Zucchi (2007) and Marques and Dallari (2007) in São Paulo, and Pereira et al. (2004) in Santa Catarina, found private attorneys to be the main agents. On the other hand, according to the studies by Messeder et al.
(2005) and Romero (2008), public and/or non-profit agents, such as Public Defenders, Model Offices and the MP (State Public Prosecutors), were the predominant agents. However, they all found little or no MP participation in cases. The high participation of the Public Ministry found in this study may call into question its legitimacy to file lawsuits to defend the interests of single citizens, deviating from its constitutional remit. However, the Brazilian Supreme Federal Court (STF) does not perceive court actions requesting means of health recuperation in this light. According to the courts, health is an inalienable right, and in such cases Article 127 of the Federal Constitution holds the Public Ministry responsible for acting in the defense of these individuals (Brasil, 2009). Moreover, in principle it is the role of the Public Defender Offices to defend poor SUS users who cannot afford to hire a lawyer to represent them in court, but in places where this office does not exist, the Public Ministry becomes an alternative for these citizens. This is the case of the city studied, and it helps explain the extensive participation of the Public Ministry in the health care lawsuits. Despite the Public Ministry being a legitimate representative of needy individuals, the study observed that the majority of lawsuits originated from urban regions associated with the highest incomes. The number of actions in the two segments with the highest income was 1.5-fold higher than in the two segments with the lowest income. Vieira and Zucchi (2007) and Chieffi and Barata (2009) also observed a higher number of lawsuits originating from regions with better social conditions. This profile may be associated with the patients' relationship network facilitating access to the information necessary to assert their constitutional rights. This is relevant because patients are generally unaware of their rights until somebody informs/guides them (Leite, Mafra, 2010), and the physician has an important role in this relationship network, acting as the main informer (Leite, Mafra, 2010; Terrazas, 2008). Other differences were observed among regions with differing average salaries. One of these differences was expected, namely, that prescriptions originating in the private health system were more frequent in segments with higher income; i.e., it was expected that people from wealthier regions did not use the Unified Health System (SUS), opting for supplementary health services, as observed in other studies (Terrazas, 2008; Vieira, Zucchi, 2007). Looking to the public sector to overcome lack of access and integrality of care in a public/private mix is a measure people resort to in their health care quotidian, influenced by health plan coverage (Leite, Mafra, 2010). The health judicialization phenomenon has been extensively studied and described, along with its social, ethical, economic, administrative and sanitary consequences. However, this problem interferes in SUS management in the three government spheres, with the greatest impact at the municipal level, since cities have tighter budgets. Some related factors include the influence of the pharmaceutical industry, the high cost of treatment, and the unavailability of drugs under the public health care system, among others (Provin, Delduque, Amaral, 2012), although only a few of these factors have a well-established cause-effect relationship.
Nevertheless, some measures have been adopted in a bid to tackle the problem, such as taking the administrative (non-judicial) route to request drugs as an alternative to the judicial route, and the adoption of Technical Chambers. However, both these solutions come into play only after the demand has been filed, while neither proposes to deal with the root problem, i.e., to act in a preventive and efficient manner so as to make out-of-flow demands an exception. Among the list of causes related to the judicial and extrajudicial demands, several are outside the realm of SUS management; however, those originating from within the health system should be elucidated, and their relationship with the problems resulting in the demands should be determined and addressed via alternative routes. Subsequently, these problems should be analyzed in depth in order to tackle the root problem. As the problem at hand is multifactorial, there may well be no definitive solution to remedy it, but these demands should be kept at levels compatible with efficient management of public resources while upholding the universality and integrality principle.

CONCLUSION

The judicial demand profile in intra-city regions with different incomes exhibited distinct characteristics, and may represent diverse needs in terms of guaranteeing the right to health care, particularly concerning the provision of drugs. Population segments with higher incomes filed lawsuits more frequently to assure their access to drugs and also requested more expensive treatments.

FIGURE 1 - Number of new lawsuits filed against the Municipal Health Secretariat and number of current lawsuits (year not given in 29 lawsuits), 2003-2007, Goiânia - GO.

FIGURE 2 - Distribution of requested drugs in official lists for public access and existence of therapeutic alternatives. SCPA: Specialized Component of Pharmaceutical Assistance; REMUME: Municipal List of Essential Drugs.

TABLE I - Characteristics of judicial demands by intra-city region with different incomes. Origin of prescriptions by health care service. * Kruskal-Wallis test. ** Variable statistically different.
2018-12-02T23:43:26.482Z
2013-09-01T00:00:00.000
{ "year": 2013, "sha1": "175ca324709674c0ea5ccd85b02e8973c32eacb2", "oa_license": "CCBY", "oa_url": "http://www.scielo.br/pdf/bjps/v49n3/v49n3a08.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "175ca324709674c0ea5ccd85b02e8973c32eacb2", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Political Science" ] }
16520111
pes2o/s2orc
v3-fos-license
Homology stability for Unitary groups II

In this note the homology stability problem for hyperbolic unitary groups over a local ring with an infinite residue field is studied.

Introduction

In this note we continue the study of the homology stability problem for hyperbolic unitary groups, started in [7]. In [7] a general statement about homology stability for these groups was established. It was believed that, as in the general linear group case, one could obtain a better range of homology stability for hyperbolic unitary groups over an infinite field, but no proof of this exists in the literature. Our main goal is the study of this problem. Our main theorem asserts that for a local ring with an infinite residue field, the natural map $H_l(\mathrm{inc}) : H_l(G_n, \mathbb{Z}) \to H_l(G_{n+1}, \mathbb{Z})$ is surjective for $n \ge l+1$ and injective for $n \ge l+2$, where $G_n := U^{\epsilon}_{2n}(R, \Lambda)$, always with the underlying hyperbolic form. With a field $k$ as the coefficient group we get an even better result: $H_l(\mathrm{inc}) : H_l(G_n, k) \to H_l(G_{n+1}, k)$ is surjective for $n \ge l$ and injective for $n \ge l+1$. In fact the first result follows from the second one. To obtain the second result, we introduce some posets similar to those introduced and studied in [7]. In Section 2 we prove that they are highly acyclic. Applying this, we arrive at a spectral sequence, Theorem 3.5, which is the main purpose of Section 3. The main difficulty is to analyze this spectral sequence, which is done in Section 4. The stability theorem is a result of this analysis, and an application of the stability theorem is given in that section. In Section 5 we discuss the homology stability problem in the case of a finite field. I would like to thank W. van der Kallen for his useful comments, which made some of the original proofs shorter, and for his help in some of the proofs.

Here we establish some notation. By a ring $R$ we will always mean a local ring with an infinite residue field, unless mentioned otherwise. The ring $R$ has an involution (which may be the identity) and we set $R_1 := \{r \in R : \bar{r} = r\}$; this is also a local ring with an infinite residue field. For the definition of the concepts that we use, such as a bilinear form $h$, a hyperbolic unitary group and its elementary group, an isotropic element or set, the unimodular poset $\mathcal{U}(R^n)$, the isotropic unimodular poset $\mathcal{IU}(R^{2n})$, etc., we refer to [7, Sections 6 and 7]. We denote a hyperbolic unitary group $U^{\epsilon}_{2n}(R, \Lambda)$ and its elementary group by $G_n$ and $E_n$, respectively. By convention, $G_0$ is the trivial group. The embeddings $G_n \to G_{n+1}$ and $E_n \to E_{n+1}$ are given by the standard stabilization maps. Throughout, $k$ is a field with trivial $G$-action; by $k$ as the coefficient group of the homology functor we always mean a field, and in some cases, which will be mentioned, it has to be a prime field.

Isotropic unimodular posets

The main statement of this section, Theorem 2.5, is rather well known (see [9]). We give the details of the proof to make sure that everything works in our case, Theorem 2.7. For an alternative proof in the case of a field different from $\mathbb{F}_2$, see Remark 1 and Theorem 5.1.

Definition 2.1. Let $S = \{v_1, \dots, v_k\}$ and $T = \{w_1, \dots, w_{k'}\}$ be bases of two isotropic free summands of $R^{2n}$. We say that $T$ is in general position with $S$ if $k \le k'$ and the $k' \times k$ matrix $(h(w_i, v_j))$ has a left inverse.

Proposition 2.2. Let $n \ge 2$ and assume $T_i$, $1 \le i \le l$, are finitely many finite subsets of $R^{2n}$ such that each $T_i$ is a basis of a free isotropic summand of $R^{2n}$ with $k$ elements, where $k \le n-1$.
Then there is a basis $T = \{w_1, \dots, w_n\}$ of a free isotropic summand of $R^{2n}$ such that $T$ is in general position with all $T_i$, $1 \le i \le l$.

Proof. The proof of the first part is by induction on $l$. Let $T_i = \{v_{i,1}, \dots, v_{i,k}\}$. For $l = 1$, take a basis of a free isotropic direct summand of $R^{2n}$, say $\{w_1, \dots, w_k\}$, such that $h(w_j, v_{1,m}) = \delta_{j,m}$, where $\delta_{j,m}$ is the Kronecker delta, and choose $T$ to be an extension of this basis to a basis of a maximal isotropic free subspace. Assume that the claim is true for $1 \le i \le l-1$. This means that there is a basis $\{u_1, \dots, u_n\}$ of a free isotropic summand of $R^{2n}$ in general position with $T_1, \dots, T_{l-1}$. Let ... be the matrix obtained by deleting the $(j_{1,i}, \dots, j_{n-k,i})$-th rows of $B_i(t)$ and set $f$ ... . The second part of the proposition follows from the exact sequence associated with $w \mapsto (h(w, v_{1,1}), \dots, h(w, v_{1,k}))$ and the fact that projective modules over local rings are free.

Let $S$ be a non-empty set and $X \subseteq \mathcal{O}(S)$ [7, Sec. 4]. Let $C_k(X)$, $k \ge 0$, be the free $\mathbb{Z}$-module with basis consisting of the $k$-simplices ($(k+1)$-frames) of $X$, with $C_{-1}(X) = \mathbb{Z}$ and $C_k(X) = 0$ for $k \le -2$. The family $C_*(X) := \{C_k(X)\}$ yields a chain complex with the differentials $\partial_k(v_0, \dots, v_k) = \sum_{i=0}^{k} (-1)^i (v_0, \dots, \hat{v}_i, \dots, v_k)$.

Proof. The proof is similar to the proof of Lemma [7, 5.4], using the fact that $\mathrm{sr}(R) = 1$. To prove the second part of the theorem it is sufficient to find a unimodular vector $v \in R^n$ such that $\{v, v^i_0, \dots, v^i_k, w_1, \dots, w_r\}$ is linearly independent, $1 \le i \le l$. The proof is by induction on $l$. The case $l = 1$ follows from 2.3. By induction, assume that there are $u_1, u_2 \in R^n$ such that ( ... ). Let $A$ be an element of the elementary group $E_n(R) \subseteq GL_n(R)$ such that $Au_1 = u_2$, and set $A(t) = E_{r,s}(ta)$. Let $B_i$ be the matrix whose columns are the vectors $u_1, v^i_0, \dots, v^i_k, w_1, \dots, w_r$ for $1 \le i \le l-1$, let $B_l$ be the matrix whose columns are $u_2, v^l_0, \dots, v^l_k, w_1, \dots, w_r$, and let $B_i(t)$ be the matrix whose columns are $A(t)u_1, v^i_0, \dots, v^i_k, w_1, \dots, w_r$, $1 \le i \le l$. The rest of the proof is similar to the proof of Proposition 2.2.

Proof. If $n = 1$, then everything is trivial, so we assume that $n \ge 2$. Let $\sigma = \sum_{i=1}^{r} n_i v_i$ be a $k$-cycle. Thus $v_i$, $1 \le i \le r$, are isotropic $(k+1)$-frames with $k \le n-2$. By 2.2, there is an isotropic $n$-frame $w$ in general position with $v_i$, $1 \le i \le r$. Set $W = \langle w \rangle$ and let $E_\sigma$ be the set of all $(u_1, \dots, u_m, t_1, \dots, t_l) \in \mathcal{IU}(R^{2n})$ such that $m, l \ge 0$, $(u_1, \dots, u_m) \in \mathcal{U}(W)$ if $m \ge 1$, and for every $l \ge 1$ there exists an $i$ such that $(t_1, \dots, t_l) \le v_i$. The poset $E_\sigma$ satisfies the chain condition and $v_i \in E_\sigma$. It is sufficient to prove that $E_\sigma$ is $(n-2)$-acyclic, because then $\sigma \in \partial_{k+1}(E_\sigma) \subseteq \partial_{k+1}(C_{k+1}(X))$. Let ...; then $u$ is of the form $(u_1, \dots, u_m, t_1, \dots, t_l)$, $l \ge 1$.

Remark 1. (i) The concept of being in general position and the idea of the proof of 2.5 are taken from [9]. Because the details of the proof in [9] never appeared, we write them down here. (ii) In fact Theorem 2.5 is true for every field $R \ne \mathbb{Z}/2\mathbb{Z}$. Let ... . Define the map of posets $f : \dots$ ( ... , but her proof works without modification in our more general setting [2, p. 115]). On the one hand it is easy to see that ... (iii) We expect that over a ring with no finite ring as a homomorphic image and with finite unitary stable rank, the poset $\mathcal{IU}(R^{2n})$ is $(n - \mathrm{usr}(R) - 1)$-connected. For this it is sufficient to prove 2.2 over such a ring. For example, Theorem 2.5, without any change in its proof, is true over a semi-local ring with infinite residue fields.
Therefore the results of this note are also valid for these rings. (iii) Using 2.5, (iii) and the same argument as in (ii), one can prove that over a semi-local ring with infinite residue fields, $\mathcal{IV}(R^{2n})$ is $(n-2)$-acyclic. Over an infinite field this gives a much easier proof of Vogtmann's theorem mentioned in (ii). (iv) Using a theorem of Van der Kallen [13, Thm. 2.6] and arguments similar to (iii), we can generalize the Tits-Solomon theorem to a ring with stable range one (for example, any Artinian ring). Let $R$ be a ring with stable range one and consider the following poset, which we call the Tits poset: ... . Let $X = \mathcal{U}(R^n)_{\le n-1}$ and consider the poset map $g : X \to T(R^n)$, $v \mapsto \langle v \rangle$. By induction and an argument similar to (ii), using the fact that $X$ is $(n-3)$-connected, one can prove that $T(R^n)$ is $(n-3)$-connected (note that any stably free projective module of rank $\ge 1$ is free). We leave the details of the proof to the interested reader.

Theorem 2.7. Let $n, m$ be two natural numbers with $n \le m$. Then the ... .

Proof. The proof is similar to the proofs of 2.4 and 2.5.

The spectral sequence

In this section, $k$ will be a field and $S_i$ a $k$-algebra, $i \in \mathbb{N}$, $S^{\otimes n}$ ... . For simplicity we denote this collection by ... and set ... . Since ... is multilinear, it gives an $R^*$-equivariant homomorphism. In this way we obtain an $R^*$-equivariant epimorphism ... . Since the functor $H_0$ is right exact, by applying the first part of the lemma we ... . By induction and applying the functor $H_0$ to the short exact sequence ... . Any choice of a section for $\beta$ gives a homomorphism $\varphi : {}_p A \to H_2(A, \mathbb{F}_p)$, which, by the property of the algebra $\Gamma$, uniquely extends to an $\mathbb{F}_p$-algebra ... . This filtration does not depend on our choice of section $\varphi$, and the successive factors $H$ ... . Note that $P_i$ and $Q_i$ are $S_i$-modules. Then both cases follow from 3.2.

Let $\sigma_2 = (e_1, e_3) \in \mathcal{IU}(R^{2n})$. The elements of $\mathrm{Stab}_{G_n}(\sigma_2) = \{B \in G_n : B\sigma_2 = \sigma_2\}$ are of the form ... , where $a_i \in R^*$ and $A \in G_{n-2}$. Let $N_{n,2}$ and $L_{n,2}$ be the subgroups of $\mathrm{Stab}_{G_n}(\sigma_2)$ of elements of the form ... and ... , respectively. It is a matter of an easy calculation to see that the elements of the group $N'_{n,2} = [N_{n,2}, N_{n,2}]$ are of the form ... , where $r, s \in \Lambda = \{r \in R : \epsilon^{-1}\bar{r} = -r\}$ and $t \in R$. In general one can define $N_{n,p}$, $L_{n,p}$ and $N'_{n,p}$ for all $p$, $1 \le p \le n$, in a similar way. Embed $R^{*p} \times G_{n-p}$ in $\mathrm{Stab}_{G_n}(\sigma_p)$ as $\mathrm{diag}(a_1, \dots, a_p, A) \mapsto \dots$ .

Theorem 3.4. $H_i(R^{*p} \times G_{n-p}, k) \simeq H_i(\mathrm{Stab}_{G_n}(\sigma_p), k)$ for all $i \ge 0$.

Proof. It is sufficient to prove the theorem when $k$ is a prime field. Fix a natural number $p$, $1 \le p \le n$, and set $N = N_{n,p}$, $L = L_{n,p}$, $N' = N'_{n,p}$ and $T = \mathrm{Stab}_{G_n}(\sigma_p)$. The extensions $1 \to N' \to L \to L/N' \to 1$ and $1 \to N/N' \to L/N' \to L/N \to 1$ give the Lyndon-Hochschild-Serre spectral sequences ... , respectively. Since $L/N \simeq R^{*p}$ and $N/N'$ acts trivially on $N'$, $E^2_{p',q'} = H_{p'}(R^{*p}, H_{q'}(N/N') \otimes_k H_q(N'))$. It is not difficult to see that $N/N' \simeq R^h$ and $N' \simeq R^l \times \Lambda^m$ for some $h, l, m$, and the action of $R^*_1$ on $N/N'$ and $N'$ is linear-diagonal and quadratic-diagonal, respectively. Again the extension ... . Since the homology functor commutes with the direct sum functor, ... , where the action of $R^*_1$ on $R^h$, $R^l$ and $\Lambda^m$ is linear-diagonal, quadratic-diagonal and quadratic-diagonal, respectively. By Theorem 3.3, $H_s(R^*_1, M) = 0$ for $s \ge 0$ and $q > 0$ or $q' > 0$. This shows that $E^2_{p',q'} = 0$ for $p' \ge 0$ and $q > 0$ or $q' > 0$. Therefore $H_{p'}(L/N', H_q(N')) = 0$ for $p' \ge 0$ and $q > 0$. Hence $E^2_{p,q} = 0$ for $p \ge 1$ and $q > 0$. By the convergence of the spectral sequence we get ... (1),
and by a similar approach to (1), ... (2). From the embedding $R^{*p} \to L$, (1) and (2) we get the isomorphism ... , which gives the map of the spectral sequences ... . By what we proved above we have the isomorphism $E^2_{p,q} \simeq E'^2_{p,q}$. This gives an isomorphism on the abutments, and so $H_i(R^{*p} \times G_{n-p}) \simeq H_i(T)$.

Theorem 3.5. There is a first quadrant spectral sequence converging to zero with $E^1_{0,q}(n) = H_q(G_n)$ and $E^1_{p,q}(n) \simeq H_q(R^{*p} \times G_{n-p})$ for $1 \le p \le n$. In particular, for $0 \le p \le n$, the differential $d^1_{p,q}$ is induced by the maps $\alpha_{i,p}$ described below.

Proof. Let $C'_l(X_n)$ be the $k$-vector space with basis consisting of the $l$-simplices (isotropic $(l+1)$-frames) of $X_n$. Since $X_n$ is $(n-2)$-acyclic (Theorem 2.7), we get an exact sequence $0 \to H_{n-1}(X_n, k) \to C'_{n-1}(X_n) \to \dots \to C'_0(X_n) \to k \to 0$. Call this exact sequence $L_*$: $L_0 = k$, $L_i = C'_{i-1}(X_n)$ for $1 \le i \le n$, $L_{n+1} = H_{n-1}(X_n, k)$ and $L_i = 0$ for $i \ge n+2$. Let $F_* \to k$ be a resolution of $k$ by free (left) $G_n$-modules and consider the bicomplex $C_{*,*} = L_* \otimes_{G_n} F_*$. Here we convert the left action of $G_n$ on $L_*$ into a right action via $vg := g^{-1}v$. By the general theory of the spectral sequence of a bicomplex we have $E^1_{p,q}(\mathrm{I}) = H_q(C_{p,*}) = H_q(L_p \otimes_{G_n} F_*)$ and $E^1_{p,q}(\mathrm{II}) = H_q(C_{*,p}) = H_q(L_* \otimes_{G_n} F_p)$. Since $F_p$ is a free $G_n$-module, $L_* \otimes_{G_n} F_p$ is exact, and this shows that $E^1_{p,q}(\mathrm{II}) = 0$. Therefore $E^1_{p,q}(n) := E^1_{p,q}(\mathrm{I})$ converges to zero. If $p = 0$, then $E^1_{0,q}(n) = H_q(k \otimes_{G_n} F_*) = H_q(G_n)$. The group $G_n$ acts transitively on the $l$-frames of $X_n$, $1 \le l \le n$, so by the Shapiro lemma [1, Chap. III, 6.2], $L_p \otimes_{G_n} F_* \simeq k \otimes_{\mathrm{Stab}_{G_n}(\sigma_p)} F_*$ and thus $E^1_{p,q}(n) = H_q(\mathrm{Stab}_{G_n}(\sigma_p))$, $1 \le p \le n$. By 3.4, this is of the form that we mentioned. Now we look at the differential $d^1_{p,q}(n) : E^1_{p,q}(n) \to E^1_{p-1,q}(n)$. ... and $(e_{2l-1}, e_{2l})g^{-1}_{i,p} = (e_{2l-3}, e_{2l-2})$, $i+1 \le l \le p$, where $vg^{-1} := gv$ for $v \in R^{2n}$. It is easy to see that $d_i(\sigma_p) = \sigma_{p-1} g_{i,p}$, and so $\partial(\dots)$ ... . It is easy to see that $l_{g_{i,p}}$ is an $\mathrm{inn}\,g_{i,p}$-homomorphism, and $d_i \otimes \mathrm{id}_{F_*}$ induces the map $k \otimes_{\mathrm{Stab}_{G_n}(\sigma_p)} F_* \to k \otimes_{\mathrm{Stab}_{G_n}(\sigma_{p-1})} F_*$, $1 \otimes x \mapsto 1 \otimes l_{g_{i,p}}(x)$. This shows that $d_i$ induces $H_q(\mathrm{inn}\,g_{i,p}) : H_q(\mathrm{Stab}_{G_n}(\sigma_p)) \to H_q(\mathrm{Stab}_{G_n}(\sigma_{p-1}))$ and hence the map $H_q(\mathrm{inn}\,g_{i,p}) : H_q(R^{*p} \times G_{n-p}) \to H_q(R^{*(p-1)} \times G_{n-p+1})$. Set $\alpha_{i,p} = \mathrm{inn}\,g_{i,p}$. Since $G_n$ acts transitively on the generators of $C'_p(X_n)$, $E^1_{*,0}(n)$ is of the following form: ...

Remark 2. In fact $E^2_{n,0}(n) = 0$. For a proof see the proof of Theorem 4.3.

Stability theorem

To prove the homology stability result we have to study the spectral sequence that we obtained in Theorem 3.5.

Lemma 4.1. Let $n \ge 1$, $l \ge 0$ be integers such that $n-1 \ge l$. Let $H_q(\mathrm{inc}) : H_q(G_{n-2}) \to H_q(G_{n-1})$ be surjective for $0 \le q \le l-1$. Then the following conditions are equivalent: ...

Proof. For $n = 1$ everything is easy, so let $n \ge 2$. By the Künneth theorem [5, Chap. V, §10, Thm. 10] ... . Consider the following diagram, where $\beta_j$ is the shuffle product, $j = 1, 2$ [1, Chap. V, Sec. 5], $\alpha_1 = \mathrm{id} \otimes H_{l-i}(\mathrm{inc})$ is surjective and $\alpha_2 = H_l(\mathrm{inc})$. By giving an explicit description of the above maps we prove that this diagram is commutative. For this purpose we use the bar resolution of a group [1, Chap. I, Sec. 5]. If ... . See [5, Chap. VIII, §8] for more details about the shuffle product. Let $P \in G_n$ be the permutation matrix that permutes the first and second columns with the third and fourth columns, respectively, and let $\mathrm{inn}\,P : G_n \to G_n$, $A \mapsto PAP^{-1} = PAP$. It is well known that $H_q(\mathrm{inn}\,P) = \mathrm{id}_{H_q(G_n)}$ [1, Chap. ...]. This shows that the above diagram is commutative. Therefore $\tau_1(S_2) \subseteq \tau_1(S_1)$. Consider $R^{2(n-2)}$ as the submodule of $R^{2n}$ generated by $e_5, e_6, \dots,$
$e_{2n}$ (so $G_{n-2}$ embeds in $G_n$ as $\mathrm{diag}(I_2, I_2, G_{n-2})$). Let $L'_*$ be the complex defined as $L_*$ but with $X_{n-2} = \mathcal{IU}(R^{2(n-2)})$. Define the map of complexes $\alpha_* : L'_* \to L_*$ given by ... . Note that this is similar to the one defined in the proof of Proposition 2.6 in [8]. This gives the maps of bicomplexes ... , where $L_*$ and $F_*$ are as in the proof of Theorem 3.5 and $F'_*$ is $F_*$ regarded as a $G_{n-2}$-module, so it induces the maps of spectral sequences ... , where all three spectral sequences converge to zero. By an argument similar to the proof of 3.5, one sees that the spectral sequence $E'^1_{p,q}(n)$ is of the form ... .

Corollary 4.4. If $n - p \ge q$, then the complex ... .

Proof. This comes out of the proof of 4.3.

Theorem 4.5. Let $n \ge 1$, $l \ge 0$ be integers. Then $H_l(\mathrm{inc}) : H_l(G_n, \mathbb{Z}) \to H_l(G_{n+1}, \mathbb{Z})$ is surjective for $n \ge l+1$ and injective for $n \ge l+2$.

Proof. For $n \ge l+1$, Theorem 4.3 implies $H_{l+1}(G_{n+1}, G_n) = 0$. Here $H_{l+1}(G_{n+1}, G_n)$ is the homology of the mapping cone of the map of complexes ... , where $F$ is the $G_m$-resolution of $k$. Applying the homology long exact sequence to the short exact sequence ... , we must prove that $H_{l+1}(G_{n+1}, G_n, \mathbb{Q}/\mathbb{Z}) = 0$. Since $\mathbb{Q}/\mathbb{Z} = \bigoplus_p \varinjlim \mathbb{Z}/p^d\mathbb{Z}$ and since the homology functor commutes with the direct limit functor, it is sufficient to prove that $H_{l+1}(G_{n+1}, G_n, \mathbb{Z}/p^d\mathbb{Z}) = 0$. This can be deduced by writing the homology long exact sequence of the short exact sequence $0 \to \mathbb{Z}/p\mathbb{Z} \to \mathbb{Z}/p^d\mathbb{Z} \to \mathbb{Z}/p^{d-1}\mathbb{Z} \to 0$ and induction on $d$. Therefore $H_l(G_{n+1}, G_n, \mathbb{Z}) = 0$. The surjectivity claimed in the theorem follows from the long exact sequence ... . The proof of the other claim follows from a similar argument.

Remark 3. Theorem 4.5 gives almost a positive answer to a question asked by Sah in [11, 4.9]. It also gives a better range of stability in comparison with other results [7], [15].

It is easy to see that $B$ is a functor from the category of topological groups to the category of topological spaces. The topological space $BG^{\mathrm{top}}$ is called the classifying space of $G$ with the underlying topology. Let $BG$ be the classifying space of $G$ as a topological group with the discrete topology. By the functorial property of $B$ we have a natural map $\psi : BG \to BG^{\mathrm{top}}$. See [6] and [11] for more information in this direction.

Proof. This follows from 4.3 and 4.7.

Homology stability of unitary groups over finite fields

In this section we explain which part of the above results remains true if $R$ is a finite field, so in this section we assume that $R := F$ is a finite field.

Lemma 5.1. Let $F$ be a field different from $\mathbb{F}_2$. Then $\mathcal{U}(F^n)$ is $(n-2)$-connected, $\mathcal{U}(F^n)_w$ is $(n - |w| - 2)$-connected for every $w \in \mathcal{U}(F^n)$, and the poset $\mathcal{IU}(F^{2n})$ is $(n-2)$-connected.

Proof. The proof of the first two claims is by induction on $n$. Let $Z := \mathcal{U}(F^n)$ and $Y := \mathcal{O}(P^{n-2})$. For any $v = (v_1, \dots, v_k) \in Z \setminus Y$ there is an $i$, say $i = 1$, such that $v_i \notin F^{n-1}$. This means that the $n$-th coordinate of $v_1$ is not zero. Choose $r_i \in F$ such that $v'_i = v_i - r_i v_1 \in F^{n-1}$, $2 \le i \le k$. It is not difficult to see that ... [13, 2.13 (ii)]. To complete the proof we have to prove that $Z' := \mathcal{U}(F^n)_w$ is $(n - |w| - 2)$-connected. If $w \in Y$, then replacing $Z$ by $Z'$ in the above and using the induction assumption, one sees that $Z'$ is $(n - |w| - 2)$-connected. If $w \notin Y$, then by induction $Y \cap Z'$ is $(n - |w| - 2)$-connected and $Y \cap Z'_u$ is $(n - |w| - |u| - 2)$-connected for every $u \in Z' \setminus Y$, as we proved above. Now by [13, 2.13 (i)] the poset $Z'$ is $(n - |w| - 2)$-connected. The proof of the last claim is similar to the proof given in Remark 1(ii).
2014-10-01T00:00:00.000Z
2003-12-22T00:00:00.000
{ "year": 2003, "sha1": "61c6e649a9bc9a4bc45c9fe333ff02da86cae037", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math/0312408", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "6254302b8fc690c84138ff0052f050abea830708", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
119220771
pes2o/s2orc
v3-fos-license
Asteroid spin-axis longitudes from the Lowell Observatory database

By analyzing brightness variation with ecliptic longitude and using the Lowell Observatory photometric database, we estimate spin-axis longitudes for more than 350,000 asteroids. Hitherto, spin-axis longitude estimates have been made for fewer than 200 asteroids. We investigate longitude distributions in different dynamical groups and asteroid families. We show that asteroid spin-axis longitudes are not isotropically distributed, as previously considered. We find that the spin-axis longitude distribution for main-belt asteroids is clearly non-random, with an excess of longitudes in the interval 30°-110° and a paucity between 120°-180°. The explanation of the non-isotropic distribution is unknown at this point. Further studies have to be conducted to determine whether the shape of the distribution can be explained by observational bias, selection effects, a real physical process or some other mechanism.

Introduction

Theoretical work based on the collision history of the solar system suggests an isotropic rotational pole distribution for asteroids (Davis et al., 1989). However, observational data have already suggested otherwise. Analyses based on photometric data have indicated that many rotational poles of small asteroids (D < 30 km) seem to be directed far from the ecliptic plane, and that there exists a preferential prograde rotation, possibly of primordial origin, for large asteroids (D > 60 km) (Kryszczyńska et al., 2007; La Spina et al., 2002; Hanuš et al., 2011). A full explanation of pole depopulation near the ecliptic remains elusive, but the YORP (Yarkovsky-O'Keefe-Radzievskii-Paddack) effect has been indicated as a possible cause. Other evidence for non-isotropic spin distribution comes from the Koronis family. Based on a 10-asteroid sample and large lightcurve amplitudes, it has been suggested that the Koronis family members have their spin vectors aligned, clustered towards very low or very high obliquities, therefore preferentially presenting equatorial aspects to Earth-based observers (Slivan, 2002). The alignment has subsequently been explained (Vokrouhlický et al., 2003) by a combination of the YORP effect and resonances with Saturn. Vokrouhlický et al. (2003) found that prograde rotators should have their spin rates slowed and their spin axes cast into a slow precession and then locked into spin-orbit resonance. Asteroids in this particular equilibrium state are said to be in a so-called Slivan state. For near-Earth asteroids (NEAs), an excess of retrograde rotators has been reported (La Spina et al., 2004). Retrograde rotators are more likely to be injected into NEA orbits via the so-called Yarkovsky effect. The effect causes main-belt asteroids (MBAs) in retrograde rotation to drift towards the Sun, injecting them into resonant regions, thence to Earth-approaching orbits. In general, asteroid spins can be affected by collisional processes, close encounters with planets (Scheeres et al., 2000), tidal effects during close encounters (Richardson et al., 1998), and processes such as the YORP effect (Vokrouhlický and Capek, 2002; Rubincam, 2000; Binzel, 2003). Random collisions among asteroids cause their spin axes to be oriented in random directions (Davis et al., 1989). For small asteroids, the YORP effect is the dominant force influencing asteroid spins. YORP is, for example, responsible for the spin-up of rubble-pile asteroids, leading to the creation of binary objects (Walsh et al., 2008).
YORP also leads to changes in asteroid spin axes. Much less attention has been paid to the spin-axis longitudes, which are generally thought to follow a uniform distribution (Davis et al., 1989; De Angelis, 1995). It has been argued that precession of orbits has erased any original anisotropy in pole longitudes. De Angelis (1995) noted that no significant information can be extracted from pole longitudes, because they change due to the precession of rotation axes arising from the tidal torques exerted by the Sun and planets (Burns, 1971). The precession period is orders of magnitude shorter than the age of the solar system (Magnusson, 1986), so no information about the earliest asteroid belt can be obtained from the distribution of pole longitudes. More recent studies also suggest isotropic longitude distributions (for example, Hanuš et al., 2011; Kryszczyńska et al., 2007). In this study, we estimate spin-axis longitudes for hundreds of thousands of asteroids using the magnitude method (Magnusson, 1986) and photometric data from the Lowell Observatory database. The method is described in Sec. 2, and the Lowell Observatory database is described in Sec. 3. In Sec. 4, we discuss our results. It should be noted that we present only empirical distributions for the spin-axis longitudes and do not seek to provide possible physical or observational explanations for the shape of the distributions. Conclusions and future research are outlined in Sec. 5.

Method

We use the so-called magnitude method (Magnusson, 1986), relying on the longitude variation of the mean absolute brightness (an example is given in Fig. 1). In the absence of surface albedo features, it can be assumed that the peak absolute brightness occurs at minimum polar aspect angle; that is, when an asteroid's spin axis is most nearly pointing toward or away from the Earth. We fit a sinusoid to the brightness variations as shown in Fig. 1, and find the spin-axis longitude at the maximum of the curve. The fitted curve is

V(λ) = V0 + (ΔV/2) sin(2(λ − λ0)),   (1)

where the phase λ0, amplitude ΔV/2, and origin point V0 are fitted simultaneously using least squares, and λ is the heliocentric ecliptic orbital longitude. This simple spin-axis longitude computation method is well suited to the noisy data at hand. A two-fold ambiguity is present in the method for all objects: there are two equally likely solutions 180 degrees apart in longitude for each object, and it is not possible to identify which of the two solutions is true. Therefore the fit assumes that the mean (rotation-averaged) brightness of an asteroid as a function of ecliptic longitude is symmetric with a period of 180°, so that a symmetric solution in the range 180°-360° is also possible. This implies that the distributions of spin-axis longitudes for the 180°-360° range would look identical to those in the 0°-180° range. To test whether the fit is really symmetric with 180°, we fit both the first and second harmonics to the data, that is, we fit the function

V(λ) = V0 + (ΔV1/2) sin(λ − λ1) + (ΔV2/2) sin(2(λ − λ2)),   (2)

and check the amplitude of the first harmonic. For 98.3% of the objects, the amplitude is zero within the error bars. For the remaining 1.7% of the objects, the fit including both harmonics overfits the noisy data. The simple fit (Eq. 1) is therefore a good approximation and can be used in spin-axis longitude computation. The fit is generally well defined for asteroids exhibiting significant peak-to-peak brightness variation (> 0.15 mag), but cannot usually be reliably obtained for asteroids having smaller variation.
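A minimal sketch of the Eq. (1) fit is given below, assuming arrays of heliocentric ecliptic longitudes (degrees) and rotation-averaged brightnesses. The function name and the synthetic data are illustrative only; this is not the authors' pipeline code.

```python
# Linear least-squares fit of V(lon) = V0 + A sin(2 lon) + B cos(2 lon),
# equivalent to Eq. (1). The pure second harmonic encodes the assumed
# 180-degree symmetry; the curve maximum gives the spin-axis longitude,
# with the unavoidable 180-degree ambiguity.
import numpy as np

def fit_spin_longitude(lon_deg, brightness):
    """Return (longitude in [0, 180), peak-to-peak amplitude, mean level).

    If `brightness` is a magnitude (smaller = brighter), the brightness
    peak is the curve minimum instead, shifting the result by 90 degrees.
    """
    x = np.radians(lon_deg)
    design = np.column_stack([np.ones_like(x), np.sin(2 * x), np.cos(2 * x)])
    (v0, a, b), *_ = np.linalg.lstsq(design, brightness, rcond=None)
    # a sin(t) + b cos(t) reaches its maximum at t = atan2(a, b); t = 2 lon.
    lon0 = 0.5 * np.degrees(np.arctan2(a, b)) % 180.0
    return lon0, 2.0 * np.hypot(a, b), v0

# Synthetic check: peak-to-peak 0.3 with the maximum placed at 60 degrees.
rng = np.random.default_rng(0)
lon = rng.uniform(0.0, 360.0, 200)
vis = 1.0 + 0.15 * np.cos(2 * np.radians(lon - 60.0)) + rng.normal(0, 0.03, 200)
print(fit_spin_longitude(lon, vis))  # longitude should come out near 60
```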
Asteroids whose poles are perpendicular to the ecliptic exhibit little or no variation; those with poles directed closer to the ecliptic will have larger variation. Most of the fitted objects have their peak-to-peak mean magnitude variation between 0.15 mag and 0.45 mag. In Fig. 4, we plot the distribution of the peak-to-peak magnitude variation. Also, asteroids having small numbers of observations (fewer than 50, say) cannot usually be reliably fitted. We then estimate the spin-axis longitudes of hundreds of thousands of asteroids, creating the most extensive list of asteroid spin-axis longitudes currently known (the Poznań Observatory database (Kryszczyńska et al., 2007) comprises fewer than two hundred rotational pole solutions). In Fig. 2, we plot the spin-axis longitudes based on the heliocentric longitude brightness variation (Lowell Observatory database) versus the spin-axis longitudes from the Poznań Observatory database (Kryszczyńska et al., 2007). There is good agreement between the results from the magnitude method and estimates by other authors. Outliers near 0° and 180° arise from asteroids in the Poznań database having spin estimates from authors who do not agree with each other. Fig. 3 shows the distribution of uncertainty in spin-axis longitude with increasing numbers of observations. Improvement in the longitude fits with increasing numbers of observations is obvious. The biggest advantage of the magnitude method is its simplicity. The spin-axis longitude estimates obtained by the method can help constrain the phase space of possible asteroid spin and shape solutions in more sophisticated methods such as lightcurve inversion, especially in cases where the parameter phase space has many local minima. Kaasalainen et al. (2001) have developed convex inversion methods that, in the case of extensive observational data, converge to a global minimum. For sparse observational data, our spin-axis longitude estimates can help localize the global minimum. The estimates can be especially useful in the analyses of asteroid lightcurves expected from the upcoming large-area sky surveys (e.g., Pan-STARRS, LSST, and the Gaia mission).

Lowell Observatory photometric database

The Lowell Observatory photometric database combines orbital data (from the Lowell Observatory orbital data file maintained by EB and LHW) with photometric data from the Minor Planet Center (MPC). Most of the photometric data are of low precision (generally rounded to 0.1 mag) and low accuracy (rms magnitude uncertainties of 0.2 to 0.3 mag are typical). The MPC data comprise photometric observations from many sources, each having different systematic and random errors, sometimes time variable. The photometric data are very numerous: in the present study we have used about 47,000,000 individual, largely independent magnitude estimates. For most asteroids, there exist photometric data sampled at a variety of heliocentric longitudes, and therefore at different asteroid spin-axis aspects. We have used data from eleven observatories. Most of them have provided photometric data during the course of NEA searches, though the overwhelming majority of the data pertain to MBAs and Jupiter Trojan asteroids. The data were calibrated using accurate broad-band photometry of asteroids observed in the course of the Sloan Digital Sky Survey (SDSS) (Ivezić et al., 2001). The SDSS data were converted to the V band using transformations derived by Rodgers et al. (2006).
Because of the limited magnitude range of the SDSS data, the brightest asteroids (in practice, the first thousand numbered asteroids) and the faintest objects (mostly TNOs, that is, transneptunian objects) are not calibrated. Thus, results for bright and very faint asteroids are less reliable than those for asteroids whose brightness falls within the range of the SDSS data. A description of the data reduction and calibration can be found in Oszkiewicz et al. (2011).

Results

We present histograms of asteroid spin-axis longitudes for different dynamical groups and families. Bins were normalized to sum to unity. The dynamical classification was extracted from JPL (2011), and the asteroid family membership was derived from Nesvorný (2010). Our full sample comprises 355,926 numbered and unnumbered asteroids. The sample reduces to 18,471 asteroids having at least 50 observations and 3σ spin-axis longitude uncertainties less than 30° (which corresponds to a moderate 1σ spin-axis longitude uncertainty of 10°), which we consider a reasonable cut. The sample includes 17,160 MBAs, 98 NEAs, 121 Jupiter Trojans, 186 Mars crossers, 1 TNO, 5 Centaurs, and the members of most asteroid families (see Table 1 for the number of objects per dynamical group depending on the selection criteria). For TNOs and Centaurs, no statistical study based on such a small number of objects is possible. There might be some selection biases in our sample, such as those related to the amplitude of brightness variation (only asteroids having their rotational poles close to the ecliptic plane exhibit large enough brightness variation). However, the significance of the biases remains unknown in the present work and cannot be easily estimated. Figure 5 shows the distribution of spin-axis longitudes for MBAs. The longitude distribution for MBAs is far from uniform and shows distinct features: an excess of spin axes in the longitude interval 30°-110° (with two maxima, the first located between 0°-55° and the second, more pronounced, between 70°-110°) and a paucity between the longitudes 120°-160°. The paucity, the excess, and the second maximum are of about 3σ significance over the level expected from a random distribution and thus can be considered real. The first maximum is only 1σ above the largest dip between the two intervals and thus cannot be confirmed.

Dynamical populations

The anisotropy of longitudes for MBAs has already been suggested by La Spina et al. (2003) and Samarasinha and Karr (1998). However, that suggestion runs contrary to other authors. For example, Hanuš et al. (2011) found that the longitude distribution for MBAs shows no significant features and is very close to uniform, with the exception of asteroids smaller than 30 km. Those asteroids showed a small excess of small spin-axis longitudes, but it was thought to be a random coincidence rather than the result of a physical process. Also, Kryszczyńska et al. (2007) concluded that the dips in the longitude distributions in the regions 120°-180° and 300°-360° are only of about 1σ significance, and thus cannot be confirmed. Figure 6 shows the distribution of spin-axis longitudes for NEAs. The distribution is clearly different from that for MBAs. In Fig. 6, for NEAs, the distribution exhibits two maxima (the first between 0°-70° and the second between 110°-180°). The two maxima are, however, only about 1σ above the background, so they cannot be confirmed.
It has previously been suggested that the NEA longitude distribution exhibits two sharp maxima (Kryszczyńska et al., 2007), but the finding has not been confirmed because of the low contrast of the maxima compared to the mean background. Asteroids from the Mars-crossing population show a longitude distribution not far from uniform, with only very weak features similar to those of MBAs (see Fig. 7). The features are less pronounced than those of MBAs and only of about 1σ significance. Asteroids from the Jupiter Trojan population (Fig. 8) show features similar to the MBAs; that is, two maxima (the first located between 0°-55° and the second, more pronounced, between 70°-110°) and a minimum (between 120°-160°), both of which are also of less than 1σ significance. For the TNO and Centaur populations we cannot draw any conclusions due to small-number statistics. To test the robustness of all the distributions, we plot the longitude distributions based on different cut-offs for the numbers of observations and the 3σ longitude uncertainty. Next, we use the Kolmogorov-Smirnov (K-S) test to examine the randomness of all the distributions. The null hypothesis that the distribution being tested is uniform is rejected or accepted based on the K-S statistics and p-values. The obtained K-S statistics and p-values are listed in Table 2. If the K-S statistic is small or the p-value is high (> 0.05), then we cannot reject the hypothesis that the two distributions are the same. The null hypothesis can be clearly rejected for MBAs: both the K-S statistics are large and the p-values are small. The spin-axis longitude distribution for MBAs is therefore non-random. For NEAs and Mars crossers, the null hypothesis cannot be rejected. For Jupiter Trojans, the null hypothesis can be rejected for cases (a), (c), and (d). Condition (b) is the strictest one, and therefore we are inclined to conclude that the spin-axis longitude distribution for the Jupiter Trojans is non-random.

Table 2. K-S statistics and p-values for each group, cases (a)-(d) vs. the uniform distribution.

A clear explanation for the shape of the longitude distributions is missing. However, it is worth mentioning that the distribution for the MBAs is different from that for the NEAs. Although it cannot be shown statistically, the remaining groups (Jupiter Trojans, Mars crossers) show similar trends in their longitude distributions. Therefore a possible mechanism has to explain the lack of (or smaller) influence on the NEAs. To test whether YORP could influence the distributions, we plotted the spin-axis longitudes for three absolute-magnitude regimes: the first corresponding to an interval of H = 0-9.5 mag, that is, asteroids larger than approximately 30 km; the second corresponding to an interval of H = 9.5-11.5 mag, for asteroids with moderate sizes between 30 km and 10 km; and the third, H > 11.5 mag, corresponding to asteroids smaller than 10 km. The trend in the spin-axis longitude distribution is visible in all the regimes. YORP is therefore unlikely to be the main explanation for the shape of the observed longitude distribution. The longitude distribution for the Koronis family is unimodal, with an excess of longitudes between 60°-110°, which could be a reflection of the general trend visible for MBAs. Our estimates agree with the published Koronis spin solutions (Slivan et al., 2002; Slivan et al., 2009), except for (311) Claudia and (2953) Vysheslavia (Slivan et al., 2009; Vokrouhlický et al., 2006). For those objects, the literature values of the spin-axis longitude are 24° ± 5° and 11° ± 10°, respectively.
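The uniformity tests described above are straightforward to reproduce with standard tools. A hedged sketch follows, testing synthetic longitudes against a uniform distribution on [0°, 180°); the sample here is a placeholder, not the study data.

```python
# One-sample Kolmogorov-Smirnov test of spin-axis longitudes against
# U(0, 180), mirroring the per-group uniformity checks reported in Table 2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
longitudes = rng.uniform(0.0, 180.0, size=500)   # synthetic placeholder

# args=(loc, scale) selects the uniform distribution on [0, 180).
statistic, p_value = stats.kstest(longitudes, "uniform", args=(0.0, 180.0))
print(f"K-S statistic = {statistic:.3f}, p-value = {p_value:.3f}")
# p > 0.05: cannot reject uniformity; a small p flags a non-random distribution.
```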
Both of those objects have large numbers of observations made at various observatories: (311) Claudia has 442 observations taken at 11 different observatories (observatory codes: 689, 699, 608, 704, 644, 703, 333, 691, 1412, 683, 1696), and (2953) Vysheslavia has 658 observations made at 9 different observatories (observatory codes: 691, 699, 704, 703, 608, 1696, 645, 1412, 644). Both objects also have a mean peak-to-peak magnitude variation above 0.15 mag. Therefore both objects were considered acceptable fits; however, visual inspection of the fits shows a large scatter in the data. We consider the number of such invalid fits to be small, not affecting the overall distributions. In the future, however, some automatic testing, rather than visual inspection, could be developed to detect these sorts of cases among the large amount of data at hand.

Spin distributions for the families Vesta, Flora, and Nysa-Polana seem to follow the general trend of MBAs, that is, a paucity between 120°-160° and an excess of longitudes between 70°-110°. Most of the families for which we have at least 100 spin longitudes also follow the general shape of the MBA distribution.

Conclusions and future work

We have estimated spin-axis longitudes for hundreds of thousands of asteroids, based on the brightness variation with ecliptic longitude and using the Lowell Observatory photometric database. The number of spin-axis longitudes computed is an enormous increase over the number of previously known asteroid spin-axis longitudes. The estimated spin-axis longitudes are publicly available online on the Planetary System Research group (University of Helsinki) webpages (https://wiki.helsinki.fi/display/PSR/Planetary+System+Research+group) and on an ftp site at Lowell Observatory (ftp://ftp.lowell.edu/pub/elgb/summary.out). Based on the spin-axis longitude distributions for MBAs, we concluded that the distribution is far from uniform, with an excess at longitudes 30°-110° and a paucity between longitudes 120°-160°. The longitude anisotropy is consistent with La Spina et al. (2003) and Samarasinha and Karr (1998), and contradictory to Kryszczyńska et al. (2007) and Hanuš et al. (2011). Anisotropy of the longitude distributions was not confirmed in other dynamical groups, except for the Jupiter Trojans, which exhibit features similar to MBAs. We also investigated asteroid families. For the Koronis family, we showed that spin-axis longitudes are clustered around 60°-110°. Spin-axis distributions for most other asteroid families reflect the features visible in the MBA distribution. Explanation of the physical causes of the shape of the distributions is beyond the scope of this paper and will require extensive modeling of the YORP effect, precession and observational selection effects.

Acknowledgments

Research has been supported by the Magnus Ehrnrooth Foundation, the Academy of Finland (contract 127461), Lowell Observatory, the Polish National Science Center (grant number 2012/04/S/ST9/00022), and the Spitzer Science Center. We would like to thank David Vokrouhlický, David Nesvorný, and Alan Harris (Space Science Institute) for valuable comments and discussion. We would also like to thank our reviewers Petr Pravec and Agnieszka Kryszczyńska for insightful and detailed comments.
2019-04-13T06:54:34.164Z
2011-10-01T00:00:00.000
{ "year": 2013, "sha1": "94985c0aa99b38e6222b1fd721430bd627dc5de8", "oa_license": null, "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/maps.12230", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "68eff389aef193e01470ad6f86ca51d22dac1f8b", "s2fieldsofstudy": [ "Physics", "Geology" ], "extfieldsofstudy": [ "Geology", "Physics" ] }
54940917
pes2o/s2orc
v3-fos-license
Containerization of Grain: Emergence of a New Supply Chain Market

The containerized shipment of freight continues to grow rapidly. This development can be traced to a transformation of bulk and break-bulk service to containerization. Demand has been driven by opportunities to broaden logistical options as well as by advantageous freight rates. Logisticians and policy makers are unsure how much more bulk traffic can be converted to containerization, but the trends are evident. Of particular interest is grain. Bulk grain handlers have successfully resisted the conversion of grain shipping to containerization, except on the North American-Asian and Australian-Asian traffic lanes, where growth has been significant. This paper reviews the theoretical case for grain containerization from a logistics perspective, followed by an examination of current trends in the United States and Canada. Subsequently, the analysis considers the restrictions on and resistance to the conversion of grain from bulk shipping to containerization.

Introduction

The containerized shipment of freight has grown rapidly in recent years. A portion of this growth can be traced to the containerization of products that previously moved to export markets in bulk or in break-bulk transport. Containers offer opportunities to lower logistics costs and to broaden marketing options. As a result, increasing volumes of grain are being shipped in containers. This raises a fundamental question in the minds of logisticians and policy makers alike: how much more conversion of bulk grain traffic to containers is possible? The purpose of this article is to examine where current trends in the containerization of grain are leading. The first section of this paper sets out the logistical concepts that support the hypothesis of a modal shift of grain from bulk transport to intermodal ISO containers. This is followed by an examination of current practices in the United States and in Canada. The third section considers the restrictions and resistance to "container conversion" associated with the physical loading constraints of container vessels, the impact on railway efficiency, the utilization of port facilities, and regulations. The penultimate section compares the theoretical rationale and the practice. The conclusion offers some cautious thoughts on the further conversion of grain from bulk to container movement.

Logistics and Economic Theory

Logistics and economics concepts can be used to assess whether changes in the organization of supply chains are likely to lead to improved efficiency. In this case, the conventional bulk handling system for grain is compared to the emerging containerized system of grain handling.

Mixed systems are superior to pure systems: The benefits of a mixed system become more obvious the greater the fluctuation in the volumes handled. This leads to the added cost of service required to meet peak demand. In the Northern Hemisphere, the volume of grain entering the bulk handling system surges as the harvest commences and peaks at the end of the fall months. Volumes then decline, with a few bumps, until the next harvest. While the annual pattern of fluctuation is predictable, the maximum peak demand depends on prices and weather conditions.
The problem for a pure system, like bulk handling, is to be able to serve the peak demand profitably. The more equipment and staff that are allocated to serve the peak, the more capacity is likely to remain idle during the off-peak season. Figure 1 presents two cases: (a) a normal peak year and (b) an exceptionally large (bumper) crop year. The economic model illustrates the allocation of railcars under a free-market pricing scenario.

The bulk handling system for grain can be characterized as a high fixed-cost, network industry subject to peak-load demand. The bulk handling network of railcars, collection points and transfer systems cannot be adjusted quickly to accommodate changes in harvest demand. In Figure 1(a), the peak-load demand is accommodated by letting prices rise to allocate the available supply. During the off-peak, prices fall to the point where freight rates cover only short-run marginal costs. However, demand may be insufficient to utilize all the available capacity. Consequently, a number of the railcars will sit empty and unused during the off-peak season.

Figure 1(b) illustrates the case when an unusually large crop creates a demand surge. The bid prices for railcars rise much higher, but the demand cannot be satisfied in excess of what the existing infrastructure can accommodate. In this circumstance, some of the peak demand is transferred to the off-peak period, whose demand curve also shifts to the right. Fewer railcars remain empty during the off-peak, but for some shippers the market opportunity may be lost by the time affordable railcars become available.

The problem illustrated by Figure 1 is that pure systems are negatively affected by instability. In a normal year, the carriers are burdened with excess capacity, while in the event of a large crop, shippers face much higher prices during the peak demand period. A mixed system with a containerized option could lower total cost and address the demand surge that occurs immediately after the harvest [1]. Containers would enable the bulk system to achieve higher utilization over the course of the year because, when not needed for grain, the containers would be used in other carriage. Containers could also enable the bulk handling system to move large volumes of grain in fewer separations during the peak demand by removing small segregations from the system. In addition, the container mode offers the ability for small-lot buyers to source at origin, thereby eliminating in-transit storage and handling costs as well as the overheads and profits taken by mid-stream brokers and grain dealers.

Variety exacts a price: Containerization of grain is not expected to replace the bulk handling system for lower-value or generic products. Oilseeds and feed grains do not require segregation to maintain purity because they are going to be further processed in systems that have broad quality tolerances. The principal concern of oilseed crushing plants and cattle feedlots is handling cost. In cases where a bulk handling system can achieve acceptable quality consistency and economies of size, it will continue to dominate.

As buyer sophistication increases and the varietal differences provided by crop breeders expand, the number of products entering the grain handling system is amplified. An example of export crop diversification is the special crops (peas, beans, lentils, etc.) that have emerged. Their shipment size and handling requirements make containers an ideal shipping method.
The bulk handling system begins to lose its cost advantage when segregation becomes important. Increased crop varieties multiply the number of bins required to maintain product integrity. This is illustrated in Figure 2. The vertical axis represents the cost of grain bins; the horizontal axis presents the number of grain separations. In terms of volume, economies of size make bulk storage bins less expensive than containers. Bulk grain is transferred and stored at the ports in bins that are approximately 40 feet in diameter and upwards of 80 feet tall. However, the more that bulk storage is divided up to create separate bins, the more its average cost increases. The bulk storage cost function is depicted as a series of incremental steps, while the average cost of containers is constant regardless of the number employed.

Bulk shipping suffers diseconomies of scope with respect to handling small quantities of crops with specific attributes. Containers assure that specialized products, like pulses and organic wheat, can move with their specific identity intact.

Variety separation presents another case in which a mixed system is better than a pure system. Other agricultural markets operate with "bulk sales" of generic quality at low prices and segregated sales of precise quality at very high prices. The beverage market (wine and whiskey) operates this way, but this is less common in the grain market. Some notable exceptions are organic wheat and soybeans for Japanese noodle production. In these cases the product is containerized. The ability to differentiate the product allows producers to obtain higher prices.

Delayed commitment: Profits can be maximized by shipping products as far as possible before committing to the final product form [2]. This is done in many industries, the most famous example being paint, which is all shipped white and tinted after the customer has made a decision to purchase. In the case of grain, the ability to obtain particular milling attributes is lost as soon as the commodity enters the bulk handling system. While it is true that Canadian wheat has a reputation for high average quality, this sometimes comes at the sacrifice of blending very high quality with lesser quality. Some foreign millers might be happier to buy the very best wheat and blend it with their local product to obtain the desired flour quality.

Total costs matter: Many logistical systems fall into the trap of sub-optimization, in which too much focus is placed on one cost component at the expense of the total movement cost. For example, shippers that compare only the costs of bulk shipping to containers could easily conclude that the latter can never compete, except perhaps in backhaul traffic lanes. However, the costs of storage, handling, inventory carrying and product damage associated with bulk handling need to be considered as part of the total logistics package. This is especially true for small-lot buyers that have to go through intermediaries or purchase storage for larger quantities than they wish. With containers, small-lot purchasers have the opportunity to buy at source, as opposed to buying from a local broker, where the broker's storage, overhead and profits get added to the price.
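One component of that total-cost view, the cost of financing inventory in the pipeline, is easy to quantify. The grain value and financing rate below are illustrative assumptions; the transit times echo the figures discussed in the next subsection.

```python
# Illustrative inventory carrying cost of grain in the logistics pipeline.
# Grain value and financing rate are assumptions, not data from the paper.

GRAIN_VALUE_PER_T = 250.0  # $/tonne, assumed
ANNUAL_RATE = 0.08         # cost of capital, assumed

def carrying_cost_per_tonne(days_in_pipeline):
    return GRAIN_VALUE_PER_T * ANNUAL_RATE * days_in_pipeline / 365

print(f"Bulk, 50 days in pipeline: ${carrying_cost_per_tonne(50):.2f}/t")
print(f"Container, 10 days:        ${carrying_cost_per_tonne(10):.2f}/t")
```

Under these assumptions the faster container pipeline saves a little over $2 per tonne in financing cost alone, before any storage or handling savings.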
Lean thinking: The recognition that inventory in excess of immediate needs is a waste led to just-in-time logistics systems. Bulk handling systems have large pipeline inventories because these quantities are required to load unit trains and bulk ships. In Canada and the US, grain storage at country elevators and port terminals duplicates storage that exists on farms. Moreover, grain can be held in these commercial storages for relatively long periods. Figure 3 presents data for the Canadian bulk handling system. While the system has improved over this 12-year period, grain remains in commercial storage and transit for up to 50 days before export. Containers move more quickly through the logistical system, meaning that less inventory in transit needs to be financed [3]. The 10 days that bulk grain spends in transit by rail is likely greater than the total time for a container shipment from farm to port.

Quality counts: Given two options in which one method risks a reduction in the quality of the shipment, the higher-quality transport will generally earn a premium. For certain commodities such as peas and lentils, the bulk handling system has indirect costs that containerization avoids. Continuous handling causes breakage that opens the commodity to quality deterioration and insect damage. Delicate products require handling on "flat belts" or in bags. Containerization eliminates damage that significantly reduces the product's value or makes it potentially unsalable. Taken more broadly, the reduction of risk can also mean avoidance of inadvertent blending. As the ability to detect small percentages of genetic attributes improves, the risk of detecting cargo cross-contamination increases. Although the number of cargoes refused because of GMOs or other variety mixing is small, containers guarantee that the product is traceable from origin to destination.

Containerization in Practice

Containerization of freight is the favored means of logistics for the global trade of consumer and industrial products. Once a high-cost option, events over the past decade and a half have strengthened the economic viability of this shipping alternative. When the containerization of grain was discussed 15 years ago, the largest container ship was in the 4,500 TEU range. Such ships now seem mid-sized: 8,000 TEU vessels are common, and ships as large as 18,000 TEUs are entering service.

Economies of size in ocean shipping depend on vessel utilization. A downturn in economic growth since 2009, combined with increasing capacity from newly built vessels, has left the shipping lines with excess slots to fill. Consequently, North American shippers seeking to move their products to the container's originating point can often gain access to the low-cost backhaul capacity inherent in its repositioning. Shippers can now potentially arbitrage and lower their freight costs while providing an alternative logistics scenario to overseas buyers [4].

The examination of grain containerization practice is divided between the United States and Canada. Although the rail and port facilities are comparable in the two countries, differences in crop production, container traffic patterns and economic regulation affect the containerized grain supply chain (CGSC).
United States CGSC

Prior to 2003, containers were mainly restricted to specialty crops, which would not fill a hold in a ship, and feed ingredients like corn gluten meal and bone and meat meal. The containerization of grain in the US began to pick up significantly in 2004 because of the spread that emerged between backhaul container rates and bulk shipping. The rates for bulk shipping increased because of the strong demand for scrap metal.

The North American consumer demand for imported Asia-Pacific manufactured goods also surged between 2003 and 2008. Consequently, freight rates for containers on the westbound traffic lanes fell as the volume of empty containers returning to Asia became excessive. During this period, grain could be shipped in containers from Chicago at $35-40 per ton, while bulk rates for grain at the Gulf of Mexico were $60-70 per ton. This provided the incentive for commercial bulk grain shippers to begin arbitraging the freight and moving grain exports in containers.

The Baltic Exchange Panamax Index (BPI), shown in Figure 4, is a standard indicator of bulk shipping rates worldwide. Driven by the comparative shortage of bulk vessels in the face of the growing demands imposed by a vibrant Chinese economy, prices shot to all-time highs. From late 2003 through to the autumn of 2008, bulk ocean shipping rates, as represented by the BPI, climbed by over 400%. As the economic downturn in late 2008 began to grip the global economy, bulk ocean vessel rates soon fell, this time to record low levels, where they continue to languish to this day.

The current state of grain containerization in the United States is observed in the USDA AMS report: "In 2013, containers were used to transport 10 percent of total US waterborne grain exports, up 2 percentage points from 2012. Approximately 61 percent of US waterborne grain exports in 2013 went to Asia, of which 16 percent were moved in containers. Asia is the top destination for US containerized grain exports - 97 percent in 2013." [5] This growth continued in 2014, although it was reduced by congestion and labor disruption on the US west coast. There has also been a GMO-related dispute between China and the US regarding a corn variety, but this has now ended. China, Taiwan, Vietnam and Indonesia account for about two-thirds of these shipments, but their individual shares vary by month.

Grain is transloaded into containers on the east and west coasts of the United States and at a few interior collection points where excess empty containers accumulate. A site visit to Chicago in 2012 revealed that three facilities accept truckloads of corn, soybeans and dry distillers grains (DDG, derived from ethanol plants) for transloading into containers. Two of the transloaders are located adjacent to the CenterPoint Intermodal Centre container port, while the third is located at the CN container yard.

DDGs account for about half the grain exported in containers from the United States. This is explained by the surplus of DDG production arising from the ethanol fuel mandate and the difficulty of shipping DDG in bulk. Soybeans are the next largest containerized export at 24 percent. The balance of containerized grain exports in 2014 was corn (8 percent), animal feed (6 percent), residues of starch manufacturing (4 percent) and other crops (8 percent) [5].
Aside from the movement of identity-preserved products, like soybeans for Japan and some special crops, containerization is treated as a substitute for bulk shipping. DDG, corn and soybeans are transloaded into containers without liners and shipped. Some concern is expressed about the potential for cross-contamination from prior shipments in the containers, but when the end use is livestock feed, and the amount of potential contamination is small in any case, the risk is considered to be minimal.

The motive for using containers is almost universally identified as improved logistical economics. During the boom years prior to the recession of 2009, the gap between available backhaul container rates and bulk shipping attracted increasing volumes. Since 2009, bulk shipping rates have collapsed and the container lines have become more alert to any price differences. Grain in containers must compete with waste paper, scrap metal and, on the west coast, lumber and logs. Grain may be more valuable, but these other commodities can be forced to pay more because they have no competition from bulk shipping.

When asked whether foreign receivers are willing to pay a premium for the higher quality received in containers, the answer of the transloaders is "generally no". Buyers acknowledge that quality is better, and they like this aspect of containers. They are willing to pay a small premium for containers of certain products (e.g., Number 1 Soybeans), but not for ordinary grain.

In terms of source-loading inland, containerization works best at gateway locations where surplus empty backhaul containers accumulate, notably Chicago, but containers also compete with conventional bulk at Memphis and Kansas City. Most grain transloaders are located at the ports. Inland shippers away from the gateways that would like to source a container have to pay a premium price for repositioning containers to their locations.

Inbound merchandise transloading practices encourage the loading of grain in containers at the ports. Asian import logistics costs can be reduced by transloading sea containers into 53-foot domestic containers, or 53-foot tractor-trailers, at the North American Pacific coast for shipment to inland distribution warehouses. The ratios are significant because the lading from three 40-foot or six 20-foot containers can be moved in two 53-foot domestic units (subject to product density). As a result, more sea containers remain at the coast. This reinforces the logic of moving grain in bulk to the coast for transloading into containers.

In 2012, the Union Pacific (UP) railroad initiated a new "Plant-to-Port" transload service for grain and grain products at a facility in Yermo, California. A unit train of grain is moved to the transloading facility, where it is met by a unit train of empty containers from the Port of Los Angeles. After transloading, the containers are returned to the port for export shipment. Rates include rail shipment, transloading to a 40-foot container, intermodal loading and transport to the Ports of Los Angeles or Long Beach. In the case of DDGs, rates also include USDA inspection. However, shipments must be made in unit-train quantities of 80 or 100 carloads with a maximum average loading weight of 96 tons per covered hopper car.
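The scale of such an operation follows directly from the quoted train parameters. In the back-of-envelope sketch below, the ~25-tonne payload per 40-foot container is an assumption for illustration (grain loads weigh out before they cube out).

```python
# Rough scale of a "Plant-to-Port" unit-train transload, using the train
# parameters quoted above. The ~25 t payload per 40-foot container is an
# assumption for illustration only.

CARS_PER_TRAIN = 100   # covered hopper cars (80 or 100 quoted)
TONNES_PER_CAR = 96    # maximum average loading weight quoted
TONNES_PER_FEU = 25    # assumed grain payload of a 40-foot container

train_tonnes = CARS_PER_TRAIN * TONNES_PER_CAR
containers = train_tonnes / TONNES_PER_FEU
print(f"{train_tonnes:,} tonnes per train = roughly {containers:,.0f} "
      "forty-foot containers to transload")
```

Under these assumptions, a single 100-car train feeds nearly 400 export containers, which explains why the service is tied to unit-train quantities.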
The competition between source-loading at Chicago and sending unit trains to the UP facility is a question of backhaul rates. The Chicago transloaders retained a rate advantage in 2013 because container backhaul rates were lower than the equivalent hopper-car tariff. Smaller-lot shippers would also favor source-loading of containers at Chicago.

Canada: CGSC

The Canadian experience with the containerization of grain differs from that of the US in several respects. While cereal grains (wheat and barley) and canola make up almost 80% of the export market, in an effort to diversify, Western Canadian farmers have embraced new field crops like red and green lentils, yellow peas, mustard and canary seed. From small beginnings, these "special crops" have grown to represent almost 20% of the crop mix, with cleaning and processing plants appearing across the Prairies. This has led to increased agricultural research directed at developing broader varieties with higher production yields. As a result, the seeded area of these crops now averages 5 million to 7 million acres annually in Saskatchewan alone.

Institutional arrangements are also different in Canada. Until the 2012/13 crop year, cereal crop exports, which represent over 60% of Canadian production, were under the monopoly control of the Canadian Wheat Board (CWB). The CWB focused on large customers and bulk shipping. While the CWB would deliver in containers at the customer's request, this was not a marketing practice that it actively promoted. The situation was similar in Australia until 2007, when the Australian Wheat Board's (AWB) power to license wheat exports in containers was removed. Since 2007, Australian wheat exports in containers have grown from 500,000 tons to 2.7 million tons by 2012 [6].

The rise in bulk ocean freight rates in the 2005-2009 period drove the conversion of bulk grain shipments to containerized movement. To illustrate, consider the following model, which uses actual rates. The total cost used represents that of the logistics chain components. In the case of bulk grain, it can be expressed as the steps involved in moving grain from the country to a destination port. This approach allows for a comparison with an alternate containerized movement.

Table 1 illustrates the component costs of a bulk movement of grain in Canada under three different scenarios. In each, all costs remain constant with the exception of the ocean freight:

• Scenario A portrays ocean freight when the Baltic Exchange Panamax Index (BPI) is at a very low point, with rates for a Panamax-size vessel (approximately 60,000 tonne capacity) falling in the $8,000-per-day range. This is reflective of current market conditions.
• Scenario B portrays ocean freight when the BPI is at a moderate or "normal" point, with rates for Panamax-sized vessels being in the range of $25,000 per day.
• Scenario C portrays ocean freight when the BPI is at a high point, such as was experienced in the period immediately before the economic crisis of 2008, when vessel rates soared to over $75,000 per day.

As noted above, the period leading up to early 2008 saw the logistical cost of moving grain increase by over 21% from the "normal" level. While this provides some insight into the cost of large-lot movements of 60,000 tonnes or more, the impact on smaller movements was even greater. This led many in the grain industry to look for alternatives, particularly when smaller-lot volumes were being traded.
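The per-tonne effect of those day rates can be approximated with back-of-envelope arithmetic. The 30-day voyage length below is an assumption for illustration, and charter day rates exclude fuel and port charges, so these figures understate the full ocean freight captured in Table 1.

```python
# Back-of-envelope vessel cost per tonne implied by the Panamax day rates in
# the three scenarios. The 30-day voyage is an assumption; charter day rates
# exclude fuel and port charges, so Table 1's totals are higher.

CARGO_TONNES = 60_000
VOYAGE_DAYS = 30  # assumed

for scenario, day_rate in [("A (trough)", 8_000),
                           ("B (normal)", 25_000),
                           ("C (peak)", 75_000)]:
    cost = day_rate * VOYAGE_DAYS / CARGO_TONNES
    print(f"Scenario {scenario}: ${cost:.2f}/tonne of vessel time")
```

Even this rough calculation shows the vessel-time component roughly tripling from the normal to the peak scenario, which is the swing that drove shippers toward alternatives.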
Table 2 portrays a similar set of cost scenarios for the movement of grain in containers. In this model, both rail and ocean freight rates fluctuate. Between 2005 and 2008, container rates fell while bulk rates rose. This figured heavily in the subsequent decisions made by grain logisticians.

• Scenario D portrays backhaul container and rail rates in the period prior to 2005, when both the railways and container shipping lines priced their services with an eye towards building volumes and establishing the foundations of a potential backhaul container business.
• Scenario E portrays ocean freight in the period after 2008, as both the railways and shipping lines adjusted rates to a level that secured the volumes they could adequately handle. This best reflects the situation being experienced at the time this paper is written.
• Scenario F portrays ocean freight in the period after 2005, as both the railways and shipping lines experienced unusually high volumes and began to look for ways to optimize asset utilization. These prices also appropriately reflected market demand, as bulk rates had soared between 2005 and 2008.

The situation in the bulk freight market as seen in Scenario C ($85.11 per tonne) and in the container freight market as seen in Scenario D ($68.04 per tonne) represents prices leading up to the summer of 2008. This differential of roughly $17 per tonne led many logistics managers in the grain industry to explore and experiment with a conversion of some typically bulk movements to container. When the economic collapse of 2008 pushed bulk freight rates from an all-time high down to abnormally low levels, rates as seen in Scenario A ($65.01 per tonne) became the norm, while at the same time container rates rose to the level seen in Scenario E. During that period, the predominant area of growth was in special crops and, in particular, pulses. This new modal choice worked well with global markets looking to purchase Canadian pulse products in small-lot volumes.

The cost differentials were short-lived, though, and since 2009 bulk rates have fallen. While some traffic reverted to bulk in response to the lower cost, much continues to move by container, with shippers continuing to take advantage of multi-modal alternatives.

The most prominent multi-modal option is characterized by the use of transload facilities at the ports of Montreal and Vancouver, which have created a competitive cost structure by combining inbound hopper-car movements with outbound container movements to final destination.
Figure 5 presents grain export data for Port Metro Vancouver (PMV) by mode of transport from 2000 to 2014. Containerization's share varied between 9 and 19 percent in the first decade. The percentage of grain exports in containers through PMV has varied, but the total volumes have gradually increased. This is consistent with the comments of transloaders in the United States, who observe that the shipping lines now price backhaul container rates for grain with an eye to bulk shipping rates. Many Canadian special crops, e.g. peas, mustard, etc., can go in bulk to the large importers using "soft handling" technology, which refers to the use of flat belts and maximum dropping distances. Special-crop shipments through PMV are reported for both bulk and container exports. Figure 6 presents the data for the years 2000 to 2014. Containerized shipping increased significantly during the years when bulk shipping rates were rising rapidly. Following the recession, when bulk shipping rates fell dramatically, the shares in containers and bulk reversed, but the container share appears to be on the rise again.

Restraints on the Containerization of Grain

A number of additional barriers to a large-scale conversion of grain to containerized movements are identified by industry stakeholders.

Inspection and Documentation Costs: Transaction costs are an important advantage for bulk handling. A 10,000-ton shipment requires the same amount of paperwork (letter of credit, B13, ocean bill of lading) as a single container. When the bulk rates are less per ton than shipping in containers, the economics favor large shipments that are split up at destination. When the bulk rates move up, container shipping increases because so many more buyers become accessible.

The increase in the number of buyers intensifies competition. A bulk shipment in a Panamax ship may be handled by 5 or 6 large import buyers, who split up the cargo to supply many smaller domestic buyers. When the product goes in containers, the number of buyers available expands to hundreds. This creates opportunities to establish niche markets and form new loyalties.

Inspection and grading costs, like the costs of transactions, favor conventional bulk handling. Grading is redundant for containerized grain because it is never mixed and can be traced back to its origin. The reason for grading is generally that the buyer and seller wish a third party to adjudicate the quality. Depending on the number of containers, inspections in Canada cost approximately $100 per 3.5 containers. In the US, the inspection fees are $1.50 to $2 per ton. This is about 10 times more than the equivalent inspection costs for bulk shipping.

Cargo Density Considerations: The most significant driving factor in the loading of bulk commodities into ocean containers is the loading capability of the container vessels themselves. The operational requirements of any transportation service provider dictate the maintenance of balanced equipment flows between a variety of origins and destinations. This ensures that adequate amounts of equipment are in position at the locations where market demand calls for them. The common objective of container vessel operators is to make each ocean crossing with as many containers as possible, preferably filling 100% of the vessel's container slots.
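This density constraint reduces to simple arithmetic. The sketch below uses the vessel and payload figures quoted in the following paragraphs; small differences from the quoted figure of 2,140 loaded TEUs arise from rounding the approximate per-container weights.

```python
# Load factor of a container ship carrying only grain-laden boxes, using the
# figures quoted in the text: 5,000 slots, ~49,000 t maximum cargo weight,
# ~21 t per TEU for typical grain and ~23 t for wheat.

SLOTS_TEU = 5_000
MAX_CARGO_TONNES = 49_000

def grain_load_factor(tonnes_per_teu):
    loaded = int(MAX_CARGO_TONNES / tonnes_per_teu)  # vessel "weighs out" here
    return loaded, loaded / SLOTS_TEU, (SLOTS_TEU - loaded) / loaded

for label, t in [("grain (21 t/TEU)", 21), ("wheat (23 t/TEU)", 23)]:
    loaded, lf, empties = grain_load_factor(t)
    print(f"{label}: {loaded} loaded TEUs, load factor {lf:.0%}, "
          f"{empties:.2f} empty TEUs per loaded TEU")
```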
A typical 5,000 TEU container vessel has a maximum gross carrying capacity of approximately 49,000 tonnes, or approximately 9.8-10.5 tonnes per TEU. This is a function of the vessel's buoyancy and carrying capability. This was confirmed in a review of the ten largest container vessels in the global fleet. A bulk carrier of roughly equal size and dimensions would carry in excess of 65,000 tonnes. Much of the reason for this relates to the carrying capacity lost to the structure within a container vessel that is required to hold containers in a fixed position, sometimes referred to as "slots". Based on the dimensions of a standard 20-foot container, these ships can accommodate a maximum per-cubic-foot loading of approximately 14.3 pounds per cubic foot (pcf).

The challenge with loading bulk commodities into containers is that their densities are greater than the average lading threshold capability of 14.3 pcf. To use industry vernacular, a container ship of grain would "weigh out" before it "cubed out".

A standard twenty-foot container typically accommodates about 21 tonnes of grain, often filling much of the available cargo space. At the higher end of this range is wheat, which normally weighs in at an average of 23 tonnes per container. If it is assumed that a 5,000-TEU container ship is available to load, then the weight profile of these loaded containers would only permit 2,140 to be taken aboard; a load factor of just under 43%. Although the remainder of the ship's container slots could be used to move empty containers, its ability to take on additional loaded containers has effectively been reached. Ultimately, 1.34 empty TEUs would accompany every TEU loaded with wheat. The calculations for a typical 5,000-TEU vessel are as follows:

loaded TEUs = 49,000 t ÷ ~23 t per loaded TEU = approx. 2,140
load factor = 2,140 ÷ 5,000 slots = approx. 43%
empty TEUs per loaded TEU = (5,000 - 2,140) ÷ 2,140 = approx. 1.34

The data in Table 3 provide an indication of the actual operating and loading practices of container lines, and show that outbound movements are typically 66% heavier than those on the inbound side, and loaded to about 76% of the maximum allowable weight. While the companion movement of less dense products or commodities can help mitigate the operational issues that arise from moving high-density traffic in containers, an increase in volumes of the latter would diminish the number of slots a vessel could make available for backhaul movements of grain.

Discussions with container terminal personnel indicated that, in practice, container lines balance the heavier loaded containers with empty or lighter loaded ones, leaving heavier traffic behind in order to ensure a proper balance of movement and a safely loaded vessel. The traffic left behind places increased pressure on the storage capacity of port terminals that are already constrained, as well as adding costs in the form of storage and rebilling fees.

Country and port terminal asset investments: The physical layout of Canada's major ports is such that land on tidewater is always at a premium and comes at a high cost. Grain companies, railways and the government have made significant capital investments in bulk handling infrastructure, estimated to exceed $5 billion in Canada. This includes the country and port terminal network, the hopper car fleet, and the processes that allow them to function.
It is crucial that the utilization of that space be managed in the most efficient and effective manner possible [7]. While it could be possible to convert or adapt these facilities to load containers, doing so could reduce their utilization for bulk movements. Certainly, the sunk investment in bulk handling facilities could be the largest single financial barrier to the conversion of exports into containers.

Railway efficiency: In comparing the merits of container versus bulk movement, the predominant difference between the two approaches is the volume capability of the different kinds of train service. For comparison purposes, a typical container or bulk grain train with a length of 6,000 feet shows a considerable difference in the amount of lading it is able to carry. A container train could carry approximately 450 TEUs with an average lading weight of approximately 15.9 tonnes each, or a total of 7,800 tonnes per train. A bulk grain train can carry in excess of 10,300 tonnes, some 32% more. This differential implies that the average per-tonne cost of moving grain in containers is higher than that of moving it in bulk, but of course this does not consider the railways' round-trip revenues. All covered hopper cars return empty to the inland origin, while containers are more likely to carry freight in both directions. Source-loading of containers in Western Canada is more expensive than transloading at the port of Vancouver because of the empty-container repositioning charges. Although shippers prefer to load at source because of greater control over damage, security, etc., the costs are unfavorable [8]. This is explained mainly by freight-rate regulations that discourage containerization.

Canadian Transportation Regulations: The Government of Canada has interceded in the rail transport of grain since the Crow's Nest Pass freight rates came into effect in 1897. In 1926, these fixed rates per ton-mile were extended to all rail carriers and made statutory. After 50 years, the Statutory Freight Rates were deemed non-compensatory, and the railways faced the significant challenge of renewing plant and resources. Consequently, the rate structure was replaced by a subsidized maximum freight rate under the Western Grain Transportation Act in 1984 [9]. Finally, the grain transportation subsidy was abolished in 1995, and for five years farmers paid a regulated maximum freight rate. In 2001, the Maximum Grain Revenue Entitlement (MRE) [10] was introduced for the Canadian National (CN) and Canadian Pacific (CP) railways to replace maximum freight-rate regulation. Under the MRE formula, the railways are allowed flexibility in pricing, with a limitation on total revenues. The MRE is subject to an annual increase in the average rate per tonne through the calculation of an index of input price changes specific to the rail industry.

When the MRE was established in 2001, railway revenues were based on a costing review completed in 1992, when no grain was moving in containers. This original cost base was adjusted in 1999 in anticipation of the implementation of the MRE in 2001. The revenues and volumes of grain moved in containers are included with the bulk grain volumes and revenues.
The situation is confusing, and data are not publicly available to determine the degree to which the MRE may discriminate against containerization in favor of bulk shipping. The railways' costs are higher for container movements, so the rates charged for container movements must be higher, too. As a consequence, this could have an impact on how the railways choose to price movements in containers and allocate equipment to this type of movement.

Reconciling Theory and Practice

Total costs matter according to theory, but in practice it appears that only the differential between bulk and container shipping rates is needed to shift grain from one mode to the other. Quality is appreciated, but the driver is price. With the world population at 7 billion, one size apparently fits all for the majority of the international grain trade. Better quality is nice, but the marketplace may not be willing to pay higher logistical costs for the sake of quality.

The impetus for the conversion from bulk to containerization was largely initiated by an aberration in bulk shipping rates in the period between 2005 and 2008. This was not necessarily a negative event, because it provided a much-needed boost for some commodities, such as pulse crops, to gain a foothold in the global marketplace. If the containerization of grain has passed a "tipping point", then its share could again grow at the expense of conventional bulk once bulk ocean shipping rates recover.

To the extent that premiums for quality are available, the driver is risk. US grain shippers note that they can receive 10 to 20 percent price premiums over Brazilian soybeans because the grading and inspection system guarantees a better quality perception. High food-grade soybeans put through the conventional bulk system can be mixed with lower-grade soybeans. Consequently, food-grade soybeans are shipped in containers. The same scenario can be seen with the marketing of Canadian wheat, which is recognized for its superior milling quality.

Delayed commitment can forge new traffic patterns. In China, the containerization of feed is a means of addressing two problems. Space for intensive livestock production is becoming scarce in the coastal provinces. This is increasing the desire to move inland. At the same time, Chinese manufacturers want to move production to the interior provinces, where labor is less expensive. The manufacturers need empty containers to ship out exports, and the inbound delivery of feed in containers solves their repositioning problem. This could make feed a backhaul shipment all the way from the interior of North America to the interior of China.

Variety exacts its price, but the documentation and inspection costs are higher for containers than for the conventional bulk supply chain. This may also apply to traceability. In Canada, the system of Kernel Visual Distinguishability for wheat has been replaced by a certificate system. The certificate system seems to be operationally successful in the bulk system. For bagged products and small-volume shipments like organic wheat, however, containers offer lower overall logistics costs. This also applies to feed ingredients. DDGs are described as dusty, smelly and in volumes too small to fill a ship's hold. Ultimately, it is the buyers' terms and level of risk tolerance that will determine the value these systems will derive from the market.
While the factors described above will place a ceiling on the growth of containerization for grain, there are two specific areas that should be expected to experience continued growth:

• As markets open in the grain industry for more identity-preserved products, there will be a demand for smaller, better-controlled logistics solutions, and the most effective means of accommodating this is through containerization.
• The most prevalent area of growth continues to be the special-crops market, pulses in particular, where sales are typically made in lot sizes of less than 10,000 tonnes that are not conducive to bulk shipment.

Mixed systems are superior to pure systems. The availability of empty backhaul containers presents an opportunity to lower the total cost of moving grain to some export markets, as well as to gain access to smaller, niche markets. Where empty containers are in surplus at inland locations, they are used, even if they just substitute for conventional bulk shipments of corn and soybeans. Where containers are available for backhaul loads at the ports, the railways move bulk hopper cars of grain to the coasts for transloading into containers. At this point, neither the buyers nor the sellers may be maximizing the full benefits of containerization, but they are certainly seizing the opportunities to save money.

The largest inhibitor of significant conversion, though, is the density of bulk products, including grain, which detracts from the number of loaded containers that can be safely handled aboard a container ship. With a potential load factor of 43%, the adverse impact on vessel productivity would be severe. Further, the relatively low value of these export products is likely not sufficient to support higher freight rates. Based on current actual average weights, there is room for continued growth of containerized bulk products, but it is strictly limited by the amount of capacity made available through imported goods.

Future of Grain Containerization

A considerable amount of discussion has been dedicated to the potential conversion of bulk resource and agricultural exports to containers. It is argued that grain products could be readily converted to containerized freight because the backhaul direction for imported containers (east to west) corresponds to the head-haul direction for export grain moving in hopper cars to port terminals on the west coast. The concept would see grain loaded in the country using the empty containers that are flowing westward instead of hopper cars, thereby shipping export grain overseas at backhaul rates. It is believed that the conversion of grain from bulk to container would then balance the movement of containers and reduce the requirements for hopper cars, and the empty return movement they incur on each trip.

Has the containerization of grain reached a point of maturity, to be characterized from now on by slow growth, or is a rise to a new level of container use only waiting for the next cycle of high bulk shipping rates to trigger increasing volumes? No definitive answer is possible at this time. The success of grain containerization is highly dependent on backhaul freight rates, which is why shipments to Asia account for most of the volume. Container shipments to Europe, South America and Africa are generally only made for special crops that require higher-quality handling. However, the direct substitution of bulk in containers for conventional bulk suggests that only the differential in shipping rates restricts the use of more containers.
The conventional bulk handling system is very mature. While some extra efficiency might yet be found, it is difficult to improve on unit trains and existing material handling systems. To the extent that improvements are possible in the bulk handling system, the capital barriers are significant. In contrast, the barrier to entry for transloading containers is low. The technology is simple and an efficient scale is easily reached. Any significant profit incentive is going to attract new entrants and more locations for transloading containers.

There has been a significant shift away from the break-bulk shipping practices of old in favor of containers, and more recently, bulk shippers are viewing containers as an option to meet certain logistical demands and efficiencies. The average size of container ships is continuing to grow, and the maximum size may not yet have been reached. While bulk freighters enjoy economies of size, the diseconomies of inventory holding may augur against larger shipments in some markets. The availability of low-cost communications and lean logistics practices favors the just-in-time inventory management strategies that containerization offers. At the present time, the higher unit costs of transactions and inspections of containerized grain give the bulk handling system some protection. It seems only a matter of time, however, before a new transaction system is developed for containerized grain that reduces these costs.

Traceability and identity preservation are desirable marketing features that are difficult for bulk handling systems to guarantee. Containerization provides a form of quality assurance at only a marginally higher cost, and the product is not re-handled until it reaches the import buyer. This makes milling wheat a desirable commodity for containerization when the buyer is looking for certain quality attributes such as seed variety, gluten and protein content. It is noted that Australia saw an upsurge in the volume of wheat moving in containers shortly after the removal of its monopoly marketing board, as smaller niche markets became open to exporters. It is conceivable that Canadian wheat exports could follow a similar path now that the monopoly powers of the CWB have been removed.

As global demands place increased importance on the traceability and security of the entire food chain, the necessity of guarding and preserving the identity of grain used in the human food chain may become critical. Containerization of grain products from North American markets can provide the kind of protection within the supply chain that global buyers of the future will strive for, and this will likely result in an increased demand for the use of this mode.

Figure 1. Peak-load demand for covered grain hopper cars.
Figure 2. Economies of scope in the handling of multiple grain segregations.
Figure 3. Average days spent in transit by grain in the Canadian bulk handling system, by quarter, 2001-02 to 2012-13.
Figure 6. Exports of special crops by container and in bulk, via Vancouver: 2000-2014.
Table 1. Costs associated with the movement of Canadian grain in bulk freighters.
Table 2. Costs associated with the movement of Canadian export grain in containers.
Effects of different anesthesia methods on postoperative transient neurological syndrome in patients with lumbar disc herniation

The objective of the present study was to investigate the effects of different anesthesia methods on postoperative transient neurologic syndrome (TNS) in patients with lumbar disc herniation (LDH). Ninety-six patients with LDH were selected from November 2015 to October 2016 in Cangzhou Central Hospital. All patients were treated with percutaneous transforaminal endoscopic discectomy. The patients were randomly divided into the control group and the observation group, with 48 patients each. Combined spinal-epidural anesthesia was performed for patients in the control group, while epidural anesthesia was applied in the observation group. The levels of T lymphocyte subsets (CD4+ and CD8+) and inflammatory factors (IL-2 and TNF-α) were measured and compared before and 1 week after surgery. The incidence rate of TNS within 1 week after surgery was compared between the two groups. The Fugl-Meyer Assessment was used to evaluate lower limb motor function and sensory disturbances at 1, 3 and 5 days after treatment. One week after treatment, the serum levels of CD4+ and CD8+ in the two groups were significantly lower than those before surgery (p<0.05), but no significant differences were found between the groups (p>0.05). The incidence rate of TNS within 1 week after surgery was significantly lower in the observation group than in the control group (p<0.05). The scores of lower limb motor function and sensory disturbances in the observation group evaluated at 1, 3 and 5 days after treatment were significantly higher than those in the control group (p<0.05). In conclusion, combined spinal-epidural anesthesia and epidural anesthesia caused no significant differences in immune function or inflammatory indexes in patients with LDH. However, the application of epidural anesthesia significantly reduced the incidence rate of postoperative TNS, which in turn reduced nerve damage.

Introduction

Lumbar disc herniation (LDH) is a common orthopedic disease characterized by low back pain and radiating pain, and has become a global health issue (1). Treatment of LDH may include conservative treatment, interventional therapy and surgical treatment, although percutaneous transforaminal endoscopic discectomy (PTED) is clinically preferred (2). Intravertebral anesthesia is generally used in PTED. Intravertebral anesthesia, which includes epidural anesthesia, spinal anesthesia, combined spinal-epidural anesthesia, and sacral anesthesia, is a method of local anesthesia (3). Compared with general anesthesia and the use of intravenous opioids, intravertebral anesthesia can more accurately control surgical or postoperative pain with the use of local anesthetics for nerve block, which is beneficial for the recovery of the patient's physical function. However, the potential neurological complications of intravertebral anesthesia have attracted increasing attention (4). Neurological complications of intravertebral anesthesia include transient neurologic syndrome (TNS), cauda equina syndrome and permanent lumbar radiculopathy. Unlike patients with cauda equina syndrome, most patients with TNS have certain motor dysfunctions and sensory disturbances, while nuclear magnetic resonance and electrophysiological examinations generally show no abnormalities, which poses a diagnostic challenge in clinical practice (5,6).
In this study, patients with LDH were treated with combined spinal-epidural anesthesia or epidural anesthesia, and the incidence of postoperative TNS was observed. There were no significant differences in the baseline parameters between the two groups (p>0.05) (Table I).

Anesthesia. All patients received an intramuscular injection of atropine (0.5 mg, SFDA approval no. H4102367; Anyang Jiuzhou Pharmaceutical Co., Ltd., Henan, China). Routine monitoring of SpO2, breathing, pulse and blood pressure was performed while patients were in the prone position. Patients in the observation group received epidural anesthesia. The intervertebral space at the top of the lesion area was used to place the epidural tube. Next, a 5 ml mixture of 1% lidocaine (SFDA approval no. H14023559; Jincheng Haisi Pharmaceutical Co., Ltd., Jincheng, China) and 0.375% ropivacaine (SFDA approval no. H20050325; China Resources Pharmaceutical Group Ltd., Beijing, China) was injected. If no adverse reactions occurred, an additional 10 ml of the mixture was injected, with the plane controlled below T6-T8. Patients in the control group were treated with combined spinal-epidural anesthesia. The space between L2 and L3 was punctured to place the spinal needle, and clear cerebrospinal fluid outflow indicated smooth backflow. At this point, 2 ml of 1% ropivacaine was injected within 15 sec. The spinal needle was then withdrawn, and an epidural catheter was inserted 4 cm towards the head end. With patients in the supine position, the plane was adjusted to T6. Epidural analgesia was administered to both groups of patients using 0.2 µg/ml sufentanil (SFDA approval no. H20054172; Yichang Humanwell Pharmaceutical Co., Ltd., Yichang, China) and 0.12% ropivacaine at an infusion rate of 5 ml/h. The epidural catheter was removed when the platelet count was >100,000/mm3, the international normalized ratio was <1.5 and the clotting time had returned to normal.

Index detection. Venous blood samples (5 ml) were collected from the two groups of patients before and 7 days after surgery (after fasting for at least 8 h), and serum was separated by centrifugation (centrifuge from Shenzhen Chaojie Experimental Instrument Co., Ltd., Shenzhen, China). Then, rabbit monoclonal CD4 antibody (dilution, 1:50; cat. no. 100405) and rabbit monoclonal CD8 antibody (dilution, 1:50; cat. no. 100706), purchased from BioLegend (San Diego, CA, USA), were added and incubated at 4˚C for 30 min in the dark. Flow cytometry (BD Biosciences, Franklin Lakes, NJ, USA) was used to detect the levels of CD4+ and CD8+ cells. The serum levels of interleukin-2 (IL-2) and tumor necrosis factor-α (TNF-α) were determined using enzyme-linked immunosorbent assay (ELISA) kits according to the manufacturer's instructions (Thermo Fisher Scientific, Inc., Waltham, MA, USA). A total of 50 µl of standards or serum was added to each well of the reaction plate, followed by 100 µl of the enzyme-labeled solution per well. After incubation at 37˚C for 1 h, plates were washed 5 times. Then, chromogenic agent solutions A and B (50 µl each) were added and incubated at room temperature in the dark for 15 min, after which 50 µl of the stop solution was added. Optical density (OD) values were measured at 450 nm using a microplate reader (Potebio, Jiangsu, China) within 15 min to calculate the concentrations of IL-2 and TNF-α.

Evaluation criteria. Fasting venous blood samples (5 ml) were collected from patients before and 7 days after surgery to prepare serum samples.
The levels of CD4+ and CD8+ T lymphocyte subsets were measured by flow cytometry, and serum IL-2 and TNF-α concentrations were measured by ELISA. Lower limb motor function was scored according to the Fugl-Meyer Assessment (FMA) (7) at 1, 3 and 5 days after surgery. Cooperative motion of the lower limb flexors and extensors, reflex activity, coordination ability and speed, and separating motion were included in the scoring system. For each item, a score of 0 points indicated that the movement could not be carried out, and 2 points indicated that the movement could be fully completed. The total score was 34 points; a score of <32 indicated a movement disorder, and lower scores corresponded to more severe movement disorder. Lower limb sensory scoring was also performed according to the FMA, with pain, temperature and touch sensation included to evaluate sensory disturbance. A score of 0 points indicated no sensation, and 2 points represented normal sensation. The total score was 20 points, and lower scores corresponded to more severe sensory disorder.

Determination of TNS. After surgery, patients showed unilateral or bilateral lower limb movement disorders, sensory disturbances, burning pain and squeezing or radiating pain. If electrophysiological examination showed no abnormalities, these patients were diagnosed with TNS. According to severity, TNS was divided into different degrees: i) Level I: obvious unilateral or bilateral movement disorder and sensory disturbance, with a movement disorder score <26 points and a sensory disturbance score <16 points; ii) Level II: no obvious unilateral or bilateral movement disorder (score ≥26 points), but obvious sensory disturbance (sensory disturbance score <16 points); and iii) Level III: no obvious unilateral or bilateral movement disorder or sensory disturbance, with only numbness and pain present.

Statistical analysis. Data were analyzed using SPSS 19.0 statistical software (SPSS, Inc., Chicago, IL, USA). Numerical data are presented as mean ± standard deviation (mean ± SD), and the paired t-test was used for intragroup and intergroup comparisons. Categorical data are presented as rates and were compared between the two groups.

Results

The levels of T lymphocyte subsets and inflammatory factors in the two groups. The levels of T lymphocyte subsets (CD4+ and CD8+) and inflammatory factors (IL-2 and TNF-α) were compared between the two groups before and after surgery. The serum levels of CD4+ and CD8+ cells in the two groups were decreased, and the levels of IL-2 and TNF-α were increased, at 7 days after surgery compared with those before surgery (p<0.05). However, no significant differences were found between the two groups (p>0.05) (Figs. 1-4).

The incidence rates of TNS within 1 week after surgery. The incidence rates of TNS within 1 week after surgery were compared between the two groups. The incidence rate of TNS in the observation group was significantly lower than that in the control group (p<0.05) (Table II).

Lower limb movement disorder scores between the two groups. Lower limb movement disorder scores were compared between the two groups. The scores in the observation group were significantly higher than those in the control group at 1, 3 and 5 days after surgery (p<0.05) (Table III).

Sensory disorder scores between the two groups. Sensory disorder scores were compared between the two groups. The scores in the observation group were significantly higher than those in the control group at 1, 3 and 5 days after surgery (p<0.05) (Table IV).
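The TNS grading criteria described in the Methods reduce to a simple decision rule. The following is a minimal sketch of those thresholds; the function name is illustrative, and score combinations not covered by the stated criteria default to the mildest level.

```python
# Minimal sketch of the TNS severity grading described in the Methods.
# Level I:   motor FMA score < 26 (of 34) and sensory score < 16 (of 20)
# Level II:  motor score >= 26 but sensory score < 16
# Level III: neither threshold crossed; numbness and pain only

def tns_level(motor_score: int, sensory_score: int) -> str:
    """Classify transient neurologic syndrome severity from FMA scores."""
    if motor_score < 26 and sensory_score < 16:
        return "Level I"
    if motor_score >= 26 and sensory_score < 16:
        return "Level II"
    return "Level III"

print(tns_level(20, 12))  # Level I
print(tns_level(30, 12))  # Level II
print(tns_level(33, 18))  # Level III
```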
Discussion

LDH is the herniation of the nucleus pulposus caused by changes in the water content of the nucleus pulposus of the intervertebral disc; the prominent nucleus pulposus can oppress the cauda equina or nerve roots to produce pain. LDH mostly occurs in the L4/L5 segment (8). Currently, PTED is widely used in clinical practice. By precise puncture and positioning, the lesion can be reached through intervertebral foramen puncture to effectively reduce the pain of patients. In addition, intravertebral anesthesia is performed before surgery. Conscious patients maintain the function of motor and tactile nerves; through communication with patients, nerve root damage can be avoided, thereby reducing pain (9,10).

Figure 1. Comparison of CD4+ levels in the two groups of patients before and 1 week after surgery. Flow cytometry showed that the postoperative CD4+ levels in the two groups were significantly lower than the preoperative levels (p<0.05). No significant differences were found in the preoperative or postoperative levels between the two groups (p>0.05). Compared with preoperative levels, *p<0.05.

Figure 2. Comparison of CD8+ levels in the two groups of patients before and 1 week after surgery. Flow cytometry showed that the postoperative CD8+ levels in the two groups were significantly lower than the preoperative levels (p<0.05). No significant differences were found in the preoperative or postoperative levels between the two groups (p>0.05). Compared with preoperative levels, *p<0.05.

Figure 3. Comparison of IL-2 levels in the two groups of patients before and 1 week after surgery. Postoperative IL-2 levels in the two groups were significantly higher than the preoperative levels (p<0.05). No significant differences were found in the preoperative or postoperative levels between the two groups (p>0.05). Compared with preoperative levels, *p<0.05. IL, interleukin.

Figure 4. Comparison of TNF-α levels in the two groups of patients before and 1 week after surgery. Postoperative TNF-α levels in the two groups were significantly higher than the preoperative levels (p<0.05). No significant differences were found in the preoperative or postoperative levels between the two groups (p>0.05). Compared with preoperative levels, *p<0.05. TNF-α, tumor necrosis factor-α.

Surgical trauma, the nervousness of patients and narcotic drugs can cause a stress response, which in turn inhibits immune function; therefore, immune dysfunction and associated inflammatory responses will occur (11). Stress responses can lead to increased secretion of adrenaline and catecholamines, resulting in abnormal CD4+ and CD8+ levels. Therefore, immune function disorders occur, and serum levels of IL-2 and TNF-α increase to induce inflammation (12,13). In this study, serum CD4+ and CD8+ levels decreased, and IL-2 and TNF-α levels increased, at 1 week after treatment compared with preoperative levels; moreover, there were no significant differences between the two groups (p>0.05). These observations can be explained by surgical trauma and anesthesia stimulation. Thus, both combined spinal-epidural anesthesia and epidural anesthesia can cause immune dysfunction and inflammation within a short time period after treatment, with no essential differences between them. Related studies reported that the incidence rate of TNS was 3-33%, which may represent mild toxicity from local anesthetics (14).
Results from this study showed that TNS was observed in both groups of patients within 1 week after surgery, and the incidence rate of TNS in the observation group was significantly lower than that in the control group (p<0.05). Puncture at the dura of the spinal nerve root, drug injection and catheterization are performed in both combined spinal-epidural anesthesia and epidural anesthesia. These operations can cause nerve root injury, and anesthetic can easily accumulate and precipitate at the dura of the spinal nerve root to achieve the nerve block function. Additionally, the vulnerable intrathecal area is where the spinal nerve roots reach the spinal cord, and the nerve fibers there are unmyelinated. After local anesthetic injection, the anesthetic will spread to the spinal nerve root. In addition, injection of excessive air or saline will increase the pressure within the spinal canal, which in turn compresses nerves and induces nerve damage (15,16). Furthermore, puncture is generally performed before the disinfection liquid has completely dried; therefore, disinfectant can easily spread into the spinal canal and damage nerve roots. Compared with epidural anesthesia, operations using combined spinal-epidural anesthesia can cause more severe soft tissue and nerve root injuries (17).

In this study, patients in both groups showed lower limb movement disorders and sensory disturbances, and the scores in the observation group were significantly higher than those in the control group at 1, 3 and 5 days after surgery (p<0.05). This is because all local anesthetics are neurotoxic, and repeated injection can cause high local concentrations. The surface-active molecules aggregate in biological membranes to affect the structures of proteins and phospholipids on nerve fiber membranes, resulting in irreversible damage (18). Similarly, local anesthetics cause increased intracellular calcium concentrations, and higher calcium concentrations cause greater damage to nerve cells (19). Long-term analgesia catheterization can lead to long-term exposure of local nerves to drugs, as well as spinal cord tissue edema. Therefore, patients will present with lower limb numbness, weakness, pain and other symptoms. In combined spinal-epidural anesthesia, local anesthetics are injected directly into the cerebrospinal fluid to act on the anterior and posterior roots of the spinal nerves and the spinal cord, which in turn leads to even higher local drug concentrations and spinal nerve toxicity. Therefore, nerve damage caused by this method is more severe than that caused by epidural anesthesia alone (20).

The safety of intravertebral anesthesia used in the surgical treatment of patients with LDH should be recognized, and this method should not be dismissed because of the occurrence of TNS. Proper clinical prevention should be applied to reduce the incidence rate of TNS. A comprehensive assessment should be performed before surgery to determine the application of intravertebral anesthesia; where this assessment permits, the use of epidural anesthesia is encouraged. In the case of puncture difficulty, repeated puncture should be avoided to reduce damage to the dural barrier through which anesthetics can penetrate. Soft and effective non-invasive catheters should be used as epidural catheters. The dose of anesthetics should also be controlled, and the plane of anesthesia should be controlled within a relatively wide range to avoid excessive local anesthetic concentrations.
In conclusion, to reduce the incidence rate of TNS and increase the safety of surgery, epidural anesthesia, a form of intravertebral anesthesia, should be applied in the PTED treatment of patients with LDH.
The analgesic efficacy and safety of peri-articular injection versus intra-articular injection in one-stage bilateral total knee arthroplasty: a randomized controlled trial

Background: As an essential component of multimodal analgesia approaches after total knee arthroplasty (TKA), local infiltration analgesia (LIA) can be classified into peri-articular injection (PAI) and intra-articular injection (IAI) according to the administration technique. Currently, there is no definite answer as to the optimal choice between the two techniques. Our study aims to investigate the analgesic efficacy and safety of PAI versus IAI in patients receiving simultaneous bilateral TKA.

Methods: This randomized controlled trial was conducted from February 2017 to July 2018. Sixty patients eligible for simultaneous bilateral total knee arthroplasty were randomly assigned to receive PAI on one side and IAI on the other. Primary outcomes included the numerical rating scale (NRS) pain score at rest or during activity at 3 h, 6 h, 12 h, 24 h, 48 h, and 72 h following surgery. Secondary outcomes comprised active or passive range of motion (ROM) at 1, 2, and 3 days after surgery, time to perform a straight leg raise, wound drainage, operation time, and wound complications.

Results: Patients experienced lower NRS pain scores in the knee receiving PAI compared with the knee receiving IAI during the first 48 h after surgery. The largest difference in NRS pain score at rest occurred at 48 h (PAI: 0.68, 95%CI [0.37, 0.98]; IAI: 2.63, 95%CI [2.16, 3.09]; P < 0.001), and the largest difference in NRS pain score during activity also took place at 48 h (PAI: 2.46, 95%CI [2.07, 2.85]; IAI: 3.90, 95%CI [3.27, 4.52]; P = 0.001). The PAI group had better results for range of motion and time to perform a straight leg raise compared with the IAI group. There were no differences in operation time, wound drainage, or wound complications.

Conclusion: PAI showed superior pain relief and improvement of range of motion compared with IAI. Therefore, the administration technique of peri-articular injection is recommended when performing local infiltration analgesia after total knee arthroplasty.

Trial registration: The trial was retrospectively registered in the Chinese Clinical Trial Registry as ChiCTR1800020420 on 29th December, 2018.

Level of evidence: Therapeutic Level I.

Background

Although total knee arthroplasty (TKA) has been recognized as the optimal treatment for end-stage knee osteoarthritis, over 50% of patients experience moderate to severe postoperative pain after the surgery [1]. Perioperative pain management in TKA may be insufficient and hinders fast recovery [2]. Multimodal analgesia regimens have gained popularity in recent years, encompassing patient-controlled analgesia [3], epidural analgesia [4], femoral nerve block [5], and local infiltration analgesia [6]. However, every single method has its pros and cons: patient-controlled analgesia (PCA) is quite useful for severe pain, but it can also result in subsequent side effects such as nausea, vomiting, constipation, and respiratory depression [7]; epidural analgesia involving intrathecal injection raises the risk of nausea, hypotension, and respiratory depression [8]; and despite the adequate analgesia of femoral nerve block, it has been associated with quadriceps weakness and an increased risk of in-hospital falls [9].
In recent years, local infiltration analgesia (LIA) has become more commonly applied in TKA for its convenience, excellent analgesic efficacy, and fewer side effects [10][11][12]. LIA is commonly performed as a direct injection of a cocktail solution containing local anaesthetic, opioids, adrenaline, glucocorticoids, and nonsteroidal anti-inflammatory drugs (NSAIDs) into the surgical area to relieve inflammation and pain [13,14]. Administration techniques of LIA can be classified into peri-articular injection (PAI) and intra-articular injection (IAI). It is well known that exogenous IAI of hyaluronate is valid as a treatment for the symptoms of knee osteoarthritis [15], and IAI of a novel, microsphere-based, extended-release formulation of triamcinolone acetonide leads to a prolonged reduction in symptoms of osteoarthritis [16]. Deduced from the studies above, IAI of an analgesic cocktail may also play a role in pain relief after TKA. In addition, PAI could increase the risk of paralysis of the common peroneal nerve, whereas IAI may consume less operation time without adding such risks. Therefore, although most surgeons perform LIA in TKA as PAI rather than IAI alone, a direct comparison between the two administration techniques is of interest. In 2015, Perret published an article comparing PAI and IAI in TKA in Australia [17]; the study failed to show a statistically significant benefit for either technique, and it was not a prospective randomized controlled trial (RCT). At present, no RCT exists comparing PAI and IAI of an analgesic cocktail in TKA. This randomized study aimed to determine the effect of the LIA administration technique on pain relief and postoperative rehabilitation. We compared the analgesic efficacy and safety of PAI versus IAI in patients receiving simultaneous bilateral TKA during the in-hospital period.

Trial design and ethics approval

This single-centre, prospective randomized controlled trial (RCT) was performed at the Department of Orthopedic Surgery, Peking Union Medical College Hospital, following the Consolidated Standards of Reporting Trials (CONSORT) statement guidelines for reporting parallel-group randomized controlled trials [18]. The eligible patients were scheduled to receive simultaneous bilateral total knee arthroplasty, in which one knee underwent PAI and the other underwent IAI. The details of randomized allocation are described in the 'Randomization and blinding' section below. The study was approved by the institutional review board of Peking Union Medical College Hospital (25th Oct, 2016) and performed in accordance with the standards of the 1964 Declaration of Helsinki. All patients participating in this trial signed informed consent. The trial was registered in the Chinese Clinical Trial Registry as ChiCTR1800020420 (retrospectively registered on 29th December, 2018).

Eligibility

Patients were identified on the day before scheduled surgery and evaluated for eligibility. Patients were enrolled in the study if they met the following criteria: 1) older than 18 years; 2) receiving simultaneous bilateral total knee arthroplasty during the same anaesthesia session; 3) diagnosed with osteoarthritis or rheumatoid arthritis.
Exclusion criteria were: 1) a history of allergy to any of the injectable drug ingredients or excipients; 3) severe deformity of genu varum or valgum (change of femoral-tibial angle > 20°); 4) comorbid bronchospasm, acute rhinitis, nasal polyps, angioneurotic edema, urticaria, or other allergic reactions after taking aspirin or NSAIDs (including COX-2 inhibitors); 5) severe liver injury (serum albumin < 25 g/L or Child-Pugh score ≥ 10), inflammatory bowel disease, opioid abuse, or a body mass index (BMI) of > 35 kg/m²; 6) American Society of Anesthesiologists (ASA) category of > 3, or physical, emotional, or neurological conditions that would compromise compliance with postoperative rehabilitation and assessment.

Randomization and blinding

The LIA administration technique and the order of the operations for the two knees of each participant were randomly allocated using a computer-generated table, produced by investigators not involved in the rest of the trial protocol beyond this randomization and blinding procedure (a schematic sketch of such a paired allocation is given after the intervention description below). For each participant, a sealed envelope was opened in the operating room to identify the treatment assignment. The patient received PAI on one side and IAI on the other. The orthopaedic surgeon was informed of the allocation before skin incision. The patients, data collectors, and analysts were blinded during the entire trial.

Interventions procedure

All the surgeries were performed through a medial parapatellar approach by the corresponding author (Xisheng Weng) with a 250 mmHg tourniquet under general anaesthesia. The administered cocktail solution in our study combined the components used in previous studies [19][20][21][22], consisting of 200 mg ropivacaine, 100 μg fentanyl, 0.25 mg adrenaline, 50 mg flurbiprofen axetil, and 1 mg diprospan, with normal saline added to a 60 mL solution. A drainage tube was placed laterally to the prosthesis components in every joint, clamped for 3 h [23], then unlocked, and removed on the second morning after surgery. The drainage tube had 6 orifices, all located inside the articular cavity. The intervention procedure was conducted according to the randomized allocation. In the PAI group, before prosthesis installation, 20 mL of cocktail solution was injected into the posterior capsule, including the femoral attachments of the anterior and posterior cruciate ligaments and the posteromedial and posterolateral capsules. After prosthesis installation, the remaining 40 mL was injected into the medial and lateral collateral ligaments, quadriceps tendon, patellar tendon, pes anserinus, fat pad and subcutaneous tissues. In the IAI group, after closure of the deep fascia, the cocktail solution was injected into the articular cavity through the drainage tube. After suturing the deep fascia of every joint, a watertight test was performed; if fluid was leaking anywhere, additional sutures were placed to ensure the articular cavity was watertight. Both PAI and IAI were single-shot administrations. No participants received any regional nerve block or epidural block during the whole perioperative period. Participants were free to use PCA at their own discretion. After surgery, participants routinely received 40 mg of parecoxib every 12 h and 650 mg of acetaminophen every 8 h. Rescue analgesia included morphine, oxycodone or pethidine. The overall opioid consumption of every participant was documented.
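As an illustration of the allocation scheme described above, the following is a minimal sketch of how such a computer-generated table could be produced. The function name, seed, and output format are assumptions for illustration, not the trial's actual randomization code.

```python
# Hypothetical sketch of the paired allocation: each participant receives PAI
# on a randomly chosen knee and IAI on the other, and the order in which the
# two knees are operated on is randomized independently.
import random

def make_allocation_table(n_participants, seed=2017):
    rng = random.Random(seed)  # fixed seed so the table can be pre-generated
    table = []
    for pid in range(1, n_participants + 1):
        pai_side = rng.choice(["left", "right"])
        iai_side = "right" if pai_side == "left" else "left"
        first_operated = rng.choice(["left", "right"])
        table.append({"participant": pid, "PAI": pai_side,
                      "IAI": iai_side, "first_operated": first_operated})
    return table

for row in make_allocation_table(60)[:3]:  # e.g. inspect the first three rows
    print(row)
```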
Outcome measurements

The primary outcome was pain intensity at rest or during activity, assessed by the NRS pain score at 3, 6, 12, 24, 36, 48, and 72 h after surgery. Secondary outcomes included active and passive range of motion at 1, 2 and 3 days after surgery, volume of wound drainage, postoperative days required to perform a straight leg raise, length of hospital stay, and opioid use in morphine equivalents. Range of motion (ROM) was calculated as the sum of the angles of knee flexion and extension, measured with a long-arm goniometer without removing the outside dressing. In our study, active ROM means patients bent their knee joints freely without assistance, and passive ROM means investigators bent their knee joints as far as the patients could tolerate. The operation time was counted from skin incision to wound dressing. Morphine consumption was calculated as the sum of morphine equivalents divided by the weight of the patient.

Sample size

Our hypothesis was to substantiate the non-inferiority of IAI compared with PAI. The sample size was calculated according to the following standard formula for comparing two means [24]:

\[ n = \frac{2\sigma^{2}\,(z_{1-\alpha/2}+z_{1-\beta})^{2}}{\delta^{2}} \]

To show a clinically important difference of δ = 1.3 [25] in NRS pain score between the PAI group and the IAI group, with a standard deviation of σ = 2.0 according to a published article [17], a power of 0.90 and a two-tailed significance level of 0.05, each group required 49 subjects.

Statistical analysis

Measurement data are expressed as mean and 95% confidence interval (95% CI). The Shapiro-Wilk test and Levene test were performed to evaluate normality and homogeneity of variance of the data, respectively. If the data did not comply with a normal distribution or equal variance, a non-parametric test (Mann-Whitney) was applied; otherwise, Student's t-test was used to analyse the difference between the two groups (this decision logic is sketched below). Dichotomous data were analysed by Fisher's exact test, because 50% of cells had an expected count of less than 5. SPSS version 25.0 software was used for the analysis.
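The normality-gated choice between parametric and non-parametric tests can be sketched as follows. The trial used SPSS 25.0, so this Python/scipy version is purely illustrative, and the variable names are assumptions.

```python
# Minimal sketch of the test-selection rule described above: check normality
# (Shapiro-Wilk) and variance homogeneity (Levene), then fall back to the
# Mann-Whitney U test if either assumption fails.
from scipy import stats

def compare_groups(pai_scores, iai_scores, alpha=0.05):
    normal = (stats.shapiro(pai_scores).pvalue > alpha and
              stats.shapiro(iai_scores).pvalue > alpha)
    equal_var = stats.levene(pai_scores, iai_scores).pvalue > alpha
    if normal and equal_var:
        test, result = "t-test", stats.ttest_ind(pai_scores, iai_scores)
    else:
        test, result = "Mann-Whitney", stats.mannwhitneyu(
            pai_scores, iai_scores, alternative="two-sided")
    return test, result.pvalue
```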
Baseline characteristics

Between February 2017 and July 2018, 65 patients were enrolled in the study, of whom 5 were excluded for violating the criteria (severe deformity with a bone defect of the tibial plateau of more than 5 mm found during surgery, refusal to participate, or inability to cooperate with assessment) (Fig. 1). A total of 60 patients participated in the study. All of them completed randomization, allocation, trial administration and postoperative assessment. Baseline characteristics of the participants are shown in Table 1, including gender, age, body mass index, ethnicity, diagnosis, and ASA grade. There were no differences in NRS pain score or ROM between the two groups before surgery and intervention.

Primary outcome

During the first 48 h after surgery, the NRS pain score in the PAI group was significantly lower than that in the IAI group (Fig. 2, Fig. 3 and Additional file 1: Table S1). The difference in NRS pain score between the two groups was larger at rest than during activity. There were no differences between the two groups in NRS pain score at 72 h after surgery at rest (P = 0.426) or during activity (P = 0.287).

Secondary outcome

The PAI group had better results for active ROM and passive ROM in the first 3 days after surgery compared with the IAI group (Fig. 4, Fig. 5, and Additional file 2: Table S2).

Discussion

Our results demonstrate that PAI provides superior analgesic benefit to IAI in patients receiving TKA. The advantage of PAI over IAI on NRS pain score faded after 48 h, while ROM was continuously better in the PAI group than in the IAI group during the first 3 days after surgery. In addition, the PAI side took less time to perform a straight leg raise postoperatively. There were no differences in operation time, volume of wound drainage or wound complications between the two groups. Our study substantiated the superiority of PAI over IAI for analgesia after total knee arthroplasty; the PAI technique is therefore recommended for performing LIA in TKA.

In a previous study [17], the PAI group showed a statistically significant reduction in postoperative VAS pain scores, which correlate positively with the NRS pain scores used in our study [26]. In a retrospective study [27], Tietje demonstrated that patients receiving PAI of local anaesthetics in TKA had a noticeable decrease in length of hospital stay and in the incidence of postoperative nausea and vomiting compared with patients receiving IAI. In the early period after surgery, it is pain that mainly accounts for continued hospitalization [2], and the occurrence of nausea and vomiting after surgery may vary with opioid usage [7]. It can therefore be deduced from Tietje's results that the analgesic benefit of PAI may underlie the decreased length of hospital stay and incidence of postoperative nausea and vomiting. In the current study, PAI had an analgesic advantage over IAI, consistent with this deduction.

There are several mechanisms that may underlie the analgesic benefit of PAI over IAI. According to a previous cadaveric study [28], the outer capsule is more richly innervated, for example by the saphenous and genicular nerves, whereas the inner synovium and articular cavity have a sparser nerve distribution. Another histologic survey of human cadaveric knees, performed by Jiranek et al. [29], elucidated the distribution of free nerve endings after hematoxylin and eosin staining: high concentrations of nociceptors were found in the medial and lateral retinacula, patellar tendon, pes anserinus, and meniscofemoral ligaments, while the lowest concentration was seen in the central portion of the anterior cruciate ligament. Thus, PAI could be more effective than IAI because of the denser innervation of the outer capsule and soft tissues of the knee joint. Besides, since we placed a drainage tube in every joint, solution in the articular cavity was more likely to be drained out, while solution infiltrated into the soft tissues around the knee joint could continue to act. It would be more difficult for the cocktail solution on the PAI side to escape from the joint than on the IAI side, and this more persistent effect may also contribute to the analgesic benefit. The volume of cocktail solution was the same in both groups, and, under this assumption, the volume of wound drainage in the IAI group was expected to exceed that of the PAI group. However, there was no difference in the volume of wound drainage in our study; this apparent paradox requires further evidence to explain. For further investigation of the underlying mechanism, a biocompatible, non-degradable detector could be included in the cocktail solution to track the real-time concentration and volume of the solution constituents in the articular cavity and the soft tissues around the knee joint.
To our knowledge, this is the first RCT comparing the analgesic efficacy and safety of PAI with those of IAI in patients receiving simultaneous bilateral TKA. The highlight of our study is the self-controlled design, in which participants received PAI on one side and IAI on the other. Owing to the homogeneity within each participant, the most plausible explanation for the marked differences in outcomes lies in the distinct interventions, which strengthens our conclusion. However, our study is not without limitations. Firstly, the ceiling effect makes it impossible to distinguish differences in systemic adverse effects, ambulation mobility and morphine consumption between the two groups. In addition, pain in one knee could increase or reduce the perceived pain in the other, so the difference found in our study could be overestimated or underestimated. Despite the qualitative conclusion of the study, further research is required to determine the exact difference between the two techniques. Moreover, the outcomes were limited to in-hospital data without long-term follow-up, and the long-term effect needs to be further evaluated.

Conclusion

We conducted a randomized controlled trial to compare the analgesic efficacy and safety of PAI versus IAI in patients receiving simultaneous bilateral total knee arthroplasty. PAI had greater analgesic benefits than IAI after surgery, and there were no differences between PAI and IAI in wound drainage, operation time, or wound complications. The administration technique of PAI is recommended when performing LIA in TKA.

Additional file 1: Table S1. Numerical Rating Scale (NRS) at rest or during activity.
Additional file 2: Table S2. Range of Motion.
A simple holographic scenario for gapped quenches

We construct gravitational backgrounds dual to a family of field theories parameterized by a relevant coupling. They combine a non-trivial scalar field profile with a naked singularity. The naked singularity is necessary to preserve Lorentz invariance along the boundary directions. The singularity is however excised by introducing an infrared cutoff in the geometry. The holographic dictionary associated to the infrared boundary is developed. We implement quenches between two different values of the coupling. This requires considering time dependent boundary conditions for the scalar field both at the AdS boundary and the infrared wall.

Introduction. Modeling quantum quenches in a holographic setup has attracted considerable attention in recent years. Remarkable success has been achieved in reproducing important aspects of the universal dynamics of quenches [1]-[5]. However, most models lack some of the defining characteristics of quenches. Notably, they simulate an injection of energy into the system without a real change in the Hamiltonian. In this note we present a simple holographic model of a quench that modifies the infrared physics. With this aim, we search for the gravitational dual to a family of d-dimensional QFT's parameterized by a relevant coupling. As a main input, the ground state for any value of the coupling is required to be Lorentz invariant. We pursue the minimal scenario, involving Einstein gravity coupled to a real scalar field; the possibility of extra compactified dimensions is excluded. We use a Lorentz-invariant ansatz for the ground state metrics. Setting 8πG = d−1, the resulting equations of motion (2) have two integration constants, which can be related to the coefficients of the two independent scalar modes. Asking for regularity of the geometry links their values, allowing one to interpret one of them as a QFT coupling and the other as the expectation value of the sourced operator. Regular solutions of (2), whose existence depends on the scalar potential, describe RG flows into an infrared fixed point independent of the integration constants. All other solutions run into naked singularities.

Considering naked singularities raises a number of serious problems. There is no condition that relates the two integration constants, challenging the usual holographic dictionary. Related to this, Lorentz invariant metrics are not minimal energy solutions when only one integration constant is fixed at the AdS boundary; there are in fact solutions of arbitrarily negative energy. These issues admit a simple, albeit crude, solution: introducing an infrared cutoff in the geometry. This creates a new boundary and makes it natural to interpret both integration constants from (2) as couplings. Fixing the two couplings solves the vacuum stability problem [6]. Moreover, regions of high curvature, which lie outside the regime of validity of classical gravity, are excised. AdS with an infrared hard wall is a well known rough holographic model for confining theories [7]. The new ingredient in this paper is to consider the hard wall as a regularizing element, while the infrared physics will be linked to the strength of the naked singularity.

Static backgrounds. We will explore the proposed scenario with a vanishing scalar potential. Equations (2) can then be solved analytically, with the result (3)-(4), where α and β are arbitrary constants.
β represents a global shift in the value of the scalar, which is of no physical consequence when V(φ) = 0. α induces a non-trivial scalar profile, with z_0 denoting the radial position of the wall and Δφ = φ_0 − φ_∞ the variation of the scalar field between the wall and the AdS boundary. If extended beyond the infrared cutoff, all backgrounds with Δφ ≠ 0 have a naked singularity.

Holography interprets the harmonic modes of bulk fields as excitations in the dual QFT. In Fig. 1 we have plotted the frequencies of the lower scalar modes along the family (3)-(4) for d = 2. When the radial variation of the scalar profile is small, the spectrum is determined by z_0; for larger values of this parameter, the spectrum becomes instead ruled by Δφ. Fig. 1a shows that the mass gap, holographically given by ω_0, grows with Δφ, implying that this is a relevant coupling. Interestingly, the ratio of the higher normal frequencies to the fundamental one shows an approximate linear growth, see Fig. 1b. Hence the infrared physics associated to the family (3)-(4) does not differ by a mere rescaling: the lowest excitation becomes increasingly separated from the rest the larger Δφ is.

Modelling a quantum quench. We want to model a global quench between QFT's whose ground states are in the family (3)-(4). A convenient ansatz for the associated metric is (6). For vanishing scalar potential, the equations of motion take a first-order form in Φ = φ′ and Π = A⁻¹ e^δ φ̇, which encode the radial and time derivatives of the scalar. Solving the equations of motion requires giving a set of initial data together with boundary data at asymptotic AdS and at the infrared wall. As boundary data, we will allow for time dependent profiles φ_∞(t) = φ(t, 0) and φ_0(t) = φ(t, z_0). Dynamical processes triggered by φ_∞(t) in the hard wall setup were studied in [8]. The possibility of imposing a time dependent scalar profile at the wall has been considered in [9].

The family (3)-(4) does not exhaust the set of static solutions of our gravity system. In general static solutions break Lorentz invariance and are described by the ansatz (6). Their energy density can be read from the asymptotic expansion A = 1 − 2Mz^d + …; it is then clear that all solutions (3)-(4) have zero mass. For static solutions the mass splits into two contributions, with A_0 = A(z_0). The first term represents the contribution to the total energy from the geometry hidden by the infrared cutoff; when this part encloses a naked singularity it can be arbitrarily negative. On the contrary, the second term is always positive for solutions without horizons. Therefore naked singularities are a crucial ingredient for obtaining Lorentz invariant backgrounds with non-trivial scalar profiles.

Up to a trivial global shift in the scalar, static solutions are parameterized by Δφ and A_0. The backgrounds (3)-(4) define the codimension one subset (10), see the red line in Fig. 2. Since A_0 is boundary data, it is natural to also interpret it as a QFT coupling. The unique static solution without horizons for Δφ and A_0 in the shaded region of Fig. 2a represents the ground state of the associated QFT [9]. We consider that the QFT before the quench is in the ground state for chosen couplings in the subset (10). Acting on the boundary values such that φ_∞ changes while φ_0 remains constant clearly brings the system outside (10); the same actually happens in the opposite case. When φ_∞ is kept constant, the equations of motion ensure the conservation of the total mass.
A time dependent φ_0 generates a scalar pulse that enters the geometry at the infrared boundary. Unless the time variation is adiabatic, this pulse induces an excited state in the final QFT. The value of A_0 will then adjust such that the total energy is conserved. Namely, if the initial theory belongs to the Lorentz invariant subset, the final one will have negative ground state energy. In order to model a quench between theories in (10), the wall profile φ_0(t) needs to be combined with an energy injection into the system. This can only happen at the AdS boundary, induced by a non-trivial φ_∞(t). The initial state will be taken to have vanishing φ_∞, φ_0 = φ̄, and A_0 satisfying (10). We shall choose the wall profile (11), which models a quench with a finite time span controlled by the parameter a. After the quench φ_0 = φ̄ + η, while A_0 will be fixed by the conservation of energy at the wall. Any φ_∞(t) which fulfills (10) at late times ensures that the final QFT will have Lorentz invariant couplings. We do not however want the quench to follow an arbitrary path in the coupling space of Fig. 2a; we aim to act only on the combined coupling that moves along the M = 0 line. This involves a tuned variation of the scalar field at the wall and the AdS boundary, which, in the absence of a time-like Killing vector, is not straightforward to implement. In the following we will assume that the diagonal time coordinate in (6) provides a reasonable way to project the value of φ_0 onto the dual QFT; hence we require (10) to hold on each constant time slice.

Numerical results. The central characteristic of the dynamics generated by (10)-(11) is whether or not it will generate a horizon. In the affirmative case, the end point of the evolution is a Schwarzschild black hole trapping the total mass; this represents a unitary process in the dual QFT leading to thermalization [1,10]. Evolutions that do not form a horizon result in a scalar pulse that bounces forever between the AdS boundary and the wall [8]. Bouncing geometries provide the holographic counterpart to periodic reconstructions of quantum correlations in the dual field theory [11], known as quantum revivals [12]. The only topological obstruction to the formation of a horizon in our setup is the presence of the wall, enforcing Mz_0² > 1/2. A first question then is whether the typical scale triggering fast thermalization is set by z_0 or depends on Δφ.

Before studying quenches, we analyze the infall of a scalar shell modelling an energy injection without variation of the Hamiltonian. In the following we restrict to d = 2 for the numerics. Since the shape of the pulse influences the evolution, we consider a typical shell, radially localized and of the gaussian form (12), with σ = 0.1 and Φ(t = 0) in the family (3)-(4). The threshold mass for gravitational collapse without bounces is plotted in Fig. 2b. It grows strongly with Δφ, confirming the secondary role of the infrared wall. Using the hard wall as an auxiliary element, we are actually obtaining a basic model of a soft wall. We now explore the evolutions after a quench modelled by (10)-(11) in d = 2. The quench will be applied to the Lorentz invariant background Δφ = φ̄ = 0.7. At this value the infrared physics starts to be dominated by the hidden singularity instead of the wall position, see Fig. 1. We focus on η > 0, so the quench will increase the mass gap. Fig. 3a shows the final energy density as a function of η for several values of the time span a. Its growth with η is more pronounced the smaller a is.
We have shaded in blue the parameters that lead to black hole formation. Processes where a horizon is generated after some bouncing cycles occupy just a small window on the boundary of the blue region; otherwise we obtain geometries that keep bouncing as far as our simulations could go. Only sufficiently fast quenches, those with a < 0.25 in the example of Fig. 3a, can generate enough energy density to trigger thermalization. Bouncing geometries can be roughly divided into two types: standing and traveling waves. Standing waves project mainly on the fundamental harmonic of the static background associated to the final couplings. It is convenient to restore the natural mass units, M → (d−1)/(8πG) M, with G extremely small. According to the holographic dictionary, 1/G is proportional to the number of elementary degrees of freedom in the dual QFT; hence M translates into an energy density per species in field theory terms. Although the mass of standing waves is much smaller than that required for collapse, it can be parametrically larger than G. Indeed, quenches in Fig. 3a generate standing waves when a ≥ 0.6, having masses up to Mz_0² ≈ 0.1.

Standing waves oscillate with the frequency of the mass gap, ω_0. It is then natural to holographically identify them with coherent states of k = 0 modes of the lowest QFT excitation. Revivals with the same interpretation appear, for example, in the massive Schwinger model after a quench [13]. The important difference in our case is their energy density: it can be much larger than the mass gap proper of holographic models of a confining phase, ranging up to O(1/G), close to the typical values in the plasma phase. In spite of that, the physics driving thermalization does not refer to ω_0. This is illustrated in the inset of Fig. 2b: the temperature of the black hole at the collapse threshold for the gaussian pulses (12) is well below the mass gap.

Traveling pulses exhibit radial localization and displacement. They represent in general partial revivals. They have larger masses, and the associated QFT states are thus expected to contain higher energy excitations and non-zero momentum modes. The former should be connected with the projection of narrow pulses on higher harmonic modes. The radial infall of a narrow shell has been related to the evolution of the separation between entangled excitations after a quench [1,3,5], the so-called horizon effect [14]. In this sense, radial displacement indicates the presence of non-zero momentum modes in the dual field theory state. Since the quench we are modelling is global, finite momentum modes can only be created in pairs. Fig. 1b shows that 2ω_0 ≈ ω_1 for a large range of couplings, explaining why radial localization and displacement appear at similar energies. Contrary to standing waves, traveling configurations generated by (10)-(11) are composed of two distinct sub-pulses, one entering from the AdS boundary and the other from the wall. This is clearly appreciated in the one-point functions. Fig. 3b shows the vev of the operator sourced by φ_∞ for three examples from Fig. 3a, using a rescaled time such that the fundamental frequency for the final couplings is 2π. The oscillations of O_∞ are plotted in blue for a slow quench, with a = 0.6, resulting in a standing wave. A traveling configuration with two sub-pulses producing signals of similar magnitude is obtained for a = 0.15 and shown in green; the effects of both sub-pulses superpose, giving rise to oscillations with roughly twice the fundamental frequency.
In magenta we have an intermediate configuration, with a small boundary component. It is worth mentioning a slight increase in the period of oscillations between the a = 0.6 and a = 0.15 pulses. This is due to their different final energies: M = 0.002 and M = 0.02 respectively. The increase of the period with the energy is generic in holographic quenches, finding some analogues in the condensed matter literature [11]. The distinction between fast and slow quenches should refer to the characteristic scale of the infrared physics; slow quenches can be unambiguously defined as those producing standing or quasi-standing waves. We now consider quenches with fixed amplitude η and time span a, but different initial coupling φ̄. Fig. 1a shows that the mass gap grows with the coupling. Therefore the quench should result in a collapsing shell, a bouncing pulse or a standing wave as we choose larger values of φ̄. Alternatively, the energy density in units of the final mass gap must be a monotonically decreasing function of φ̄. This quantity is plotted in Fig. 4a for η = 0.2 and several small values of a, confirming the expected behavior.

One point functions at the wall. We have assumed that the boundary values φ_0 and A_0 relate to couplings with a well-defined, local projection on the field theory time coordinate. Like φ_∞, they should source local operators, and we aim to determine their expectation values. Symmetry under global shifts of the scalar field implies that only the difference Δφ = φ_0 − φ_∞ is physically relevant. Hence the ground state expectation values of the operators O_0 and O_∞ cannot be independent. While the latter is dictated by the asymptotic expansions at the AdS boundary, the former has to depend on quantities evaluated at the wall. The scalar equation for static solutions reduces to (13), where we have gauge fixed t to be the proper time at the AdS boundary, i.e. δ_∞ = 0. The lhs of (13) is precisely O_∞ [15]. Defining O_0 as minus the rhs, we obtain a relation of the desired form. The sign has been chosen such that the operator sourced by φ_∞ + φ_0 has a vanishing vev in the ground state. Notice that (13) would not hold with V(φ) ≠ 0, when a global shift of the scalar is no longer a symmetry of the system.

The metric function A satisfies the evolution equation (15). The z^d coefficient in the asymptotic expansion of A determines the dual QFT energy density [15], and (15) implies Ṁ + φ̇_∞ O_∞ = 0. However, the field theory Ward identities dictate a sum over all couplings, Ṁ + Σ_i λ̇_i O_i = 0 [15]. It is then necessary that the contributions from φ_0 and A_0 exactly cancel, which is the requirement of energy conservation at the wall. Using the above proposed value for O_0, equation (15) evaluated at the wall can be rewritten such that the expectation value of the operator O_A sourced by A_0 is given by the expression multiplying its time derivative.

A check on the consistency of these assignments is how they behave when a horizon forms. Thermalization after a global quench in an infinite system only happens at the local level: for any late but finite time there are sufficiently large regions where non-local observables have not yet achieved thermal values. Such observables, as for example the entanglement entropy, require information from behind the apparent horizon for their holographic determination [1,4]. One-point functions are local observables, which thus should only involve the geometry outside it.
We have used constant t slices to translate wall boundary values into QFT couplings. Constant t slices only approach the apparent horizon asymptotically at late times, in the region where it has practically achieved its final value z_BH. They depart again from it at z > z_BH, and finally reach the wall. This implies that, indeed, O_0 and O_A do not require information from behind the apparent horizon at any instant of their evolution. The only non-vanishing one-point function associated to a Schwarzschild geometry is that of the stress tensor; thus other expectation values should tend to zero in the process of gravitational collapse. When a horizon emerges, the part of the geometry with z > z_BH gets frozen for observers using the proper time at the AdS boundary. This is implemented by the exponential vanishing of e^{−δ} in that region. According to the previous assignments both O_0 and O_A are proportional to e^{−δ_0}, which ensures that they indeed tend to zero as a horizon forms; clearly so does O_∞. The evolution of the three observables after a quench generating a horizon, or equivalently leading to thermalization, is shown in Fig. 4b.

We have proposed a simple holographic scenario, easily accessible to numerics, modeling quenches in which a relevant coupling changes, and a number of consistency checks have been successfully performed. We hope that this can help place holography among the standard tools for studying out-of-equilibrium physics.
Effects of Simulated Marker Placement Deviations on Running Kinematics and Evaluation of a Morphometric-Based Placement Feedback Method

In order to provide effective test-retest and pooling of information from clinical gait analyses, it is critical to ensure that the data produced are as reliable as possible. Furthermore, it has been shown that anatomical marker placement is the largest source of inter-examiner variance in gait analyses. However, the effects of specific, known deviations in marker placement on calculated kinematic variables are unclear, and there is currently no mechanism to provide location-based feedback regarding placement consistency. The current study addresses these gaps by: applying a simulation of marker placement deviations to a large (n = 411) database of runners; evaluating a recently published method of morphometric-based deviation detection; and pilot-testing a system of location-based feedback for marker placements. Anatomical markers from a standing neutral trial were moved virtually by up to 30 mm to simulate deviations. Kinematic variables during running were then calculated using the original and altered static trials. Results indicate that transverse plane angles at the knee and ankle are most sensitive to deviations in marker placement (7.59 degrees of change for every 10 mm of marker error), followed by frontal plane knee angles (5.17 degrees for every 10 mm). Evaluation of the deviation detection method demonstrated accuracies of up to 82% in classifying placements as deviant. Finally, pilot testing of a new methodology for providing location-based feedback demonstrated reductions of up to 80% in the deviation of outcome kinematics.

Introduction

Anatomical marker placement has been identified as the single largest source of variance in repeated gait analyses using motion-capture techniques, resulting in inter-examiner differences of up to 34 degrees for certain joint angles [1]. This problem is particularly relevant for large, multi-site biomechanical gait studies, where inter-tester differences can preclude pooling of data [2,3]. To address this problem, a novel method was developed which 'scores' marker placement by comparing marker data transformed by generalized Procrustes analysis (GPA) to a large normative database [4]. The outcome of this analysis is known as an inter-quartile range ratio (IQRR), a standardized, non-parametric, statistical description of the location of a given marker placement. It has been shown that the IQRR is capable of indicating divergence in marker placements between a single Novice and an Expert, and that measurements over time can show how a single Novice alters their placements [4]. While these results provide valuable insights, there is a lack of contextual information about how downstream kinematic data may be affected by marker deviations described by the IQRR. Two questions therefore arise: 1) which anatomical markers have the greatest influence over outcome kinematics, and 2) what value of the IQRR at each location indicates that deviations in marker placement are within some acceptable limit. A simulation approach has produced valuable descriptions of deviation effects for knee markers [5] and of their propagation into coordinate systems [6]. However, these studies did not comprehensively address the impact of specific marker placement deviations on lower extremity kinematics of gait analysis, particularly in light of anatomical variance.
By simulating deviations in marker placement, at multiple locations, for a large subject cohort, it is possible to determine the worst-case effects of these deviations. Additionally, by considering the IQRR score as a binary classifier of deviation, conventions can be established for what constitutes an 'acceptable' or 'unacceptable' IQRR score. Therefore, the purposes of this study were to:

1. Use simulated deviations in marker placement to calculate resulting changes in downstream gait kinematics and quantify the relationships between marker placement deviations and changes in kinematic variables

2. Evaluate the classification performance of IQRR scores by considering kinematic changes in terms of the Matthews correlation coefficient (MCC), false positive rates (fall-out), and true positive rates (sensitivity) for a range of kinematic and IQRR thresholds

3. Conduct a real-world pilot study of IQRR-based deviation detection with novice biomechanists.

Data Collection

A total of n = 411 subjects were included in this study (gender: 215 females; age: 40.0 ± 11.6 yrs; height: 172.3 ± 8.9 cm; mass: 71.2 ± 13.1 kg). All were patients who participated in clinical activities at the Running Injury Clinic, and all gave written, informed consent for their data to be used for research purposes. Patients presented with a variety of common running injuries (e.g. patellofemoral pain syndrome, iliotibial band syndrome); however, none presented with conditions known to drastically influence lower body morphology (e.g. amputation, developmental disorders, lower extremity orthopaedic surgery). Inclusion criteria were kept intentionally broad in order to represent a large segment of the population in a morphological sense, thereby improving the generalizability of the method. Ethical approval was given by the Conjoint Health Research Ethics Board to review and analyze patient data for the current study.

Three-dimensional (3D) kinematics were collected on each patient using 9.5 mm spherical retro-reflective markers and eight high-speed digital video cameras recording at 200 Hz (MX-3, Vicon, Oxford, UK). Cameras were placed in a uniformly-spaced circular arrangement surrounding the treadmill. Markers were placed over anatomical landmarks to create an anatomical model of the lower extremities according to previously published methods [7,8]. Subjects wore tight-fitting shirts and shorts so as to limit the impact of clothing on marker placement. All marker placements were performed by one tester (the Expert), who has 15 years of experience in clinical anatomy and has performed more than 800 3D kinematic gait analyses.

In brief, the marker model used for the lower extremity followed these conventions. Ankle joint centres were located at the midpoint of the lateral and medial malleoli markers. Knee joint centres were located at the midpoint of markers placed over the lateral and medial joint lines. Hip joint centres were determined using trochanter markers, locating them at 25% of the inter-trochanteric distance [9]. A joint coordinate system [10] was used to construct segment-aligned coordinate systems relative to segment tracking markers. Briefly, each segment coordinate system was created by aligning the long axis of the segment with the vector connecting adjacent joint centres. The anterior axis was then calculated from the cross product of the long axis and the vector connecting the medial and lateral markers of the distal joint. Finally, the hinge axis was calculated as the cross-product of the anterior and long axes (a numerical sketch of this construction follows).
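The construction just described can be sketched compactly with numpy. The function and variable names, the axis sign conventions, and the example coordinates are illustrative assumptions rather than the study's MATLAB implementation.

```python
# Sketch of a segment coordinate system: the long axis joins adjacent joint
# centres, the anterior axis is the cross product of the long axis with the
# medial-lateral vector of the distal joint, and the hinge axis closes the
# right-handed triad.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def segment_axes(prox_jc, dist_jc, dist_medial, dist_lateral):
    long_axis = unit(prox_jc - dist_jc)           # points proximally
    ml = dist_lateral - dist_medial               # medial-to-lateral vector
    anterior = unit(np.cross(long_axis, ml))      # anterior axis
    hinge = np.cross(anterior, long_axis)         # flexion/extension axis
    return np.column_stack([anterior, hinge, long_axis])  # 3x3 rotation

# Example: a shank segment from knee and ankle landmarks (coordinates in mm).
med_mal, lat_mal = np.array([-35.0, 0.0, 80.0]), np.array([55.0, 0.0, 70.0])
knee_jc = np.array([0.0, 0.0, 440.0])
ankle_jc = (med_mal + lat_mal) / 2
print(np.round(segment_axes(knee_jc, ankle_jc, med_mal, lat_mal), 3))
```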
Segment tracking clusters were affixed to the pelvis, thighs, shanks and feet. Each subject performed a standing neutral trial with all anatomical markers to establish the anatomical model, and their position was standardized using a template under their feet. Once the standing trial data were collected, the anatomical markers were removed; however, the segment tracking markers were left on the subject for the dynamic trial. The treadmill was then sped up to a comfortable jogging speed between 5.5 and 6.5 mph. The subject was given a 2-5 minute period of acclimatization, after which a 20-60 second dynamic running trial was collected. All data were post-processed in MATLAB (The Mathworks, Natick, USA) using custom software according to the following procedures.

Error Sensitivity

For each subject, a set of 29 standing trials was created from their original standing trial. One of the 29 trials consisted of the unaltered trial, and each of the remaining 28 trials had a randomly simulated deviation added to either the vertical or the anteroposterior coordinate of ONE of the following markers: greater trochanters (bilaterally), medial and lateral knee markers (bilaterally), medial and lateral ankle markers (bilaterally) and medial and lateral metatarsophalangeal markers (bilaterally), for a total of 28 marker-coordinates. Errors (e) were randomly drawn from the uniform distribution over the interval -30 mm < e < 30 mm, as suggested by Stagni et al. [11]. Marker data from the running trials were low-pass filtered using a recursive Butterworth filter with a 10 Hz cutoff. Kinematics for segment tracking clusters were calculated at the ankle, knee and hip, from the standing and running trials, using a singular-value decomposition method [12] and a joint coordinate system [10]. Angular velocities were calculated by differentiating joint angles. A selection of discrete variables was then calculated from the kinematic data, consisting of typical peaks and excursions reported in the clinical biomechanics literature [13,14,15,16]. This procedure was repeated with each of the 29 standing trials, using the same running trial each time (a code sketch of this simulation loop is given below).

Kinematic change was defined as the absolute difference between the outcome kinematics for the unaltered standing trial and the outcome kinematics from each of the 28 standing trials with error introduced. In order to compare relationships between marker placement errors and downstream changes in kinematics, each kinematic change was standardized to 10 mm (approximately one marker width) of placement error. These kinematic change ratios were therefore calculated in units of degrees/10 mm of error for angles, or degrees per second/10 mm of error for angular velocities. This transformation of the data resulted in n = 411 normally distributed ratios for each marker/variable pair. A point estimate of the 95%ile of the ratio was calculated for each marker/variable pair using a weighted average of the two values closest to the 95th percentile. The entire simulation was repeated 10 times using the same procedure, but drawing different random errors from the uniform distribution. The 95%ile ratios calculated for marker/variable pairs for each of the 10 iterations were averaged to produce a mean point-estimate of the 95%ile change ratio.
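A minimal sketch of the deviation simulation described above follows. The marker dictionary, the coordinate indices, and the function name are assumptions for illustration; the real pipeline was implemented in MATLAB.

```python
# For each targeted marker, one error per coordinate (anteroposterior or
# vertical) is drawn uniformly from [-30, 30] mm, as in Stagni et al., and
# added to a copy of the standing-trial marker positions.
import numpy as np

rng = np.random.default_rng(0)
AP, VERT = 0, 2  # assumed anteroposterior and vertical coordinate indices

def perturbed_trials(standing_markers, target_markers):
    """Yield (marker, coord, error_mm, altered_trial) for each simulated trial."""
    for name in target_markers:
        for coord in (AP, VERT):
            error = rng.uniform(-30.0, 30.0)  # mm
            trial = {k: v.copy() for k, v in standing_markers.items()}
            trial[name][coord] += error
            yield name, coord, error, trial

standing = {"R_lat_knee": np.array([60.0, 10.0, 450.0]),
            "R_med_knee": np.array([-40.0, 10.0, 455.0])}
for name, coord, err, trial in perturbed_trials(standing, ["R_lat_knee"]):
    print(name, coord, round(err, 1))
```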
IQRR Classifier Evaluation

A leave-one-out cross-validation approach was used to assess IQRR classifier performance. Each of the 29 static trials used in the simulation was also spatially normalized using the modified GPA procedure [4], using n = 411 − 1 standing trials as the reference database and leaving out the standing trial being analyzed. Briefly, a subset of reference data was selected from the reference database using a majority-vote, nearest-neighbour analysis, and the IQRR for each coordinate-marker pair was calculated according to previously published methods [4]. Using the results of the error simulation, kinematic change in selected discrete variables was compared with the IQRR scores from the GPA.

Confusion matrices were constructed by dichotomizing kinematic variable data and IQRR scores using thresholds, which were set iteratively. For IQRR scores, thresholds were set at increments of 0.1 from 0.0 to 1.0. For kinematic change, thresholds were set at 0.1 degree increments from 0 to 15 degrees (beyond which fewer than 0.5% of cases occurred). At each pair of thresholds, a corresponding confusion matrix was generated by counting true-positives, false-positives, true-negatives and false-negatives with respect to the thresholds set. If any category of the confusion matrix consisted of fewer than five instances for a given threshold, this was identified as a marginal case and the threshold pair was discarded. This procedure was repeated a total of 10 times using the marker errors from each iteration of the error simulation, producing a total of 10 confusion matrices for each threshold pair. For each confusion matrix, the Matthews correlation coefficient (MCC) was calculated as a single descriptor of classifier performance for a given pair of kinematic change and IQRR score thresholds. A point estimate of the mean MCC for a threshold pair was calculated as the average of the MCCs across the 10 iterations of the error simulation. This generated a 10 x 150 matrix of mean MCCs corresponding to each combination of thresholds. The maximum mean MCC was found in order to identify the pair of thresholds that produced the most balanced classifier performance (the threshold sweep is sketched below).
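The threshold sweep and MCC computation can be sketched as follows. Array names, and the exact grid endpoints, are illustrative; the study's implementation was in MATLAB.

```python
# At each (IQRR, kinematic-change) threshold pair, placements are dichotomized
# and a confusion matrix yields the Matthews correlation coefficient; pairs
# with any cell below five counts are discarded as marginal cases.
import numpy as np

def mcc(tp, fp, tn, fn):
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def sweep(iqrr_scores, kin_changes):
    best = (-1.0, None)
    for t_iqrr in np.arange(0.0, 1.01, 0.1):
        for t_kin in np.arange(0.0, 15.01, 0.1):
            pred = iqrr_scores > t_iqrr   # classifier flags placement as deviant
            real = kin_changes > t_kin    # deviation actually altered kinematics
            tp = np.sum(pred & real); fp = np.sum(pred & ~real)
            tn = np.sum(~pred & ~real); fn = np.sum(~pred & real)
            if min(tp, fp, tn, fn) < 5:   # marginal case, discarded
                continue
            score = mcc(tp, fp, tn, fn)
            if score > best[0]:
                best = (score, (round(t_iqrr, 1), round(t_kin, 1)))
    return best
```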
Real-World Pilot Study

In order to support a real-world application of the IQRR scoring method and thresholds, a pilot test was conducted using one test subject with several marker placements. One placement was performed by the aforementioned Expert, and six more marker placements were conducted by Novices with anatomical knowledge but no prior biomechanics experience. Each Novice was given a schematic indicating the placements, and the anatomical locations were described verbally. Segment tracking clusters were placed only once, and were therefore identically located for all trials. Within 30 seconds of the first placement and standing trial by each Novice, standardized feedback was given regarding their placement: a 3D plot was presented, showing the expected location of anatomical markers and the location of the Novice placements in "IQRR space" for the anteroposterior and vertical coordinates (Fig 1). The data were presented in the context of a 3D surface model of the lower limb to provide visual guidance; however, no scale or measurement information was given. A threshold of 0.8 was chosen as the cutoff for the IQRR score, based on the findings from the IQRR classifier evaluation. Novices were instructed to change only those markers for which the IQRR of 0.8 was exceeded, and to use their judgment in making modifications. After placements were modified, a second standing trial was taken.

After all marker placements by the Expert and Novices were completed, the test subject performed a running trial, which was then analyzed using all 13 standing trials. The kinematic change, or deviation from the Expert data, was calculated as the difference between the discrete kinematic variables of the Expert and those calculated from the Novice marker placements. Median deviation was then determined both prior to feedback and afterwards, across both left and right sides, for 9 kinematic variables: peak ankle abduction, internal rotation and dorsiflexion; peak knee abduction, external rotation and flexion; and peak hip adduction, internal rotation and extension. Wilcoxon signed-rank tests were performed on all 9 variables, and tests were adjusted using the Bonferroni-Holm procedure with a family-wise alpha of 0.05.

Error Sensitivity

In general, errors in the anteroposterior direction exhibited clear linear relationships with variables in the transverse plane, along with much larger 95%ile change ratios than in other planes (Fig 2). Conversely, errors in the vertical coordinate tended to exhibit less clearly defined relationships with frontal plane variables.

Fig 1. Feedback tool screenshot. Screenshot demonstrating the feedback given to the Novices of the pilot study to indicate potential errors in marker placement. A lower-limb surface model was constructed and anatomical marker position data were overlaid to provide context for users (segment marker locations were identical for all trials, and are therefore omitted). The marker position data consisted of expected positions (green circles), which were scaled to fit the surface model, and dimensionless IQRR scores (blue and red circles), which were scaled and positioned relative to the expected position with a connecting line to indicate directionality. Red circles indicated that a marker had crossed the threshold of 0.8 for the associated IQRR score, while blue circles indicated the marker was within the threshold. Participants were instructed to modify only those placements indicated by red circles, and to use their judgment in deciding how much to move the marker.

Knee and ankle markers demonstrated influence over the largest number of kinematic variables, with trochanter markers and MTP markers influencing only a few variables (Fig 3). In terms of peak angles, the anteroposterior (AP) coordinates of the knee and ankle markers produced the largest angular changes (up to 7.59 degrees/10 mm). In terms of angular velocities, errors in AP ankle markers produced the largest changes (up to 45.5 degrees per second/10 mm).

IQRR Classifier Evaluation

There was no single IQRR threshold that produced maximum MCCs across all variable/marker pairs; therefore, maximum MCCs were identified for selected IQRR thresholds to evaluate potential performance benefits (Tables 1-3). An IQRR threshold of 0.8 maximized true positive rates (sensitivity) across nearly all of the marker-coordinate pairs, whereas larger tradeoffs were apparent for other threshold criteria. Each marker/variable pair was evaluated on IQRR thresholds from 0.0 to 1.0 (0.1 increments) and on kinematic change thresholds from 0.0 to the maximum value (0.1 degree increments). Table 1 includes results for the maximum MCC scores found across all threshold pairings. Table 2 includes results when the IQRR threshold was fixed at 0.9 and the maximum MCC was found over kinematic change. Table 3 includes results when the IQRR threshold was fixed at 0.8 and the maximum MCC was found over kinematic change.
Real-World Pilot Study

Median deviation in kinematic variables between the Novice testers and the Expert trended towards zero in 7 of the 9 kinematic variables after feedback was given (Fig 4). However, these differences were only significant for peak knee flexion and peak hip extension (family-wise α = 0.05).

Discussion

The purposes of this study were to: 1) use simulated deviations in marker placement to calculate resulting changes in downstream kinematic variables and quantify relationships between marker placement and changes in kinematic variables; 2) evaluate the classification performance of the IQRR score by considering MCCs, false positive (fall-out) and true positive (sensitivity) rates for a range of kinematic and IQRR thresholds; and 3) conduct a real-world pilot study of the IQRR method. The results indicate large influences of malleolus and knee marker placement deviations on kinematics, along with favorable performance of the IQRR classifier in detecting markers placed in different (unexpected) locations.

It has previously been demonstrated that deviation in marker placement can affect many downstream variables [5], and that precision in the 3D coordinates of anatomical landmarks propagates into joint angle calculations for the lower limb [6,17]; however, specific relationships between the two had not been elucidated. According to the current results, it is not only knee markers that can have a large influence over kinematic variables, but ankle marker placement as well. In both cases, the most affected joint angles were those in the transverse plane at the hip, knee and ankle, while the most affected velocities were in the frontal and transverse planes at the knee joint. These findings are consistent with experimental studies that have also highlighted the sensitivity of variables in the transverse plane [2,18,19]. Taken together, these results argue for caution in measuring kinematics of the transverse plane.

Previous studies have postulated that relationships between marker placement and gait kinematics arise from conventions in the creation of segment coordinate systems [1,6,17,20]. Indeed, it is likely that the conventions chosen will determine the propagation of deviations, and therefore generalizations from the current findings must be interpreted in the context of the model used. For many segment-fixed joint coordinate systems (JCS), the joint centre and the hinge axis are defined using two markers on the lateral and medial sides of a joint, making these JCS hinges highly sensitive to anteroposterior deviations. Since joint angles are calculated based upon the orientations of the two segments they connect, the construction of either segment coordinate system may exert an influence over the calculated joint angle. It is therefore understandable that malleoli markers influence ankle and knee transverse angles, but not hip angles, while knee markers primarily influence hip and knee transverse angles (Fig 3). Knee, ankle and hip markers also influence sagittal plane joint angles, an effect that likely arises from the construction of the long axes of segments based on the location of the proximal joint centres. Although this effect is much smaller than the effects in other planes, it is worth noting that prior studies have demonstrated small but significant differences in sagittal plane joint angles between testers [1], and it is possible that these differences can be entirely accounted for by this mechanism.
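As a rough geometric check on the magnitude of the hinge-axis sensitivity (an illustration with an assumed marker separation, not a value taken from the study): if the two hinge-defining knee markers are separated mediolaterally by roughly w ≈ 100 mm, then an anteroposterior placement error e in one of them tilts the hinge axis by approximately

\[ \Delta\theta \approx \arctan\!\left(\frac{e}{w}\right) = \arctan\!\left(\frac{10\ \mathrm{mm}}{100\ \mathrm{mm}}\right) \approx 5.7^{\circ}, \]

which is of the same order of magnitude as the 5.17-7.59 degrees per 10 mm change ratios reported above for frontal and transverse plane angles.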
Prior studies of placement deviation have asserted that marker placement deviations are unpredictable [17], and indeed, it is not eminently clear how these deviations occur. To this point, by using a large cohort, this study provides near-worst-case 95th percentile changes in a given kinematic variable in response to a 10 mm placement deviation. The standard value of 10 mm was chosen as this approximates one marker diameter (9.5 mm markers were used in this study). This 'rule-of-thumb' can be applied in context to provide an estimate of whether a kinematic difference might be spurious. As a simple example, if a researcher or clinician is confident they rarely deviate in malleoli markers by more than half a marker width, then they can infer that a repeated-measures difference in peak ankle rotation which exceeds 3.80 degrees would fall outside of the 95%ile error estimate (7.59 degrees/10 mm error × 1/2). It is critically important to qualify, however, that the validity of this approach is dependent upon the lower extremity model chosen by the investigators.

As a binary classifier of placement errors, the IQRR score shows promise as a means of deviation detection. The fixed IQRR threshold of 0.8 results in very good sensitivity (true positive rates of ~62-94%), with reasonably low corresponding thresholds for many discrete kinematic variables (Table 3). The specificity of the IQRR score is not as high (true negative rates of ~51-72%); however, this is not entirely unexpected. It is clear that the IQRR score does not reflect only deviations in placement, but rather a combination of deviation and unique anatomical configuration [4]. Given this feature of the analysis, and the high false positive (fall-out) rates, the results cannot be taken as an absolute measure of placement deviation, but rather as a valuable training and reference tool.

Pilot data using placement feedback demonstrated that a group of Novices with no biomechanics experience were able to immediately reduce their median deviation from the Expert by up to ~80% for knee sagittal kinematics. In two individual cases, there were consistent

A principal limitation of the current study is the reference database from which the feedback was generated. The nature of this reference database will define the limits of its applicability in terms of specific morphologies (i.e. developmental disorders, surgical changes, diseases affecting bone structure). However, while this method may currently be limited in application to a population of morphologically average individuals, the database may easily be scaled to include more individuals as data are collected on them. This is a significant strength of the current method, as the database will continue to evolve and expand as the method is applied in practice.

In conclusion, important and identifiable relationships exist between marker placement deviations and downstream kinematics, placement classification based on the IQRR score detected up to 94% of simulated deviations, and pilot testing of a placement feedback tool by Novices resulted in significant reductions in their deviation from results obtained by the Expert. Thus, the current study supports the use of placement deviation detection and feedback, and speaks to the need for future research in evaluating the utility and optimal application of this approach.

Supporting Information

S1 Dataset. Comprehensive results from iteration 1 of the simulation experiment. Each iteration of the simulation is saved as a separate file.
Within each file, individual spreadsheets contain data from each subject. Within each spreadsheet, rows 3-30 contain calculated discrete variables from the unaltered trials and with errors added to each marker/coordinate, and rows 34-60 contain IQRR scores from the marker placement GPA calculations.
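A hypothetical loader for this layout (the file and sheet names are assumptions; adjust them to the actual workbook):

```python
import pandas as pd

# Each subject's sheet: spreadsheet rows 3-30 hold the discrete kinematic
# variables (unaltered and with per-marker/coordinate errors added), and
# rows 34-60 hold the IQRR scores from the marker placement GPA calculations.
sheets = pd.read_excel("S1_Dataset_iteration1.xlsx", sheet_name=None, header=None)
for subject, df in sheets.items():
    kinematics = df.iloc[2:30]    # spreadsheet rows 3-30 (0-indexed slicing)
    iqrr_scores = df.iloc[33:60]  # spreadsheet rows 34-60
    print(subject, kinematics.shape, iqrr_scores.shape)
```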
2016-05-04T20:20:58.661Z
2016-01-14T00:00:00.000
{ "year": 2016, "sha1": "f234c55f773d5b2d708feeab21d1bc814ed4a8c6", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0147111&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f234c55f773d5b2d708feeab21d1bc814ed4a8c6", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
5256724
pes2o/s2orc
v3-fos-license
The Steroid Metabolome in the Isolated Ovarian Follicle and Its Response to Androgen Exposure and Antagonism

Abstract

The ovarian follicle is a major site of steroidogenesis, crucially required for normal ovarian function and female reproduction. Our understanding of androgen synthesis and metabolism in the developing follicle has been limited by the sensitivity and specificity issues of previously used assays. Here we used liquid chromatography-tandem mass spectrometry to map the stage-dependent endogenous steroid metabolome in an encapsulated in vitro follicle growth system, from murine secondary through antral follicles. Furthermore, follicles were cultured in the presence of androgen precursors, nonaromatizable active androgen, and androgen receptor (AR) antagonists to assess effects on steroidogenesis and follicle development.
Cultured follicles showed a stage-dependent increase in endogenous androgen, estrogen, and progesterone production, and incubations with the sex steroid precursor dehydroepiandrosterone revealed the follicle as capable of active androgen synthesis at early developmental stages. Androgen exposure and antagonism demonstrated AR-mediated effects on follicle growth and antrum formation that followed a biphasic pattern, with low levels of androgens inducing more rapid follicle maturation and high doses inhibiting oocyte maturation and follicle growth. Crucially, our study provides evidence for an intrafollicular feedback circuit regulating steroidogenesis, with decreased follicle androgen synthesis after exogenous androgen exposure and increased androgen output after additional AR antagonist treatment. We propose that this feedback circuit helps maintain an equilibrium of androgen exposure in the developing follicle. The observed biphasic response of follicle growth and function to increasing androgen supplementation has implications for our understanding of polycystic ovary syndrome pathophysiology and the dose-dependent utility of androgens in in vitro fertilization settings. (Endocrinology 158: 1474-1485, 2017)

Female reproductive health relies on the proper development of the follicle, the fundamental unit of the ovary. As waves of follicles grow, they produce sex steroid hormones that regulate maturation in an autocrine/paracrine manner, supply endocrine feedback that sets the tempo of each reproductive cycle, prepare the reproductive tissues for pregnancy, and regulate bone, cardiovascular, and metabolic health. Many elegant studies have evaluated androgen production in various follicle culture and in vivo settings (1-6). We have extensively validated a method to study encapsulated in vitro ovarian follicle growth (eIVFG) from mouse, bovine, goat, canine, nonhuman primate, and human biomaterials, all of which result in mature eggs or embryos (7-15). Steroid hormone measurements in this culture system provided valuable information but relied on immunoassays (7,16). This latter technology is hampered by intrinsic problems of sensitivity and specificity, especially in the presence of low steroid concentrations (17), such as the production of androgens by individual preantral growing follicles in culture. Modern mass spectrometry-based steroid analysis overcomes these challenges (18) but has not yet been applied to the developing follicle. Here we studied endogenous basal steroid production in isolated ovarian follicles by liquid chromatography-tandem mass spectrometry (LC-MS/MS), employing a murine eIVFG system.

Another advantage of eIVFG is the possibility of directly studying the dose- and stage-dependent effects of exogenous factors on individual follicle development and function. Manipulating the local or endocrine microenvironment of the growing follicle may also phenocopy certain aspects of human ovarian disease (19). With use of eIVFG, testosterone directly increases survival and growth of macaque secondary follicles, supporting the notion that androgens regulate follicle dynamics (20).
Indeed, androgen action is essential for preantral follicle development, as initially demonstrated by global androgen receptor (AR) knockout models and mirrored by the granulosa cell-specific AR knockout mice, in which females are subfertile and have reduced follicle development, altered gonadotrophin regulation, decreased ovulation rates, and poor oocyte quality (21-26). Recent work has shown that nuclear and extranuclear AR-mediated signaling pathways are crucially involved in promoting follicle growth and survival (27). These fundamental studies are important because alterations in androgen homeostasis in women may result in infertility and anovulation. In clinical conditions of androgen excess, as observed in women with polycystic ovary syndrome (PCOS), follicle development is arrested, leading to chronic anovulation and subfertility (28,29). The dysfunctional follicle phenotype may relate to excess androgen exposure during critical developmental stages, as demonstrated by studies in mice showing that prepubertal androgen exposure leads to follicular arrest and increased follicular atresia (30). Similarly, in nonhuman primates, in vivo exposure to exogenous androgens in early gestation results in PCOS-like ovarian dysfunction in the adult offspring, manifesting with follicle excess, oligomenorrhea, and hyperandrogenemia (31,32).

Although androgen excess is deleterious for follicle development, androgen deficiency might equally alter follicle maturation. In assisted reproductive clinics, androgen supplementation, either with the androgen precursor dehydroepiandrosterone (DHEA) or with testosterone, is widely used to improve follicular development and fertility in women with diminished ovarian reserve (33-35). Here we have used the murine eIVFG system and steroid analysis by LC-MS/MS to comprehensively map the stage-dependent endogenous steroid metabolome of the follicle during development and to directly examine the dose-dependent effects of the nonaromatizable potent androgen 5α-dihydrotestosterone (DHT), the sex steroid precursor DHEA, and the selective AR antagonist enzalutamide (MDV) on follicular function and steroidogenesis.

Methods

Murine encapsulated in vitro follicle culture

CD1 mice were housed and bred in a temperature- and light-controlled (12-hour light, 12-hour dark cycle) environment and were provided with unrestricted access to water and chow (PicoLab Mouse Diet 20; Sandown Scientific, Hampton, UK) in the Biomedical Services Unit at the University of Birmingham. Nonweaned pups (days 15 to 17) were culled by cervical dislocation before dissection for excision of ovarian tissue. The euthanasia procedure was conducted in accordance with current UK Home Office regulations under the UK Animals (Scientific Procedures) Act 1986 and was covered by the generic breeding license of the Biomedical Services Unit. Ovaries were transported in L-15 GlutaMAX medium (Thermo Fisher Scientific, Loughborough, UK) supplemented with 1% fetal bovine serum (FBS; Sigma-Aldrich, Gillingham, Dorset, UK) and 0.5% penicillin/streptomycin (Thermo Fisher Scientific) in a carrier-incubator at 37°C. After transport, ovaries were transferred to a dish containing L-15 medium supplemented with 0.1% DNase I (Lorne Laboratories Limited, Reading, UK) and 0.1% Liberase TM (Roche Life Science, West Sussex, UK) and were placed on a shaker in a 37°C, 6% CO2 incubator for 35 to 40 minutes.
After the addition of 10% FBS, multilayered secondary follicles (diameter, 150 to 180 μm) were mechanically isolated employing insulin-gauge needles under a dissection scope. Follicles were placed in a maintenance medium containing minimal essential medium (α-MEM GlutaMAX; Life Technologies Ltd, Paisley, UK) supplemented with 1% FBS and 0.5% penicillin/streptomycin for 2 to 3 hours in a 37°C, 6% CO2 incubator. For the treatment conditions, culture medium was supplemented with 25 or 50 nM DHT (Sigma-Aldrich); 100, 200, or 500 nM DHEA (Sigma-Aldrich); and 10 or 25 nM estradiol (E2) (Sigma-Aldrich). The steroid concentrations used were based on published dose-response experiments (27,36). For AR blockade, MDV (Axon Medchem, Groningen, The Netherlands) was used at a dose of 1 μM on the basis of its half maximal inhibitory concentration value (37). After plating, encapsulated follicles were imaged using a Nikon Eclipse TE300 light microscope (Leica, Nikon, UK) with a 10× phase objective. Follicles with intact alginate beads and with preserved integrity of the oocyte and somatic cell compartment were selected for culture. Follicles were cultured for 6 days in a 37°C, 6% CO2 incubator. Media changes (50 μL) were performed on alternate days, with fresh steroids at the initial concentration for the treatment conditions, as well as repeated imaging. Images were analyzed using ImageJ software (National Institutes of Health, Bethesda, MD). Follicle sizes were obtained by averaging two perpendicular measurements of follicle diameter. The movement of the oocyte to an eccentric position with the appearance of a fluid-filled space determined the presence of an antrum. Follicles were classified as nonviable when the oocyte or somatic compartment appeared shrunken or dark, when their interphase was compromised, or when the alginate bead was disrupted. Only surviving follicles were included in the data analysis.

In vitro follicle maturation

After the 6-day culture period, follicles were retrieved from the alginate bead using alginate lyase (Sigma-Aldrich) and were transferred to a maturation medium composed of α-MEM GlutaMAX, 10% FBS, 1.5 IU/mL human chorionic gonadotropin (Sigma-Aldrich), and 5 ng/mL epidermal growth factor (BD Biosciences, Oxford, UK) for 16 hours at 37°C, 6% CO2, as previously described (7). Oocytes were then denuded from the surrounding cumulus cells by treatment with 0.3% hyaluronidase (Sigma-Aldrich) and gentle aspiration. The oocytes were classified as mature, or metaphase II, when a polar body was visible in the perivitelline space. Healthy oocytes that had not resumed meiosis were classified as immature.

Steroid analysis by LC-MS/MS

Pooled follicle culture supernatant (from 30 to 100 follicle incubations) was placed in silanized glass tubes, and 20 μL of internal standard was added. Three milliliters of methyl tert-butyl ether was added to each sample, followed by vortexing and freezing for 1 hour. The upper organic phase was transferred to a 96-well plate using glass Pasteur pipettes, followed by evaporation under nitrogen at 55°C. Samples were reconstituted with 125 μL methanol:water mixture (50:50) and were frozen at -20°C before analysis. Steroids were quantified by LC-MS/MS using a Waters Xevo mass spectrometer with an Acquity UPLC system with the following settings: electrospray ionization source with capillary voltage at 4.0 kV, source temperature at 150°C, and a desolvation temperature of 500°C.
Steroid identification was based on an identical retention time and two identical mass transitions when compared with authentic reference compounds. Quantification was performed relative to a calibration series (0, 0.5 to 250 ng/mL of each steroid) with an appropriate internal standard steroid, as previously described (38), and was appropriately validated, including determination of the lower limits of detection (LLOD) and quantification (LLOQ) (Table 1). LLOQ and LLOD were calculated from calibration series experiments employing steroid-spiked cell culture media. LLOQ was defined as a detectable signal with a signal/noise ratio of more than 10:1 and with a signal variation of <20%; LLOD was defined as the lowest detectable concentration with a signal/noise ratio of more than 3:1 and with a signal variation of <20%. Steroid concentrations above the steroid-specific LLOQ were considered accurately quantified; steroid concentrations below the steroid-specific LLOQ but above the respective LLOD were described as detectable. All measurements were performed in triplicate except for treatment conditions DHT + MDV and DHEA 100 nM, which were assessed in duplicate because of a shortage of biological material.

Messenger RNA expression analysis

At the end of culture, we pooled 18 to 30 follicles for each experimental condition, which were immediately flash frozen in liquid nitrogen. RNA was purified from the follicles using the RNeasy Micro Kit (Qiagen, Manchester, UK). RNA quality and quantity were assessed employing NanoDrop technology (ND-1000; Thermo Fisher Scientific) and the High Sensitivity R6K ScreenTape System (Agilent, Cheshire, UK). RNA was diluted to a concentration of 50 to 100 ng/μL. RNA was reverse transcribed to complementary DNA (cDNA) using an AccuScript High Fidelity 1st Strand cDNA Synthesis Kit (Agilent Technologies) according to the instructions of the manufacturer. Messenger RNA (mRNA) expression levels were assessed by quantitative polymerase chain reaction using an ABI sequence detection system (Perkin-Elmer Applied Biosystems, Warrington, UK). All analyses were assessed in a 10-μL final volume in reaction buffer, containing 2× TaqMan Universal PCR Master Mix (5.0 μL; Thermo Fisher Scientific), probe-primer mix for the target gene (0.5 μL), and 4.5 μL cDNA (100 ng) (39). All reactions were normalized against the housekeeping genes 18S ribosomal RNA and ribosomal protein L18 (Rpl18) ribosomal RNA. Data were expressed as Δcycle threshold (CT) values [ΔCT = (CT of target gene) - (CT of housekeeping gene)].

Statistical analysis

Statistical analysis was performed with Prism 6 (GraphPad) software, using one-way analysis of variance with a post hoc Tukey test to compare follicle growth and oocyte size between the different treatment groups. Contingency analysis by Fisher's exact test was used for survival, antrum formation, and oocyte maturation status. Independent t tests were used to compare steroid quantifications and ΔCT values between control and treatment conditions. Matched or repeated measurements were analyzed using paired t tests, and unpaired t tests were used for independent measurements. All studies were performed in at least three independent experiments unless otherwise specified.
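As a minimal sketch of the ΔCT normalization and the paired comparison described above (the triplicate CT values are illustrative, not measured data):

```python
import numpy as np
from scipy import stats

# Hypothetical triplicate CT values; lower CT means more transcript.
ct_target_ctrl = np.array([24.1, 24.3, 24.0])
ct_house_ctrl = np.array([16.0, 16.2, 15.9])   # e.g. mean of 18S and Rpl18
ct_target_trt = np.array([22.8, 23.0, 22.7])
ct_house_trt = np.array([16.1, 16.0, 16.2])

# deltaCT = CT(target) - CT(housekeeping)
dct_ctrl = ct_target_ctrl - ct_house_ctrl
dct_trt = ct_target_trt - ct_house_trt

t_stat, p_value = stats.ttest_rel(dct_ctrl, dct_trt)  # paired t test for matched samples
ddct = dct_trt.mean() - dct_ctrl.mean()
print(f"ddCT = {ddct:.2f}, fold change ~ {2 ** -ddct:.1f}, p = {p_value:.3f}")
```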
Results

Endogenous steroid synthesis in the developing follicle

We used a murine eIVFG system to assess stage-dependent steroidogenesis in the developing follicle using mass spectrometry-based multisteroid profiling optimized for highly sensitive and specific detection of sex steroids and their precursors [Fig. 1(a)]. At day 2 of culture, we detected progesterone (Prog) and the sex steroid precursors DHEA, androstenedione (A'dione), and estrone (E1) at levels close to the lower limit of detection (0.5 to 2.0 nmol/L) [Fig. 1(b)]. At day 4, the androgen precursors DHEA and A'dione as well as bioactive testosterone were generated in quantifiable amounts, and 17β-estradiol became detectable. At day 6 of culture, Prog synthesis increased significantly (P < 0.001 vs day 4) to the quantifiable range, and we observed a significant surge in active sex hormones, including testosterone (P < 0.001), DHT (P = 0.04), and 17β-estradiol (P < 0.001) [Fig. 1(b)].

Corresponding to the increasing production of active sex steroids across follicle development, steroid enzyme mRNA also increased in a stage-dependent fashion. A significant (P < 0.01) increase in 17β-hydroxysteroid dehydrogenase type 1 was noted by day 6 (Supplemental Table 1). This enzyme catalyzes the conversions of A'dione to testosterone and E1 to E2 and is FSH responsive (40). Concurrent with follicle maturation, we detected significantly increased transcription (P < 0.05) of the FSH-regulated CYP19a1 gene encoding aromatase, the enzyme responsible for the conversion of androgens to estrogens (Supplemental Table 1). Consistent with the increasing generation of Prog detected by mass spectrometry, mRNA expression analysis showed increased transcription of the side-chain cleavage enzyme CYP11a1 (P < 0.01) [Supplemental Table 1; Fig. 1(a)].

Effect of exogenous androgen exposure and antagonism on follicular development and steroidogenesis

We next examined the direct effects of androgen supplementation on follicle morphology, oocyte development, and steroid synthesis in the isolated follicle. For these studies, we delivered exogenous DHT to secondary follicles. DHT is the most potent androgen, which, in contrast to testosterone, cannot be converted to estrogens by aromatase activity. To determine AR-mediated androgen effects, we used AR blockade by administration of the highly selective AR antagonist MDV, in isolation and in combination with DHT. Individual follicles were imaged at days 0, 2, 4, and 6 of culture to study follicle growth by measuring follicle diameters, antrum formation, and follicle survival. Oocyte quality was assessed following in vitro maturation.

DHT-treated follicles were significantly growth advanced at all stages of follicular development [Fig. 2(a) and 2(b)]. Conversely, MDV- and DHT + MDV-treated follicles were growth restricted compared with control follicles [Fig. 2(a) and 2(b)]. DHT supplementation resulted in significant acceleration of the preantral to antral follicle transition (P < 0.0001), with a higher total number of follicles reaching the antral stage at day 6 (P < 0.001) [Fig. 2(c)]. By contrast, MDV- and DHT + MDV-treated follicles showed evidence of delayed antrum formation (P < 0.05 at day 4 for MDV follicles and P < 0.0001 at days 4 and 6 for DHT + MDV follicles) [Fig. 2(c)]. DHT-treated follicles had a significantly increased survival rate compared with control follicles; survival of MDV- and DHT + MDV-treated follicles did not differ from that of controls. DHT exposure decreased the follicles' endogenous androgen synthesis, an effect that was reversed by additional MDV treatment [Fig. 3(c)].
These differences in androgen production were not mirrored by significant changes in steroidogenic enzyme expression at the mRNA level (data not shown). These results suggest that in vitro cultured follicles are capable of autonomously adapting endogenous androgen synthesis in response to changes in AR activation status, possibly indicating an intrafollicular AR-mediated autocrine feedback circuit involved in steroidogenesis. Interestingly, the addition of MDV alone resulted in significantly decreased steroid output at days 4 and 6 (Fig. 3), which suggests that the observed intrafollicular feedback circuit becomes activated only after induction by endogenous or exogenous androgen exposure. DHT treatment decreased E2 synthesis at day 6 (77 ± 10 nmol/L vs 138 ± 8 nmol/L in controls; P < 0.01) and had no effect on Prog production (10 ± 0.5 nmol/L vs 11 ± 1 nmol/L in controls; not significant). DHT + MDV supplementation tended to increase E2 synthesis at day 6 (169 ± 75 nmol/L vs 77 ± 10 nmol/L with DHT alone; not significant) and significantly increased Prog production (45 ± 2 nmol/L vs 10 ± 0.5 nmol/L with DHT alone; P < 0.01).

The androgen precursor DHEA is converted to active sex steroid in the developing follicle

Because the secondary follicles synthesize appreciable levels of steroid hormones in the second half of the in vitro culture, we used the addition of the sex steroid precursor DHEA as a probe to further examine the stage-dependent steroidogenic capacity of the follicle. Steroid profiling by LC-MS/MS revealed that DHEA was actively converted by the follicle at all time points, including the immature stage (day 2), when endogenous steroidogenesis in control follicles was not quantifiable [Fig. 4(a)]. Supplementation with 100 nM DHEA revealed a high capacity for downstream androgen generation (A'dione, testosterone, and DHT) and high levels of conversion to estrogens at day 4 [Fig. 4(b)], which appeared further enhanced by day 6 [Fig. 4(c)]. When increasing DHEA concentrations to 200 nM and 500 nM, we observed a gradual loss of appreciable generation of DHT from testosterone alongside a decrease in estrogen production, which became significant at day 6 of 500 nM DHEA (P < 0.05) [Fig. 4(c)]. At the mRNA level, incubation with DHEA resulted in a significant (P < 0.05) downregulation of CYP19a1 mRNA expression compared with control follicles.

When calculating the androgen/estrogen ratio [(A'dione + testosterone)/(E1 + E2)] to assess aromatase (CYP19a1) activity (5,41), we found that DHEA 100 nM maintained the balance observed in control follicles (DHEA 100 nM vs controls on day 6: 0.9 ± 0.3 vs 1.3 ± 0.01, respectively; P = 0.3), whereas higher DHEA concentrations significantly increased the androgen/estrogen ratio (DHEA 200 nM, 1.7 ± 0.09, P < 0.01 vs controls; DHEA 500 nM, 4.3 ± 1, P < 0.05; day 6), indicative of an androgenic intrafollicular milieu. Thus, using exogenous DHEA administration as a probe, we found that earlier stages of the developing follicle were capable of active androgen generation and that increased exposure to DHEA resulted in inhibition of aromatase activity and, consequently, estrogen production.

Effects of increasing concentrations of exogenous androgens and estrogens on follicular development

Next, we looked at the impact of the androgen precursor DHEA on follicular development and oocyte maturation. We showed that DHEA was converted by the follicles to androgens and subsequently estrogens.
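For reference, a minimal sketch of the androgen/estrogen ratio computation used above as a readout of aromatase activity (the concentrations are illustrative placeholders, not values from the study):

```python
# Androgen/estrogen ratio = (A'dione + testosterone) / (E1 + E2), all in nmol/L.
def androgen_estrogen_ratio(adione, testosterone, e1, e2):
    return (adione + testosterone) / (e1 + e2)

print(androgen_estrogen_ratio(adione=40, testosterone=25, e1=10, e2=40))   # 1.3, control-like balance
print(androgen_estrogen_ratio(adione=120, testosterone=80, e1=12, e2=35))  # ~4.3, androgenic milieu
```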
Therefore, we compared the effects observed after DHEA stimulation with the impact of increasing doses of nonaromatizable DHT and biologically active estrogen, E2, to dissect effects due to androgens vs estrogens in a potentially distinct fashion. Follicle size, reflective of follicular growth, was enhanced by DHEA 100 nM and DHT 25 nM [Fig. 5(a) and 5(b)]; increasing androgen exposure to DHEA 200 nM neutralized this effect, and a further increase to DHEA 500 nM and exposure to DHT 50 nM showed the opposite effect, with a significant reduction in follicle size [Fig. 5(a) and 5(b)]. The higher dose of E2 (25 nM) increased follicle size significantly, whereas 10 nM of E2 had no effect [Fig. 5(c)]. Antrum formation was significantly enhanced by DHT 25 nM, whereas increasing DHT to 50 nM reverted this effect [Fig. 5(e)]. DHEA 100 nM appeared to have a beneficial effect on antrum formation, whereas higher concentrations had an adverse effect, though the differences failed to reach statistical significance [Fig. 5(d)]. E2 had no effect on antrum formation [Fig. 5(f)]. Follicle survival rates increased significantly with exposure to DHEA 100 and 200 nM and DHT 25 nM. This effect was lost when DHEA was increased to 500 nM and DHT was increased to 50 nM. By contrast, E2 at 10 nM yielded no discernible effect on follicle survival, whereas a significant increase was observed after increasing E2 to 25 nM [Fig. 5(g-i)]. Oocyte size was significantly reduced by the higher concentrations of DHEA (200 and 500 nM) and the higher DHT concentration (50 nM) [Fig. 5(j) and 5(k)]. By contrast, E2 exposure had no effect on oocyte size [Fig. 5(l)]. These findings were completely mirrored when assessing oocyte nuclear maturation, which was significantly decreased by higher androgen concentrations but was not affected by estrogen administration [Fig. 5(m-o)]. Taken together, although moderate androgen levels exerted beneficial effects on follicle growth, survival, and antrum formation, increasing bioactive androgen caused poor oocyte quality and negative effects on follicle growth and antrum formation. Increasing estrogen exposure enhanced follicle growth and survival, with no effect on antrum formation or oocyte quality.

Discussion

Although previous studies have proven the importance of androgen action in follicular development, our study extends this knowledge by approaching the follicle as a coordinated steroidogenic unit. We report simultaneous quantitative analysis of multiple steroids in the developing murine follicle under physiological conditions and in the presence of androgen exposure and antagonism, with highly sensitive and specific multisteroid profiling by tandem mass spectrometry. We demonstrated that the growing follicle has the capacity for active sex steroid synthesis at all developmental stages examined and provided evidence for the existence of an intrafollicular AR-responsive feedback circuit that dynamically regulated androgen synthesis in an autonomous fashion. We confirmed the beneficial effects of low-dose androgen supplementation to the growing follicle and described an AR-mediated facilitating role in antrum formation. Finally, we observed that gradually increasing androgen concentrations resulted in follicle developmental arrest, characterized by suppressed oocyte maturation, follicular growth stagnation, and decreased estrogen synthesis.
We reported a quantitative multisteroid metabolome of the developing follicle, indicative of FSH-stimulated endogenous production of androgens, estrogens, and progestins, consistent with the current knowledge of follicular steroid production and in vivo hormone dynamics (42). Androgen secretion was quantifiable around day 4, which corresponds to antrum formation and gonadotrophin responsiveness. Estrogen biosynthesis increased sharply between days 4 and 6 of culture, as the follicle reached ovulatory maturity. At day 6, Prog secretion started to increase, as expected, in preparation for luteinization. The FSH-dependent steroidogenic enzymes 17β-hydroxysteroid dehydrogenase type 1 and CYP19a1 significantly increased with ongoing follicle maturation.

According to the two-cell, two-gonadotrophin hypothesis, luteinizing hormone (LH) stimulates A'dione production by theca cells, which provides a substrate for estrogen biosynthesis by granulosa cells (43). In the culture system used, however, maturing follicles produced significant amounts of E2 in an LH-free and serum-free medium. This means that follicular cells are able to constitutively produce androgens in the absence of LH. Paracrine theca cell LH-independent androgen production is known to occur under the influence of insulin (44,45), present in physiological amounts in the culture medium used. Expression of steroidogenic acute regulatory protein, a theca cell marker (46), remained stable during follicle culture, which suggests that de novo theca cell formation was limited and that granulosa-theca cell trans-differentiation possibly accounts for the observed androgen production. Although the eIVFG system allowed for complete theca cell development, LH was not present; future studies on the effects of increasing doses of LH on androgen production in our system would enhance the translatability of our results.

In our study, LC-MS/MS measured similar (47) or lower (7,48) concentrations of A'dione and E2 compared with those obtained in similar culture conditions but measured with immunoassays. These differences could be explained by the fact that immunoassays are prone to cross-reactivity, which may lead to falsely increased concentrations (49). A major advantage of our mass spectrometry-based multisteroid profiling assay is the ability to simultaneously measure multiple steroid concentrations in a single assay, whereas immunoassays are limited to one target molecule. Therefore, our validated mass spectrometry approach (50) yielded a state-of-the-art representation of the dynamic endogenous steroid production in murine follicles.

When follicles were exposed to exogenous DHT, the most potent and nonaromatizable androgen, we observed a downregulation of endogenous androgen secretion. This was AR-mediated, as the addition of the selective AR antagonist MDV prompted increased endogenous androgen synthesis. These findings are indicative of a feedback circuit at the level of the follicle, which may provide the homeostatic set point for androgen-AR downstream effects. We further reported a detailed analysis of AR-mediated androgen effects on the development of the follicle and oocyte. The current results are in line with the previously reported follicular growth-promoting effects of androgens (4,27,36,51) and their roles in protecting from atresia (27) and enhancing follicle survival (20). We showed that the process of antrum formation occurred earlier and to a higher extent in DHT-treated follicles and was impaired in AR-blocked follicles.
It was previously shown that follicles grown in antiandrogen serum (4), in the absence of FSH (27), or in steroid-depleted conditions (20) displayed limited antrum formation. Oocyte growth and maturation were not affected by AR agonist (DHT) or antagonist (MDV) treatment in our system. Tarumi et al. (52) treated mouse ovarian follicles in culture with DHT concentrations of 10^-10 to 10^-6 M and found no effect on the capacity of the oocyte to resume meiosis following an ovulatory stimulus. Lenie and Smitz (5) observed no change in oocyte quality when treating mouse follicles in vitro with the AR antagonist hydroxyflutamide or bicalutamide (in a concentration range of 5 nM to 5 μM), and only the highest dose (50 μM) of AR blockade resulted in decreased oocyte meiotic maturation.

Murine steroidogenesis resembles human steroid production but differs slightly in some details; for example, the human CYP17A1 enzyme does not efficiently convert 17-hydroxyprogesterone to A'dione (53), which means that the overwhelming majority of androgen synthesis in humans proceeds through the androgen precursor DHEA. The addition of 100 nM of DHEA had positive effects on follicle growth and survival and did not impair oocyte development, whereas increasing concentrations of DHEA (200 and 500 nM) provoked dysfunctional follicle development, with dose-dependent robust suppression of oocyte growth and maturation, aromatase enzyme activity, estrogen production, and follicular proliferation. Previous studies reported that in vitro supplementation of mouse follicles cultured with A'dione at doses >200 nM (54) or 10^-5 M (55) was associated with decreased meiotic maturation and impaired spindle formation. The toxic effect on the oocyte was attributed to estrogen excess in one study (52) and was inconclusive with regard to its androgen-mediated mechanism in the other study (54). In our study, the detrimental oocyte phenotype in >200 nM DHEA-treated follicles was clearly attributable to increased provision of active androgens to the follicle generated by conversion of DHEA to testosterone (T) and DHT.

The androgen-mediated downregulation of aromatase is in line with reported observations in granulosa (56) and Leydig cells (57,58). In rats, administration of DHT was accompanied by decreased granulosa cell proliferation (59), suppressed aromatase activity, and reduced E2 production (60). In primates, DHT administration resulted in reduced FSH-stimulated estrogen synthesis (61). DHEA does not mediate its effect by direct binding and transactivation of the AR but exerts androgenic activity only indirectly, after downstream conversion to AR-binding androgens, such as testosterone and DHT. In the context of our experiments with isolated murine ovarian follicles, we used DHEA as a probe for exploring the steroidogenic capacity of the developing follicle, which is more readily achieved by adding substrate than by looking at baseline production only. Employing single-follicle steroid metabolome analysis, we showed that the follicle is capable of downstream conversion of DHEA to active androgen as early as day 2 of follicular development. Our findings were obtained with experimental androgen concentrations in murine follicles. Therefore, we have to be cautious in translating them to human pathologic follicle development in hypoandrogenic or hyperandrogenic conditions; however, some general implications might hold true.
Our results contribute to the scientific foundation for DHEA pretreatment in poor-responder women undergoing IVF to improve the developmental quality of the maturing follicles. As others have highlighted before (62,63), the maturing follicle is subject to a delicate androgen homeostasis, with a clear threshold level. In our study, using a murine model, this threshold is DHEA 200 nM, beyond which the beneficial effects of enhanced active androgen generation become deleterious. Although a murine model has limitations in assessing DHEA action (i.e., given its limited physiological role in rodents), our results appear to indicate that over-replacement of DHEA in human assisted reproductive settings might actually harm oocyte quality and become detrimental for follicle growth. This study clearly underlines the need for adequately powered, randomized, controlled trials on DHEA supplementation that take into account baseline levels of circulating androgens and aim to restore physiological DHEA concentrations in women with low ovarian reserve undergoing fertility workup. Previous studies have shown that daily doses of DHEA (25 to 50 mg) restore physiological serum androgen concentrations from nondetectable baseline concentrations in women with adrenal insufficiency (64-66). Daily doses of ≥75 mg of DHEA will yield supraphysiological androgen concentrations (64). However, these are the doses used by many studies targeting enhanced fertility by DHEA treatment (67,68), which renders DHEA administration in this context a pharmacological intervention.

Sex steroid production occurred earlier in DHEA-treated follicles than in nonstimulated follicles, indicating that in the presence of steroid substrate, immature follicles are steroidogenically active and capable of androgen synthesis. In women with PCOS, DHEA and A'dione production is increased (69-71), and these circulating androgen precursors are likely to be metabolized by the small preantral PCOS follicles, thereby contributing to intraovarian hyperandrogenism. The intrafollicular feedback circuit we observed, with decreased endogenous androgen synthesis after exogenous DHT and increased androgen production with added AR antagonist, may help to maintain an androgen equilibrium in the follicle, providing steady levels of AR activation during development to maximize the beneficial effects of androgens on follicle growth and function. However, if androgen exposure exceeds the physiological concentration range for women, this feedback circuit can no longer provide sufficient protection, and the adverse biological effects of excess androgens affect follicle growth and function. We describe a gradual, oocyte-centered process of follicle developmental arrest in our study. From this, we extrapolate that local androgen excess may negatively affect oocyte quality in PCOS, which in turn could co-orchestrate antral follicle arrest.

In conclusion, we have shown that androgen homeostasis in the developing preantral and antral murine follicle is crucial to ensure optimal growth, steroidogenesis, and oocyte maturation. Our study illustrates the dynamic steroid metabolome of the developing follicle in vitro and a feedback mechanism at the level of the isolated follicle that responds to androgen excess with downregulation of intrafollicular androgen production; these findings have translational implications for our understanding of PCOS and low ovarian reserve.
2017-07-16T09:24:22.755Z
2017-02-23T00:00:00.000
{ "year": 2017, "sha1": "ba3b83afd83146099fb7080e694a87ac1ddb8202", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/endo/article-pdf/158/5/1474/14068919/en.2016-1851.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "8fafd1260df1375a084555d12edc4b0ae3378794", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
263843988
pes2o/s2orc
v3-fos-license
A Profound Vitamin B12 Deficiency in a Patient with Lofgren's Syndrome

Abstract

Lofgren's syndrome is a unique manifestation of sarcoidosis presenting with erythema nodosum, bilateral hilar lymphadenopathy and migratory polyarthritis. A concurrent vitamin B12 deficiency is not well described and may be related to a rare gastrointestinal manifestation of sarcoid and Lofgren's syndrome. We describe a case of a 57-year-old male who presented with migratory polyarthritis, erythemic nodules, edema of his legs and fever. His laboratory tests showed anemia with a profound vitamin B12 deficiency. Imaging demonstrated bilateral hilar adenopathy. Pathology revealed non-necrotizing granulomas consistent with sarcoidosis. The patient was started on prednisone and vitamin B12 supplements with improvement of his complaints and vitamin B12 levels. Sarcoidosis can manifest in many extrapulmonary organs, including the gastrointestinal tract, resulting in nutritional deficiencies, such as vitamin B12 deficiency. Treatment of these nutritional deficiencies includes treatment with steroids, as well as vitamin supplementation. We suggest this case to be a rare manifestation of gastrointestinal involvement in Lofgren syndrome; however, a biopsy from the GI tract was not performed to confirm the diagnosis. An informed consent was obtained from the patient. An institutional approval was not required for the publication of this case.

Introduction

Sarcoidosis is a multisystem granulomatous disease characterized pathologically by the presence of noncaseating granulomas in involved organs. 1 The etiology of sarcoidosis remains unknown; the disease is characterized by the accumulation of T lymphocytes, mononuclear phagocytes, and noncaseating granulomas in involved tissues. The most common manifestation of sarcoidosis is pulmonary disease; however, skin and articular manifestations are not rare. 2,3 Gastrointestinal (GI) involvement is rare, occurs in less than 1% of patients, and is often silent and under-diagnosed. GI sarcoidosis often masquerades as an infection, an inflammatory bowel disease, a peptic ulcer disease, a malignancy or even a foreign body, depending on the GI organ involved. 4,5 Lofgren's syndrome (LS) is a unique form of sarcoidosis, and it is characterized by the triad of acute onset of erythema nodosum, bilateral hilar lymphadenopathy and migratory polyarthritis. 3,6 Treatment of LS is typically supportive and prognosis is excellent, with a greater than 90% chance of spontaneous remission within 2 years. 6,7 However, GI involvement, and especially symptomatic GI involvement, in LS is poorly described in the literature. There are many etiologies which can cause vitamin B12 deficiency. A few mechanisms are thought to cause severe vitamin B12 deficiency, including severe malabsorption, food cobalamin malabsorption, pernicious anemia, bariatric surgery, and intestinal malabsorption. Vitamin B12 deficiency may lead to hematological disorders, including anemia, and to neurological disorders. Treatment includes vitamin B12 replacement and treatment of the underlying disease. 8 This case report will describe a rare manifestation of LS presenting with a profound vitamin B12 deficiency.
Case Description

A 57-year-old male presented with complaints of migratory joint pain, mainly in his lower limbs. His medical history was positive for vitamin B12 deficiency, gastrointestinal (GI) reflux disease, and fatty liver. Initial evaluation demonstrated an elevated erythrocyte sedimentation rate of 93 mm/hour; X-rays of the vertebrae were normal. He was started on prednisone 40 mg daily with improvement of his complaints. While tapering down prednisone doses, symptoms reoccurred. The patient reported peripheral edema, night sweats and weight loss, with no fever. He noticed a nodular and sensitive-to-touch lesion on his palms and bilateral non-purulent conjunctivitis. Physical examination revealed bilateral leg edema and erythemic nodules on the palms of his hands. He had mild normochromic normocytic anemia with a hemoglobin level of 13.1 g/dL and a normal red blood cell distribution width. C-reactive protein was elevated at 14.5 mg/dL. Inquiry for anemia demonstrated a profound vitamin B12 deficiency, with levels of 75 pmol/L; folic acid and TSH were within the normal range. There was no evidence of iron deficiency: iron levels were 79 µg/dL, transferrin levels were 272 mg/dL, and ferritin levels were 188 ng/mL. Intrinsic factor antibodies and anti-parietal cell antibodies were negative, and the rheumatic panel was negative. Blood cultures and infectious panel were negative. Chest X-ray showed bilateral hilar adenopathy. Ultrasound of the swollen ankle did not demonstrate a collection or abscesses; abdominal ultrasound showed hepatic steatosis with no enlargement of liver or spleen. A fundus examination showed no signs of uveitis. A skin biopsy from the erythemic nodules on his palms demonstrated no signs of vasculitis, with nonspecific inflammatory infiltrates. A positron emission tomography-computed tomography (PET-CT) demonstrated bilateral hilar adenopathy with pathologic fluorodeoxyglucose (FDG) uptake, as well as pathologic FDG uptake at pulmonary nodules. Another finding was thickening throughout the ascending colon. Bronchoscopy with endobronchial ultrasound was performed; the pathology report indicated lymphatic tissue and non-necrotizing granulomas consistent with sarcoidosis.

Discussion

This case is an example of a rare manifestation of Lofgren's syndrome (LS), with a concurrent profound vitamin B12 deficiency. Limited data are available regarding nutritional deficiencies and their association with LS or sarcoidosis. This patient presented a profound and resistant vitamin B12 deficiency, even though his diet was not vegan or vegetarian. His vitamin B12 levels throughout the years ranged from 111 to 250 pmol/L alternately, without a significant response to supplemental therapy. In this current hospitalization, vitamin B12 levels were measured as low as 75 pmol/L, the lowest value recorded throughout his entire medical history. These coinciding events raise the suspicion that there is a relation between vitamin B12 deficiency and the diagnosis of LS. Several mechanisms are suggested to be attributed to the relationship between vitamin B12 deficiency and LS. The first is sarcoid GI involvement. The process of absorbing vitamin B12 involves a few sites in the GI tract: vitamin B12 is bound first to haptocorrin in the saliva; then, in the stomach, haptocorrin is dissolved and the cobalamin is bound to intrinsic factor (IF) until this complex reaches the terminal ileum, where it is absorbed into the blood stream. 9
Any disruption of the gastric or terminal ileum mucosa could potentially cause vitamin B12 deficiency. Sarcoid in the GI tract is relatively rare; however, gastric sarcoid is considered the most frequent form of sarcoid in the GI tract. It usually presents with epigastric pain, nausea, vomiting and weight loss, although 10% of patients can be asymptomatic. Microscopically, the most frequent lesion observed is diffuse infiltration of the gastric wall, 5 with the assumption that such infiltrations will reduce levels of IF and therefore cause vitamin B12 malabsorption. There are a few case reports describing gastric sarcoidosis with concomitant severe vitamin B12 deficiency. 10,11 In both cases, the patients presented with a megaloblastic anemia, unlike this patient, who presented with normocytic anemia, which can be explained by a combined etiology, such as vitamin B12 deficiency with a concurrent chronic disease. 12 Ileal sarcoidosis is infrequent, usually presents in the terminal ileum, and occurs with concomitant sarcoid gastric involvement. 5 A case report published in 1992 described a woman with a persistent folate deficiency and proven ileal sarcoidosis preceding systemic manifestations of sarcoidosis by years. 13

A second proposed mechanism is pernicious anemia. There are several case reports describing sarcoidosis with vitamin B12 deficiency, atrophic gastritis, and positive anti-parietal cell antibodies. 14,15 Although sarcoidosis in combination with other immune-mediated disorders exists, 16 pernicious anemia with sarcoidosis is not well documented, and it is not more prevalent in patients with sarcoidosis than in the general population. 14,17 Also, pernicious anemia in this patient is less likely, as the relevant antibodies were negative.

A third proposed mechanism is food-cobalamin malabsorption (FCM), which is characterized by the inability to release cobalamin from food or intestinal transport proteins. This syndrome is more prevalent among the elderly and is usually related to atrophic gastritis. 18 There are few reported cases of FCM in relation to Sjogren syndrome. 19 Another study reported vitamin B12 deficiency in patients with systemic lupus erythematosus with a proposed underlying cause of FCM. 20 The relation between FCM and autoimmune diseases is unknown. However, we assume that the same mechanism may possibly be related to LS.

We suggest the possibility that this patient has sarcoidosis involvement of the GI tract, and more specifically gastric involvement. Vitamin B12 deficiency might have been the first and early manifestation of LS, although extrapulmonary involvement in LS occurs in approximately 12% of cases. 7 To confirm gastric sarcoid, endoscopic investigation should be performed, with biopsies demonstrating typical pathological findings of sarcoidosis. 10 Of note, PET-CT demonstrated an incidental finding of thickening of the ascending colon, which will require further investigation with a colonoscopy to rule out a rare colonic involvement. 5
After the initiation of steroid and vitamin B12 replacement therapy, vitamin B12 levels were >1476 pmol/L. It can be debated whether the replacement therapy of vitamin B12 alone elevated vitamin B12 levels, or whether steroid therapy improved the disease status and therefore also the vitamin B12 levels. However, supplemental treatments did not resolve the B12 deficiency throughout the years, until steroid treatment was initiated. Furthermore, during follow-up and initiation of tapering down of the steroid regimen, the patient relapsed and developed vitamin B12 deficiency while taking supplemental vitamin B12.

Although the patient has no GI symptoms, it is possible that vitamin B12 deficiency is the sole manifestation of GI sarcoidosis. Additional tests are warranted and were recommended to the patient to diagnose GI involvement, mainly gastroscopy and colonoscopy with biopsies.

In conclusion, sarcoid involvement of the GI tract is suggested to be the cause of vitamin B12 deficiency in this patient, responsive mainly to steroid treatment with partial response to supplemental vitamin B12. An informed consent was obtained from the patient for the publication of their case details and any accompanying images.
2023-10-12T15:05:00.503Z
2023-10-01T00:00:00.000
{ "year": 2023, "sha1": "b0f6bb280990a04253920d913fac9365235943b3", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=93372", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "02bc57d9377f3c0a39285ec1ae3812014db081e4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
260164729
pes2o/s2orc
v3-fos-license
Limiting Light Dark Matter with Luminous Hadronic Loops

Dark matter is typically assumed not to couple to the photon at tree level. While annihilation to photons through quark loops is often considered in indirect detection searches, such loop-level effects are usually neglected in direct detection, as they are typically subdominant to tree-level dark matter-nucleus scattering. However, when dark matter is lighter than around 100 MeV, it carries so little momentum that it is difficult to detect with nuclear recoils at all. We show that loops of low-energy hadronic states can generate an effective dark matter-photon coupling, and thus lead to scattering with electrons even in the absence of tree-level dark matter-electron scattering. For light mediators, this leads to an effective fractional electric charge which may be very strongly constrained by astrophysical observations. Current and upcoming searches for dark matter-electron scattering can thus set limits on dark matter-proton interactions down to 1 MeV and below.

I. INTRODUCTION

Although dark matter (DM) makes up most of the mass in the Universe, how (or even whether) it interacts non-gravitationally is completely unknown [1-3]. Direct detection bounds on DM, as well as astrophysical and cosmological constraints, are often presented in a "model-independent" way, e.g. as limits on dark matter's nonrelativistic scattering cross section with protons or electrons, rather than limits on a larger set of model parameters. In such a framework, limits on dark matter's interactions with different Standard Model particles are often treated completely independently. There are of course exceptions (DM-proton and DM-neutron cross sections are often assumed to be identical to avoid isospin violation), but, for example, limits on DM-electron scattering are often set assuming that DM-nucleon scattering is negligible, and vice versa.

In this work, we explicitly compute the DM-photon interaction induced by DM couplings to hadronic states at low energy, where the relevant degrees of freedom are not quarks, but mesons and baryons. This is in contrast with much of the literature on kinetically mixed dark photons, which treats the mixing between the photon and dark photon as a phenomenological parameter that is generated at much higher energy scales. We show that such loop-level couplings can produce detectable event rates in direct detection experiments, and/or induce a non-negligible effective charge for the DM, in a wide range of sub-GeV parameter space. We thus set new limits on sub-GeV dark matter's coupling to protons.

This paper is organized as follows. In Section II, we review the ideas of Ref. [4] and discuss the types of interaction that yield nonzero results. We then introduce both our model and the effective Lagrangians used to describe dark matter's interactions with hadronic states, and compute the induced interactions with photons and electrons. In Section III, we compute new constraints on sub-GeV DM. In Section IV, we discuss the implications of our results. Detailed descriptions of the calculations performed in this work can be found in Appendix A.

Suppose that DM is a fermion, and interacts with a particular charged Standard Model fermion (in this case, a proton) via a new vector mediator that we will refer to as Z' (the tree-level diagram is shown in Fig. 1(a)). A proton loop then induces a mixing between the Z' and the photon, as well as a DM-electron interaction, as shown in Fig. 1(b).
1 (b). If Z is massless, the result is an effective charge for the DM, as derived by Ref. [4]. Even if the mediator is massive, a mixing with the photon is still generated, but the different momentum dependence means that the DM no longer behaves as a truly millicharged particle.

The proton is, of course, not the only hadronic state that can be included in loops like this. In fact, the more typical approach to such hadronic corrections would be to use loops of pions (Fig. 1 (c) and (d)), whose masses are below the QCD scale and for which the framework of chiral effective field theory (ChEFT) can readily be applied. The use of pion loops in this context is strikingly similar to their role in hadronic vacuum polarization, notably in the context of muon g − 2 (see Ref. [41] for a detailed review).

In this work, we include both proton and meson (specifically pion and kaon) loops in order to compute the induced mixing between the Z and the photon. We take inspiration from Ref. [42], which showed that nucleons could be included in the framework of chiral perturbation theory while still preserving chiral power counting.

Throughout this work, we will consider only the case of a vector mediator, because for a scalar or axial vector, the diagrams shown in Fig. 1 (b)-(d) vanish (see Ref. [24]). An analogous interaction between DM and electrons in the scalar case can be induced at the 2-loop level [24], or by instead mixing with the Higgs at one loop. However, these options are suppressed compared to the one-loop mixing with the photon, so we do not consider them.

Next, we will estimate the effective DM-electron scattering cross section that results from couplings through hadronic loops. We focus on DM masses in the range 1 MeV ≲ m_χ ≲ 100 MeV, which is difficult to probe using nuclear recoil searches, but where electron recoil searches have set some bounds [43][44][45][46][47][48][49][50]. At the corresponding energies (T_χ ≲ 1 keV), quarks are confined into baryons and light mesons, and their behavior is best described using ChEFT. Hadronic loops are dominated by pions and kaons, the light pseudoscalar mesons. Proton loops will contribute as well, and may dominate depending on the underlying theory. We start with a description of the DM-quark interaction and use this to build a consistent description of the DM-proton and DM-meson couplings. These will then be used to estimate and compare the tree-level DM-proton cross section with the 1-loop DM-electron cross section.

Interactions between a dark fermion and quarks through a vector mediator can be described by

L ⊃ Z_µ ( g_χ χ̄ γ^µ χ + Σ_q α_q q̄ γ^µ q ),   (1)

where χ is the massive DM particle, q are the quarks, α_q is the coupling of each quark species to Z_µ, and g_χ is the coupling between the Z and the DM. We may define a resulting effective proton interaction [51]:

L ⊃ (2α_u + α_d) Z_µ p̄ γ^µ p.   (2)

The meson interaction terms can be derived from the ChEFT Lagrangian. The relevant lowest-order interaction term in the ChEFT Lagrangian is [52]

L ⊃ (F²/4) Tr[ D_µ U (D^µ U)† ],   (3)

where U = e^{iφ/F} contains the light meson octet φ and F is the pion decay constant. Interactions with external vector fields (namely the photon or Z) are captured in the derivative terms,

D_µ U = ∂_µ U − i v_µ U + i U v_µ,

where v_µ = Z_µ diag(α_u, α_d, α_s) is a matrix which represents an external vector interaction emerging from interactions of the form depicted in (1). We can include electromagnetic interactions here as well by including a term of the form eA_µ diag(2/3, −1/3, −1/3). Expanding out the chiral Lagrangian (3) gives the following interaction terms between light mesons, photons, and Z:

L ⊃ i(α_u − α_d) Z_µ (π⁺ ∂^µ π⁻ − π⁻ ∂^µ π⁺) + i(α_u − α_s) Z_µ (K⁺ ∂^µ K⁻ − K⁻ ∂^µ K⁺) + (analogous photon couplings and scalar-QED contact terms).

Using these interaction terms we calculate the χp scattering cross section at tree level and the χe scattering cross section at loop level based on the diagrams shown in Fig. 1. These calculations are shown in more detail in Appendix A. At tree level and low momentum exchange, the χp cross section scales as g_χ² (2α_u + α_d)² / m_Z⁴, where m_Z is the Z mass. The loop diagrams together give a χe cross section governed by a loop coefficient c_loop. The first term in c_loop comes from the proton loop as shown in Fig. 1 (b), the second term comes from a pion loop, and the third from the kaon loop. The meson loop terms get contributions from diagrams of the form depicted in Fig. 1 (c) and (d). Loop divergences are contained in the log terms, which depend on the mass of the particles traveling through the loops and a cutoff term that is logarithmic in µ. We follow Ref. [42] and set µ = m_p. We use m_π = 140 MeV, m_K = 494 MeV [53], and m_p = 938 MeV. γ_E = 0.577 is the Euler-Mascheroni constant.

In the case of a heavy mediator, one can integrate over scattering angles to relate the total cross sections; the resulting ratio σ_χe/σ_χp is suppressed by the factor e²/(2304π⁴) ≈ 4 × 10⁻⁷. While this suppression seems substantial, the effective cross sections are large enough that planned and currently running electron recoil detectors should be able to observe or rule out DM that is difficult to observe using traditional nuclear recoil detectors.

For a light mediator, the integral of dσ_χe/dΩ and dσ_χp/dΩ over the scattering angle diverges. So we instead report the ratio between the reference cross sections σ̄_χp and σ̄_χe, defined from the differential cross section evaluated at a fixed momentum transfer. Here q represents the momentum exchanged between the scattered particles, and q_ref is a reference momentum, usually taken to be ∼ αm_e [13]. σ̄ is Lorentz-invariant and is typically used when discussing constraints on light-mediator scattering. The resulting relation between the light-mediator reference cross sections takes the same form as the heavy-mediator relation above. Hence, the ratio between the proton and electron cross sections does not depend on the mass of the vector mediator. As the terms shared between the proton and electron scattering diagrams divide out, the ratio between the proton and electron cross sections also does not depend on the spin of the DM, or on the Lorentz structure of its interaction with the vector mediator.

III. RESULTS

In the direct detection literature, the interactions between DM and individual quarks that generate the DM-nucleon cross section are typically left unspecified. Because we also include meson loops, recasting these limits requires a concrete choice for the couplings to individual quarks. In this Section, we report our results for the case α_u = −α_d and α_s = 0, i.e. the case where the Z couples to isospin. Results for an alternative case, α_u = α_d = α_s, are shown in Appendix B. The limits between these two cases differ by a factor of ∼ 8. We do not expect different choices of couplings to weaken the limits much beyond this range without fine tuning.

For our constraints on the DM-proton cross section using electron-recoil searches, we focus on SENSEI [46], which has reported some of the strongest limits on DM-electron scattering while being only ∼100 meters underground (just under 300 meters water equivalent, or m.w.e.), presenting a lower overburden than most direct detection experiments. Our limits result from rescaling the reported limits of Ref. [46] by the ratio of the DM-proton and loop-induced DM-electron cross sections. In the same way, we also recast projections for the DAMIC-M experiment [54][55][56], which has recently released its first results [50], as well as the upcoming Oscura experiment [57].
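As a quick sanity check on the size of this suppression, the numerical prefactor quoted above can be reproduced in a few lines. This is a minimal Python sketch; it only evaluates e²/(2304π⁴) in natural (Heaviside-Lorentz) units, where e² = 4πα:

```python
import math

alpha = 1 / 137.036              # fine-structure constant
e2 = 4 * math.pi * alpha         # e^2 in Heaviside-Lorentz natural units
prefactor = e2 / (2304 * math.pi ** 4)

print(f"e^2 / (2304 pi^4) = {prefactor:.2e}")  # ~4.1e-07, matching the quoted ~4e-7
```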
Figure 2 shows limits and projected sensitivities based on effective loop interactions, for a heavy Z, compared to existing limits from direct detection and cosmology. Our limit constrains DM masses from about 1 MeV to 30 MeV, and at the lowest masses is comparable to the strongest existing Migdal effect limit, from SENSEI [46,58]. It is also competitive with the strongest cosmological bounds, which come from Lyman-alpha observations [59]. The projected sensitivities reach cross sections of 10⁻³⁶ cm² for masses of a few MeV, orders of magnitude better than existing Migdal effect searches and Lyman-alpha bounds. For comparison, Ref. [57] shows projections for future Migdal effect searches from Oscura.

FIG. 2. Limits on the dark matter-nucleon cross section due to their loop-induced coupling to electrons, for interactions via a heavy vector mediator with α_u = −α_d and α_s = 0. Our recasting of constraints from SENSEI [46] is shown in red, while the regions outlined in red dashed and dotted lines will be accessible to DAMIC-M [54][55][56] and Oscura [57] respectively. Existing detector constraints from Migdal effect searches at SENSEI [46,58], XENON10/1T ([60], as shown in Ref. [58]), CDEX [61,62] and EDELWEISS [63,64] are shown in gray, while Lyman-alpha constraints [59] are shown in blue.

At the large cross sections we consider, DM may be stopped in the Earth before reaching the detector. Even though the signal we use to set limits is scattering with electrons, attenuation will be dominated by scattering with nuclei due to the much larger cross section. To account for attenuation in the Earth, we use the ceilings computed for SENSEI in Ref. [13]. These ceiling calculations are also dominated by nuclear scattering, but were computed in a dark photon model, so we rescale them by a factor of 4 to account for the scattering with neutrons (assuming typical spin-independent scattering). For DAMIC-M and Oscura, we use the same ceiling, lowered by factors of 17 and 20, respectively, to simulate the overburdens at Laboratoire Souterrain de Modane (4800 m.w.e.) [65] and SNOLAB (6000 m.w.e.) [66].

Figure 3 shows our limits and projected sensitivities for a light Z, compared to existing limits from direct detection (cosmological bounds exist on scattering via light mediators, e.g. [67,68], but only at higher cross section). As mentioned above, in the case of a massless mediator, the total cross section diverges, and limits are typically reported in terms of a reference cross section σ̄. We follow the parametrization of Ref. [13] (see also Appendix A for additional discussion).

FIG. 3. Limits on the dark matter-nucleon cross section due to their loop-induced coupling to electrons, for interactions via a light vector mediator with α_u = −α_d and α_s = 0. Our results for SENSEI [46] are highlighted in red, and the expected reach of DAMIC-M [54][55][56] and Oscura [57] are outlined by the red dashed and dotted lines respectively. Constraints from Migdal effect searches at SENSEI [46,58] and XENON10/1T ([60], as shown in Ref. [58]) are shown in gray.

Also in the light Z case, the scattering is typically softer, making attenuation less of an issue, so we can constrain a much wider range of parameter space. At large DM masses, Migdal effect bounds from SENSEI and XENON10/XENON1T are stronger than our bounds. However, our limits are stronger than existing Migdal effect limits for masses up to 5 MeV, and extend down well below 1 MeV. In our projections, we show that DAMIC-M and Oscura can again probe cross sections in the range 10⁻³⁴-10⁻³⁶ cm², competitive with the Migdal effect projections from Ref. [57] and surpassing them for masses below ∼10 MeV if one were to extrapolate those projections to lower mass. We again show direct detection constraints from SENSEI, XENON10, and XENON1T for comparison. It deserves mention that models of new light Z mediators coupled to baryon currents will also induce SM anomalies that can be constrained through their contribution to rare meson decays [69,70].

Finally, we note that an effectively massless Z produces an effective fractional electric charge (or "millicharge") for the DM. We can compute the effective charge induced by loops of hadronic states, and recast limits on millicharged DM as limits on DM-proton interactions via an effectively massless vector. We report our results in terms of ε, the DM charge in units of the electron charge, i.e. q_DM = εe. Figure 4 shows, as a color scale, the DM charge corresponding to a given m_χ and σ̄_nχ. In gray we superimpose the same Migdal effect limits shown in Fig. 3. In addition, we show two astrophysical bounds on millicharged DM, which are relevant in this parameter space as a result of the induced DM charge. First, Ref. [71] argued that fractionally charged DM interacting with Galactic magnetic fields in the Milky Way would extract angular momentum from the Milky Way disk, spinning down the disk over the course of gigayears. Although they report an order-of-magnitude uncertainty on their limit, Fig. 4 covers more than 10 orders of magnitude in ε. Taking this uncertainty into account, these bounds still far supersede those set by the tree-level interactions. Second, Ref. [72] considered millicharged DM moving in galaxy clusters under the influence of cluster magnetic fields, and argued that if the DM charge were too large, magnetic fields would substantially alter the DM density profile. This results in another strong bound on the DM charge, also shown in Fig. 4.

Other limits on millicharged DM may not apply, or may need to be considered more carefully in this scenario. For example, supernova cooling constraints on millicharged particles [73] do not apply here because the proton coupling is large enough to trap the DM within the proto-neutron star. Similarly, the argument that millicharged particles would be evacuated from the Galaxy by supernovae, put forward by Ref. [74], depends on the dark matter not scattering too frequently with Standard Model particles, an assumption that may be violated in at least some of the parameter space we consider. For this reason, and because the corresponding limit is weaker than the other astrophysical bounds we show, we do not plot the limit from Ref. [74]. We also note that a specific model of DM that has a loop-induced effective electric charge through its hadronic couplings will interact with protons differently than a typical millicharged particle. This could even strengthen the astrophysical bounds shown in Fig. 4, by, for example, enhancing the amount of angular momentum extracted from the Milky Way disk.
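The recasting procedure used throughout this section amounts to simple rescalings: divide a published DM-electron limit by the loop-induced ratio σ_χe/σ_χp, and divide the SENSEI ceiling by the overburden factors quoted above (17 for DAMIC-M, 20 for Oscura). A schematic Python sketch; the cross-section numbers below are placeholders for illustration, not values from the paper:

```python
def recast_proton_limit(sigma_e_limit_cm2: float, ratio_e_over_p: float) -> float:
    """DM-proton limit implied by an electron-recoil limit via loop-induced scattering."""
    return sigma_e_limit_cm2 / ratio_e_over_p

def deeper_site_ceiling(sensei_ceiling_cm2: float, overburden_factor: float) -> float:
    """Attenuation ceiling at a deeper site, lowered by the quoted overburden factor."""
    return sensei_ceiling_cm2 / overburden_factor

# Illustrative placeholder numbers only.
sigma_e_limit = 1e-40                               # hypothetical sigma_chi-e limit [cm^2]
ratio = 4e-7                                        # order of the loop suppression factor
print(recast_proton_limit(sigma_e_limit, ratio))    # implied sigma_chi-p limit
print(deeper_site_ceiling(1e-30, 17.0))             # DAMIC-M-like ceiling
print(deeper_site_ceiling(1e-30, 20.0))             # Oscura-like ceiling
```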
IV. CONCLUSIONS

We have presented a one-loop calculation of the low-energy DM-electron cross section for DM which interacts exclusively with quarks at tree level. This interaction should generically emerge in a wide range of DM models that interact with quarks through a vector mediator. This has allowed us to derive novel constraints on the DM-proton cross section using existing constraints from SENSEI data. We have shown that currently running and upcoming electron recoil detectors, DAMIC-M and Oscura, should be able to probe DM-proton cross sections that may be beyond the reach of nuclear recoil detectors. Finally, we have demonstrated that DM that interacts with quarks through a light mediator at tree level has an effective electric charge, which can be used to recast astrophysical and cosmological constraints on the DM-electron cross section.

Standard Model loop interactions can be an effective tool in exploring DM behavior, and are an inevitable but often-ignored part of any DM theory. While we focused on quark scattering interactions through a vector mediator in this work, we note that loop interactions similar to those described here may be effective at bridging different DM-Standard Model interactions in annihilation processes and with mediators not explored here.

Appendix A: Cross Section Calculation

Effective Lagrangians

We calculate the dark matter (DM) proton cross section based on tree-level interactions of the form shown in Fig. 5, and the DM-electron scattering cross sections resulting from one-loop diagrams of the form shown in Fig. 6. These calculations are all done in the center of mass frame, as the total cross sections are Lorentz invariant. We start with the proton tree-level scattering.

FIG. 6. Dark matter-electron interaction induced by proton and pion loops.

We start the calculation by considering an underlying DM-quark interaction of the form

L ⊃ Z_µ ( g_χ χ̄ γ^µ χ + Σ_q α_q q̄ γ^µ q ),   (A1)

where χ is the massive DM particle, q are the quarks, α_q is the coupling of each quark species to Z_µ, and g_χ is the coupling of χ to its vector mediator. In the low-energy limit quarks are confined into light mesons and baryons. Their behavior is best described by Chiral Effective Field Theory (ChEFT) with baryons. At low energies, (A1) gives an effective coupling to protons of the form [75]

L ⊃ (2α_u + α_d) Z_µ p̄ γ^µ p.

The lowest-order terms in the ChEFT Lagrangian that will contribute to the meson-Z interaction are [52]

L ⊃ (F²/4) Tr[ D_µ U (D^µ U)† ],

where U = e^{iφ/F} contains the light meson octet φ. In evaluating the proton loop, we have dropped q^µ q^ν terms, as these do not contribute to our final cross section. Using the relation

1/(AB) = ∫₀¹ dx 1/[A + (B − A)x]²,

shifting k^µ to k^µ + q^µ(1 − x), and dropping the q^µ q^ν terms and terms odd in k from the numerator (these will go to zero or not contribute to the final cross section) gives an integral that diverges in 4 dimensions. We manage this divergence using dimensional regularization, by integrating over d dimensions and then taking the limit d → 4 − ε. To do these integrals, we take k^µ k^ν → (1/d) k² g^{µν}. In d dimensions the integral takes the form of Eq. (A12). Using standard loop-integral formulas, Eq. (A13), we complete the k integral in Eq. (A12); this gives Eq. (A14). If we take the limit p ≪ m_P, then the integral over x is simple. The result diverges in the ε → 0 limit; expanding around ε = 0 and keeping only the finite terms gives the proton loop contribution to the scattering diagram. Here we define µ as a cutoff scale for low-energy ChEFT with baryons. Following Ref. [42] we take µ = m_P.
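The Feynman-parameter identity invoked above is easy to verify symbolically. A small sympy check, included here only as a verification aid, not as part of the original derivation:

```python
import sympy as sp

A, B, x = sp.symbols("A B x", positive=True)
integrand = 1 / (A + (B - A) * x) ** 2

# Antiderivative F(x) = -1 / ((B - A)(A + (B - A)x)); check F' = integrand, then
# evaluate F(1) - F(0), which should equal 1/(A*B), i.e. the identity
# 1/(AB) = Integral_0^1 dx / (A + (B - A)x)^2.
F = -1 / ((B - A) * (A + (B - A) * x))
assert sp.simplify(sp.diff(F, x) - integrand) == 0

result = sp.simplify(F.subs(x, 1) - F.subs(x, 0))
print(result)   # -> 1/(A*B)
```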
b. Light Meson Loops

Both pions and kaons can contribute to DM-electron loop-level scattering. There are two different loop diagrams for each light meson. The general form of these diagrams is displayed in Fig. 6 (b) and (c). Below, we calculate a general expression for these loop contributions, and then add in the coupling and mass values for the specific mesons. These calculations modify the vacuum polarization derivations for scalar QED shown in Ref. [76]. Using the interaction terms of the Lagrangian shown in Eq. (3), we obtain the amplitudes for the loops shown in Fig. 6 (b) and Fig. 1 (c) respectively. We use g as a stand-in for the meson-Z coupling and m to represent the meson mass. Summing these together and then performing the same integration, approximation and simplification steps outlined in Section A 3 a gives the total loop contribution; the specific loop contributions from pions and kaons then follow by substituting the corresponding couplings and masses. Combining the loop contributions calculated in (A17) and (A20) gives the overall loop contribution c_loop at low energies.

For a heavy mediator, we can perform the integration explicitly to relate the total cross sections:

σ_χe = σ_χp (e²/2304π⁴) (µ²_χe/µ²_χp) [c_loop/(2α_u + α_d)]²,

where µ_ab = m_a m_b/(m_a + m_b) represents the reduced mass of two particles with masses m_a and m_b. In the case of a light mediator, the total cross section diverges. Literature involving light mediators often parameterizes the differential cross section in terms of a reference cross section σ̄, where q_ref is a reference momentum, usually taken to be ∼ αm_e [13]. We can rewrite our differential cross sections in terms of the momentum transfer q, to match the direct detection literature, by noting that q² = 2µ²_χT v²(1 − cos θ), where T denotes the target of the scattering, either proton or electron. We can thus write the differential cross sections in terms of q and, using the parametrization above, relate the reference cross sections:

σ̄_χe = σ̄_χp (e²/2304π⁴) (µ²_χe/µ²_χp) [c_loop/(2α_u + α_d)]².

As it turns out, the relation between the reference cross sections is exactly the relation between the total cross sections found in the heavy mediator case.

FIG. 1. Feynman diagrams utilized in this work. The tree-level diagram (a) captures DM-proton scattering. The loop-level diagrams (b)-(d) show DM-electron scattering that results from the DM-proton interaction.

FIG. 4. Effective electric charge resulting from hadronic loop interactions assuming α_u = −α_d and α_s = 0. The white lines are astrophysical constraints on millicharged DM: above the solid line, cluster magnetic fields would noticeably alter the density profile of galaxy clusters [72], while above the dashed line, millicharged DM would extract too much angular momentum from the Milky Way disk [71]. Migdal effect constraints from SENSEI [46,58] and XENON10/XENON1T ([60], as shown in Ref. [58]) are shown in gray.

FIG. 5. Tree-level Feynman diagram for dark matter-proton scattering.

FIG. 7. Limits on DM with a vector mediator and α_u = α_d = α_s. Our recasting of constraints from SENSEI [46] is shown in red, while the regions outlined in red dashed and dotted lines will be accessible to DAMIC-M [54][55][56] and Oscura [57] respectively. Existing detector constraints from Migdal effect searches at SENSEI [46,58], XENON10/1T ([60], as shown in Ref. [58]), CDEX [61,62] and EDELWEISS [63,64] are shown in gray, while Lyman-alpha constraints [59] are shown in blue.
2023-07-27T01:22:25.923Z
2023-07-25T00:00:00.000
{ "year": 2023, "sha1": "125dc0a61d565c16a88750c17cefa8ac87a4218c", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevLett.132.051001", "oa_status": "HYBRID", "pdf_src": "ArXiv", "pdf_hash": "125dc0a61d565c16a88750c17cefa8ac87a4218c", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
225686809
pes2o/s2orc
v3-fos-license
The development of a socio-economic model to promote women’s empowerment initiatives in the renewable energy sector of South Africa This study investigates the main contributors that can positively influence the socio-economic empowerment of women in the renewable energy sector in the Republic of South Africa, and recommends new and innovative approaches to mainstream gender in the sector. Empirical evidence showed that ethical leadership positively influences good governance and successful women’s empowerment. The results also indicated that social investment and broad-based black economic empowerment positively influence successful women’s empowerment. Finally, the results indicated that sustainable programmes are a positive contributing factor to good governance. However, the respondents did not consider stakeholder engagement statistically significant to good governance or successful women’s empowerment. This study also has the potential to contribute to the improvement of impoverished communities in South Africa and elsewhere. Introduction There is a complexity associated with gender equality and women's empowerment in the Republic of South Africa. This complexity is compounded due to the multi-dimensional nature of the problem, and the stakeholders that are spread across sectors like the public organs of state, private sector, organised labour, and civil society. The challenge is therefore to identify and quantify the drivers of success needed to achieve sustainable socio-economic empowerment of women at all levels of society. This challenge was addressed by focusing on the renewable energy value-chain. PricewaterhouseCoopers (2016) conducted a study that researched 2 500 of the most significant global companies. The findings indicated that, of the 359 permanent or interim CEOs appointed in 2015 worldwide, only ten were female. At 2.8% of all new CEOs, this was the lowest rate since 2011 (McGregor, 2016). An International Business Report of 2015, which surveyed 5 520 businesses in 36 economies, revealed that of the 200 South African business executives surveyed, women only occupied 23% of the senior positions. The same report stated that 39% of businesses did not have any women in leadership positions (Kilian, 2016). The Johannesburg Stock Exchange recognises that during the past decade there has been little change in the proportion of women in senior positions of listed companies and has changed its listing requirements to encourage listed companies to disclose female representation. The disclosure would encourage companies to increase the nature and pace of gender transformation (Kilian, 2016). According to Eberhard et al. (2014), the South African Renewable Energy Independent Power Producer Procurement Programme (REIPPPP), a competitive tender process to facilitate private sector investment into grid-connected renewable energy (RE) generation, is one the most effective policy instruments to accelerate and sustain private investment within the renewable energy sector. Through the economic development funds generated by the independent power producers, South Africa has the potential to substantially contribute to mainstreaming gender and be a catalyst to transition women entrepreneurs to become owner-operators of their power generation facilities (Eberhard et al. 2014;Kilian, 2016;Pricewater-houseCoopers, 2016). 
Studies like those cited indicate that identifying the factors that influence the success of new and sustainable women's empowerment initiatives is complex and elusive because of the complex nature of mainstreaming gender in South Africa. The present study investigated the status of women in business at international, continental, and country levels. Although the focus was on the economic empowerment of women in the RE sector, it is equally applicable to the mining, manufacturing, and agricultural sectors. The intention of this study was, therefore, to provide a possible blueprint to mainstreaming gender and sustainable women's empowerment. The entry point was the body of knowledge that already existed within the United Nations, the World Bank, the World Economic Forum, the Global Environmental Facility, and other relevant institutions. The main contributors and variables that could positively influence the socio-economic empowerment of women initiatives in the RE sector were examined. Methodology This study can be described as theoretical and model-building, where the proposed theoretical model was supported by the collection of empirical data from various sources: • semi-structured interviews with policymakers, influencers of funding and implementation, and energy sector decision-makers; • structured interviews with gender experts; • researcher field notes; and project experience; and • publications on women's empowerment. The collected data was analysed and empirically tested using an advanced statistical technique called structural equation modelling (SEM) (Wothke, 2010). It was used to analyse simultaneous multiple independent relationships against the dependent variable: the perceived success of socio-economic empowerment initiatives of women in the RE sector in South Africa. The technique can be used in real-life situations using quantitative data gathering and analysis in a format compatible with the proposed theoretical research model (Hair et al. 2009). SEM also allows for both exploratory and confirmatory modelling, which means it is suited to both model testing and model development (Wothke, 2010). Several factors from the preliminary literature review informed the conceptual model, which was then empirically interrogated by utilising the SEM technique. The study was divided into three phases: data collection, analysis, and the extrapolation of the new hypothesis (Walwyn & Buys, 2014). The dependent variable, the socio-economic empowerment of women and the perceived success of women's empowerment in the sector was examined. The intervening variable articulated the importance of corporate governance. The independent vari-ables consisted of the need for the socio-economic development, the importance of stakeholder engagement, requirement of strategic acumen, the importance of strategic planning, need for broad-based black economic empowerment, benefit of executive leadership, advantage of change management, requirement for executive education, criticality of fund management, and the importance of corporate culture. The hypothesised inter-relationships presented in Figure 1 provide a graphical representation of the theoretical model, which presents the independent, intervening and dependent variables that positively influence the socio-economic empowerment of women in the sector. The process of operationalising the variables included defining the variables of interest operationally, and developing valid and reliable scales of measurement. 
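To make the hypothesised structure concrete, the structural part of such a model can be written down in lavaan-style syntax. The sketch below uses the Python semopy package and invented variable names; it illustrates how the independent-intervening-dependent structure in Figure 1 could be specified, and is not the authors' actual code (the study's estimation details follow later in the text):

```python
# A minimal semopy sketch of the hypothesised structure (illustrative only).
import pandas as pd
from semopy import Model

MODEL_DESC = """
CorporateGovernance ~ SocioEconomicDevelopment + StakeholderEngagement + StrategicAcumen + StrategicPlanning + BBBEE + ExecutiveLeadership + ChangeManagement + ExecutiveEducation + FundManagement + CorporateCulture
WomensEmpowerment ~ CorporateGovernance
"""

def fit_model(df: pd.DataFrame) -> pd.DataFrame:
    """df: one column per (composite) variable, one row per respondent."""
    model = Model(MODEL_DESC)
    model.fit(df)
    return model.inspect()   # path coefficients, standard errors, p-values

# usage: estimates = fit_model(survey_scores)
```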
Although the definitions do not guarantee accuracy, they assisted the researcher to comprehend an abstract construct using concrete variables (Babbie, 2010). These variables are discussed in detail below.

Socio-economic development
Development can be defined as a planned and comprehensive economic, social, cultural, and political process in a defined geographic area, one that is based on rights and ecologically oriented, and that aims to continually improve the well-being of the entire population (Fritz, 2001). Socio-economic development means professional intervention with the intention of improving socio-economic conditions on several levels: individual and group empowerment, conflict resolution, institution-building, community-building, nation-building, region-building, and world-building (Fritz, 2001). Socio-economic development was therefore considered as a process that seeks to identify the social and the economic needs within a community, as well as encouraging capacity building for more significant participation, improved planning processes, greater decision-making power and control of all contributing transformative actions (Eberhard et al. 2014; Fritz, 2001; Department of Trade and Industry, 2013; Presence, 2018).

Stakeholder engagement
As described by the International Finance Corporation (2007), consultation activities that are mainly driven by rules and requirements invariably become a one-time agenda of public meetings that revolve around environmental and social assessment processes. This type of consultation normally does not progress beyond the project planning phase, is seldom integrated into core business activities, and lacks monitoring and evaluation of the effectiveness of developing constructive working relationships. The emerging terminology used to describe stakeholder engagement denotes a broader, more inclusive, ongoing process of engagement between the company and interested and affected parties, spanning a range of activities and approaches over the entire life of a project. In the context of this study, stakeholder engagement refers to the internal and external stakeholders that have an influence on the economic empowerment of women within the RE sector. A stakeholder can be further defined as a party that has an interest in a company or the community, a party interested in the economic empowerment of women in these communities, a representative of government, or any other interested party. A company can have a diverse set of stakeholders, both internal and external. The internal stakeholders of the company are its investors, employees, service providers, and customers, while the external ones are civil society (the community), government, organised labour, and service providers of non-core initiatives (International Finance Corporation, 2007; Creamer, 2018; BusinessTech, 2018). Therefore, different stakeholders have different levels of engagement, decision-making power and influence at differing stages of the project lifecycle.

Strategic acumen
Business acumen can be defined as quickness in comprehending and deciding on a business situation, which implies having the business insight to extract the important information from that situation. Strategic acumen includes the ability to focus on the key strategic objectives and the experience to articulate the various scenarios for a solution (Grillo, 2015; Prince, 2008; Ragas, 2019).
Strategic acumen is intricately linked with leadership characteristics like authenticity, decisiveness, vision, humility, talent selection, coaching and feedback -characteristics that promote trust between leadership and their management team (Erb, 2008). Therefore, strategic acumen has been considered as a process in which people think about, consider and create the future for themselves and others. Strategic acumen includes the ability to develop practical plans and interventions that are aligned with the strategic objectives of the company within a socio-economic situation. Strategic acumen helps decision-makers to review policy issues, perform long-term planning, set goals and determine priorities, and identify potential risks and opportunities. Strategic planning Strategic planning was suggested by Kenny (2016) to be a process whereby companies determine their vision for the future, identify their goals and objectives, outline the activities that will achieve the stated objectives, are prepared to take calculated risks and, most importantly, put in place the monitoring and evaluation plan that will guide the achievement of their objectives. Strategic planning also takes into consideration the human, financial and any additional resource requirements to achieve the strategic direction. Broad-Based Black Economic Empowerment Broad-Based Black Economic Empowerment (B-BBEE) is a policy that was initiated by the South African government to address the gross inequality in South Africa by redistributing the wealth across as broad a spectrum of previously disadvantaged South Africans (Department of Trade and Industry, 2013). The B-BBEE Act (Act 53 of 2003) was premised on the fact that decades of systemic racism contributed to the socio-economic challenges that the country faces (Department of Trade and Industry, 2013). The B-BBEE codes provide a guidebook for the measurement of ownership, management control, employment, skills development, preferential procurement, enterprise development, socio-economic development and qualifying small enterprises. The economic development requirements of the REIPPPP in South Africa have been controversial, confusing, and expensive for bidders to respond to (Eberhard et al. 2014). Executive leaders that are visionary and have an appreciation of the business and reputational risks to their companies and shareholders tend to go beyond compliance when implementing socio-economic development initiatives (Department of Trade and Industry, 2013; Eberhard et al. 2014). This visionary approach by these executive leaders tends to avoid a compliance-driven tick-box exercise. Executive leadership There should be a comprehensive definition and practical ways to measure leadership performance, as leadership is about establishing an enduring and flexible architecture that facilitates performance and achieves the desired results. A good or bad strategy is based on measurable results, meaning that results are the measurement. The role of leaders is to provide precise definition and differentiation, eliminate the many leadership positions that are artificially created and not needed, and measure performance rather than potential (Drotter, 2003). Executive leadership can therefore be defined as the leadership of a company with the expertise to define the company's strategic objectives and to articulate practical ways to measure leadership performance. 
Executive leadership also includes the ability to eliminate unnecessary leadership positions, the experience to establish flexible and enduring systems to facilitate performance, and the depth to have a precise definition of the leadership role (Drotter, 2003;Engelbrecht, 2009;Global Environmental Facility, 2017;Zinn, 2017). Change management Extensive effort has been invested in developing methodologies and approaches to apply change management concepts to managing the development and implementation of projects and programmes. The primary focus being to prepare the parties impacted by these initiatives to embrace the change that results from a project's activities (Harrington and Voehl, 2015). Change management theories and philosophies have both an emotional and situational component. The methods for managing each of these change management components was based by Campbell (2008) on an eight-step model: developing earnestness, constructing a conducting team, formulating a vision, communicating for buy-in, facilitating action, creating short-term victories, doing no let-up, and making it stick. Change management has been defined as an approach to transition individuals, teams, and companies by preparing these parties to embrace the change that results from project implementation. It is intended to guide or significantly reshape a company or community by using change management methods to re-direct the use of human and financial resources, processes, or other operating modes. It spans several disciplines from behavioural and social sciences, information technology and business solutions (Campbell, 2008;Harrington and Voehl, 2015). Executive education Executive education refers to the importance of education of executives and decision-makers and how their perspectives influence their decision-making as it relates to the non-core business activities linked to the community and socio-economic development initiatives, as described by Marsh (2014) and Martín (2016). The premise is that socio-economic development is non-core, misunderstood, and creates a significant degree of discomfort for executive management. The consequence is that executive management invariably makes decisions based on their belief systems, their current frame of reference informed by their learnt background, and least-risk approach initiatives. This can, therefore, create a negative bias towards socio-economic development based on their personal paradigm. Fund management Fund management refers to the active financial management, investment, and disbursement of socio-economic development funds (European Commission, 2018;Humentum, 2018;Paramasivan and Subramanian, 2009). As per regulatory requirements, these funds should be invested in economic development activities to promote quality job creation, local manufacturing, investment in community development and black economic empowerment as defined under the REIPPPPs Implementation Agreement (Stands, 2015). There are several key factors that positively influence effective fund management activities: effective corporate governance, sound financial planning, proper budgeting, appropriate financial controls, efficient support systems, and unbiased investment in the target beneficiaries (European Commission, 2018;Humentum, 2018;Paramasivan and Subramanian, 2009). Corporate culture Corporate culture refers to the prevailing culture within a company towards the socio-economic development of the intended beneficiaries. 
This could be a proactive, long-term approach to socio-economic development; or there could be a very narrow, short-term compliance approach. Companies that recognise that they lack the necessary socio-economic development expertise in-house and engage outside advice, potentially deliver more impact through their socio-economic investment (Michael Watkins, 2013;Teasly, 2016;Harrington and Voehl, 2015). Corporate governance Good corporate governance can be considered, as by Engelbrecht (2009) and Hamilton (2003) as the processes to make and implement decisions, with an emphasis on the best possible process for making those decisions. Governance can apply to a company, government, community, governing body or any entity that manages an outcome. Specifically, the good corporate governance and the integrity of the IPP involves their executive leadership, and its shareholders, as it relates to decision-making regarding the obligatory socio-economic development initiatives in the RE sector. Women's empowerment Women's empowerment can be defined as a process of enhancing the capacity of women or groups of women to make choices and convert these choices into actions and outcomes (World Bank, 2007). During 2016 only 23% of the senior positions in South African corporations were occupied by women, and 39% of businesses did not have any women in leadership positions (Kilian, 2016). The socio-economic empowerment of women within the RE sector of South Africa also has a complex stakeholder domain, which includes government, the private sector, organised labour, and civil society. Further to this complexity, the most controversy and uncertainty generated amongst the bidders of the REIPPPP was the reliance on the economic development requirements (Eberhard et al. 2014). Applying the World Bank's defini-tion, women's empowerment in the RE sector can be considered as the processes of enhancing the capacity of individuals or groups, typically indigent groups, to make choices and implement decisions to acquire assets in the sector. The elements that underpin institutional reform are access to information, inclusion and participation, accountability, and local organisational capacity (Eberhard et al. 2014;Kilian, 2016;World Bank, 2007). Data analysis This section discusses the factor analysis results, in order to assess the discriminant validity of the model. For the exploratory factor analysis, a total of 243 cases were analysed, no cases were discarded. The pattern matrix is presented in Table 2. To interpret the relevant factors, the initial selection process was accompanied by a rotation of the retained factors (Abdi, 2003). Two primary types of rotation were utilised: orthogonal when there is no correlation between the new axes; and oblique when the new axes are not necessarily orthogonal (Fabrigar and Wegener, 2011). In other words, they were correlated. The rotations were performed in a subspace, i.e. the factor space and the new axes explained less variance than the proposed factors, which computed to be optimum. The component of variance described by the full total subspace after rotation was equivalent to it before rotation, with only changes to the partition of the variance (Kothari, 2004). All the variables in the theoretical model were assessed for discriminant validity using exploratory factor analysis that utilised the principal axis factoring extraction technique with direct quartimin oblique rotation specified as the rotation method. 
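The extraction step described here (retain factors whose eigenvalues exceed 1.0, the Kaiser criterion) is straightforward to reproduce. A minimal numpy sketch, independent of whichever statistical package the study actually used:

```python
import numpy as np

def kaiser_extraction(data: np.ndarray):
    """data: respondents x items matrix of survey scores.

    Returns the number of factors with eigenvalue > 1.0 and the share of
    variance they explain, from the eigenvalues of the correlation matrix.
    """
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending order
    retained = eigvals > 1.0
    explained = eigvals[retained].sum() / eigvals.sum()
    return retained.sum(), explained

# In the study, this step yielded 7 factors explaining 69.09% of the variance.
```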
The results of the factor analysis are presented in Table 1. Seven factors with eigenvalues greater than 1.0 were extracted, explaining 69.09% of the variance in the data. The number of factors to extract had not been specified initially, but the eigenvalues indicated that seven factors were to be extracted. This model was refined using an iterative process of deleting items that did not demonstrate adequate discriminant validity, loaded weakly, or cross-loaded on more than one factor, and re-executing the exploratory factor analysis until all the remaining items loaded to a significant extent (loadings ≥ 0.350) without cross-loadings.

Reformulation of the hypotheses
Several items in the theoretical model expected to measure socio-economic development, corporate culture, strategic acumen, executive education and fund management loaded collectively to form the new factor, social investment. The new factor, sustainable programmes, was formed because the items expected to measure strategic acumen, stakeholder engagement, strategic planning, and executive leadership loaded onto it. The variables socio-economic development, strategic acumen, strategic planning, change management, executive education, fund management, and corporate culture were removed from the proposed theoretical model, as the exploratory factor analysis process could not verify their discriminant validity. The revised hypotheses are presented in Table 3:

H1 Social investment: There is a positive relationship between the importance of sustained social investment and good governance.
H2 Stakeholder engagement: There is a positive relationship between the importance of ongoing stakeholder engagement and good governance.
H3 Sustainable programmes: There is a positive relationship between the importance of sustainable programmes and good governance.
H4 Broad-Based Black Economic Empowerment: There is a positive relationship between the importance of B-BBEE policy and good governance.
H5 Executive leadership: There is a positive relationship between the importance of executive leadership and good governance.
H6 Social investment: There is a positive relationship between the importance of sustained social investment and successful women's empowerment.
H7 Stakeholder engagement: There is a positive relationship between the importance of ongoing stakeholder engagement and successful women's empowerment.
H8 Sustainable programmes: There is a positive relationship between the importance of sustainable programmes and successful women's empowerment.
H9 Broad-Based Black Economic Empowerment: There is a positive relationship between the importance of B-BBEE policy and successful women's empowerment.
H10 Executive leadership: There is a positive relationship between the importance of executive leadership and successful women's empowerment.
H11 Good governance: There is a positive relationship between the importance of good governance and successful women's empowerment.

Assessment of goodness of fit
The following hypotheses were addressed: the null hypothesis (H0), that the data is normally distributed, and the alternative hypothesis (Ha), that the data is not normally distributed. The null and alternative hypotheses were respectively evaluated by assessing the skewness and the kurtosis of the data, while the chi-square (χ²) value was utilised to determine the associated p-value (Hair et al. 2009).
The results of the test for multivariate normality produced a chi-square of 5 891.166 and a resulting p-value of 0.000. Based on the chi-square value, it was inferred that the data was not multivariate normally distributed; consequently, the robust maximum likelihood method of estimation was employed for all the subsequent SEM analyses. Any t-value greater than 1.96 was considered statistically significant (p < 0.05) (Shah and Goldstein, 2006). The measurement model represented the degree of success with which the measured variables, i.e. the manifest variables, represented the latent constructs. It also represented the extent to which the structural model demonstrated how the constructs were associated with each other (Hair et al. 2009). The specification of the measurement model indicates conclusively the variables that measure the specified constructs in the structural model using fit indices (Shah and Goldstein, 2006). The assessment of the measurement model was then followed by a similar assessment of the structural model. To assess the extent to which the proposed model represented a satisfactory approximation of the data, several fit indices were considered. Table 4 presents the criteria for goodness-of-fit indices and Table 5 presents the fit indices for the measurement model. The Satorra-Bentler χ² value divided by the degrees of freedom was 1.806, an indicator of a good fit. The root mean square error of approximation (RMSEA) of 0.0577 was also regarded as a good fit. The fit indices all provided evidence of a model with a good fit; however, the null hypothesis (that the data fits the model perfectly) was rejected.

Structural model assessment
To evaluate the identification of the structural model, the size of the covariance matrix relative to the number of estimated coefficients is usually of concern (Hair et al. 2009). The next step was, therefore, to evaluate the goodness-of-fit for the entire model. It was found that the data was not multivariate normally distributed, and therefore the robust maximum likelihood estimation method was utilised. SEM was used to empirically assess the effectiveness of the relationships between the latent variables in the proposed theoretical model, as opposed to determining a well-fitting model (Hair et al. 2009; Shah and Goldstein, 2006). In the case of the non-normal distribution of the data, the adjusted goodness-of-fit index and the goodness-of-fit index were not used to evaluate the model fit (Hair et al. 2009; Shah and Goldstein, 2006). This approach signifies that the goal of the statistical analysis was focused on measuring relationships instead of pursuing a good model fit (Hair et al. 2009; Shah and Goldstein, 2006). The fit indices for the structural model are presented in Table 6. The Satorra-Bentler χ² divided by the degrees of freedom ratio was 1.806. An acceptable value is lower than 2 and can be an indicator of a good fit (Hair et al. 2009). The RMSEA of 0.0577 indicated a comparatively close fit, whereas the upper limit of the 90% confidence interval for RMSEA of 0.0662 was less than 0.08, and therefore the fit indices provided proof of a model with a good fit (Schumacker and Lomax, 2004). Consequently, the null hypothesis that the data fits the model perfectly was rejected. Although the model does not fit the data perfectly, there was a reasonable fit.
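The decision rules used in this section reduce to a few numeric thresholds: a Satorra-Bentler χ²/df ratio below 2, an RMSEA (and its 90% upper bound) below 0.08, and |t| > 1.96 for significance at p < 0.05. A small Python helper that encodes them, using the values reported above as the example:

```python
def fit_checks(chi2_df_ratio: float, rmsea: float, rmsea_upper90: float) -> dict:
    """Goodness-of-fit screening rules as described in the text."""
    return {
        "chi2/df < 2": chi2_df_ratio < 2.0,
        "RMSEA < 0.08": rmsea < 0.08,
        "RMSEA 90% upper bound < 0.08": rmsea_upper90 < 0.08,
    }

def is_significant(t_value: float) -> bool:
    """Two-tailed significance at p < 0.05: |t| > 1.96."""
    return abs(t_value) > 1.96

print(fit_checks(1.806, 0.0577, 0.0662))              # all True for the reported model
print(is_significant(3.606), is_significant(1.532))   # True, False (e.g. H3 vs H1)
```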
Model amendments
This step of the data analysis process was to check all the hypotheses and, predicated on the empirical results of the path coefficients, to determine whether each defined hypothesis could be considered supported or not supported. Based on the observations and the empirical outcomes, it can be confirmed that not all of the principal relationships in the theoretical model are supported and statistically significant. The model was re-specified by adding, deleting or amending approximate parameters in the proposed theoretical model to establish an improved goodness-of-fit value (Hair et al. 2009).

Empirical results
Several SEM steps were applied to the model to evaluate whether the various hypotheses associated with the model ought to be accepted or rejected, and the results are presented below.

Social investment
H1 There is a positive relationship between the importance of sustained social investment and good governance. It was found that social investment is not statistically significant to the perceived success of good governance, with a path coefficient = 0.214; t-value = 1.532; p > 0.05. Hypothesis 1 was therefore rejected. Not supported

Stakeholder engagement
H2 There is a positive relationship between the importance of ongoing stakeholder engagement and good governance. This indicated that stakeholder engagement does not influence the perceived success of good governance, with a path coefficient = 0.330; t-value = 0.717; p > 0.05. Hypothesis 2 was therefore rejected. Not supported

Sustainable programmes
H3 There is a positive relationship between the importance of sustainable programmes and good governance. It was found that the importance of sustainable programmes is positively related to the intervening variable of good governance, with a path coefficient = 0.332; t-value = 3.606; p < 0.001. Hypothesis 3 was therefore accepted. Supported

Broad-Based Black Economic Empowerment
H4 There is a positive relationship between the importance of broad-based black economic empowerment policy and good governance. It was found that the importance of broad-based black economic empowerment is not statistically significant to the perceived success of good governance, with a path coefficient = 0.113; t-value = 1.280; p > 0.05. Hypothesis 4 was therefore rejected. Not supported

Executive leadership
H5 There is a positive relationship between the importance of executive leadership and good governance. Consistent with the overall findings reported in the conclusions, executive leadership was found to be positively related to the intervening variable of good governance. Hypothesis 5 was therefore accepted. Supported

Social investment
H6 There is a positive relationship between the importance of sustained social investment and successful women's empowerment. This indicated that social investment is positively related to the dependent variable of successful women's empowerment, with a path coefficient = 0.382; t-value = 3.156; p < 0.01. Hypothesis 6 was therefore accepted. Supported

Stakeholder engagement
H7 There is a positive relationship between the importance of ongoing stakeholder engagement and successful women's empowerment. It was found that stakeholder engagement is negatively related to the dependent variable of successful women's empowerment, with a path coefficient = -0.188; t-value = -3.726; p < 0.001. Hypothesis 7 was therefore rejected. Not supported

Sustainable programmes
H8 There is a positive relationship between the importance of sustainable programmes and successful women's empowerment. It was found that sustainable programmes are not statistically significant in relation to the dependent variable of successful women's empowerment, with a path coefficient = -0.032; t-value = -0.500; p > 0.05. Hypothesis 8 was therefore rejected. Not supported

Broad-Based Black Economic Empowerment
H9 There is a positive relationship between the importance of B-BBEE policy and successful women's empowerment. It was found that B-BBEE policy is positively related to the dependent variable of successful women's empowerment, with a path coefficient = 0.326; t-value = 3.357; p < 0.001. Hypothesis 9 was therefore accepted. Supported

Executive leadership
H10 There is a positive relationship between the importance of executive leadership and successful women's empowerment. This indicates that executive leadership is positively related to the dependent variable of successful women's empowerment, with a path coefficient = 0.468; t-value = 3.415; p < 0.001. Hypothesis 10 was therefore accepted. Supported

Good governance
H11 The hypothesis for good governance states that there is a positive relationship between the importance of good governance and successful women's empowerment. It was found that good governance is not statistically significant to the perceived success of women's empowerment, with a path coefficient = 0.143; t-value = -1.322; p > 0.05. Hypothesis 11 was therefore rejected. Not supported

Figure 2 illustrates the final model with the hypotheses, path coefficients and t-values.

Conclusions
The structural equation modelling examined the socio-economic empowerment of women within the renewable energy sector of the Republic of South Africa. The model was predicated on the scenario that, if women were economically empowered, they would have greater social status. The results revealed that executive leadership positively influences good governance and successful women's empowerment. The results also indicated that social investment and broad-based black economic empowerment positively influence successful women's empowerment, and that sustainable programmes are a positive contributing factor to good governance. This study has the potential to contribute to future developments in the socio-economic empowerment of women by recommending innovative approaches to mainstream gender in the renewable energy sector, thereby improving the lives of communities, and women, in South Africa and elsewhere.
2020-07-16T09:06:33.573Z
2020-06-14T00:00:00.000
{ "year": 2020, "sha1": "2f4dc52cbf07c0d43c23369e784b60b5d0feba97", "oa_license": "CCBYSA", "oa_url": "https://journals.assaf.org.za/index.php/jesa/article/download/6166/10217", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f33d6fd49dff4d84df89b1c4cae2b845ab2d730b", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Political Science" ] }
234625854
pes2o/s2orc
v3-fos-license
A flour impurity detection system based on image processing . Flour plays an important role in People's Daily consumption, and the content of impurities in flour indicates the quality of flour. At present, most domestic factories are using magnifying glass and other simple tools for impurity detection. This method is troublesome and does not meet the requirement of precision. This paper designs an automatic impurity detection system based on image processing, which not only improves the detection efficiency, but also greatly improves the detection accuracy. The basic process of this system is to grayscale the image obtained by photographing, then carry out local entropy transformation, and then map to form entropy image. Finally, the impurity detection is completed after image filtering, image segmentation and edge detection. Research background and significance With the development of economy, people pay more and more attention to the health of diet, and the content of impurities in flour directly determines the quality of flour, which is highly valued by manufacturers and consumers. However, the traditional detection methods are mostly manual and simple physical ones. In this way, the detection accuracy is not high, resulting in excessive flour impurities and other quality problems. With the rapid development of computer system and related hardware, the detection accuracy is much higher than the traditional manual physical detection, so the detection of impurities through the computer image processing system can not only reduce the labor cost, but also improve the detection accuracy. Image preprocessing Images are usually obtained by taking pictures, so it is possible to miss some places by simply using traditional visual observation for unclear points. Before using the computer to process the image, the image should be strengthened. The purpose of doing so is to improve the image quality and facilitate subsequent processing. Image enhancement processing uses image graying, local entropy transformation and other related technologies [1]. Introduction to Digital Image When a computer processes an image, it converts the image signal into a digital signal. Generally, the image is two-dimensional and the function f= (x, y) can be used. The x and y here represent the coordinate position in the two-dimensional array, and the function value represents the pixel value at a certain point. The image is discretized into digital image which can be processed by computer. A Image graying After the completion of image acquisition, put in the MATLAB directory, and according to the path to find the load, and then the image grayscale processing. The so-called image graying is obtained by changing the gray value of the pixel by changing certain rules of the image.Since the acquired images are usually color images, it is necessary to grayscale them. Color images are composed of red, green and blue. So it can be expressed as (0,0,0) to (255,255,255), where the former represents pure black and the latter pure white. The purpose of image graying is to make the light and shade of the image become the main information. So as to improve the detection accuracy. Generally speaking, there are three commonly used schemes for image gray change: the maximum method, the average method, and the weighted average method.Here are three formulas for specific algorithms [3]. This paper adopts the method of maximum value to change the gray level of the image. 
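The three graying schemes named above can be written compactly. A minimal numpy sketch follows; the paper itself works in MATLAB and does not print its coefficients, so the weighted variant below assumes the common ITU-R BT.601 weights (0.299, 0.587, 0.114):

```python
import numpy as np

def to_gray(rgb: np.ndarray, method: str = "max") -> np.ndarray:
    """rgb: H x W x 3 uint8 colour image."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    if method == "max":            # maximum method: Gray = max(R, G, B)
        gray = np.maximum(np.maximum(r, g), b)
    elif method == "mean":         # average method: Gray = (R + G + B) / 3
        gray = (r + g + b) / 3.0
    else:                          # weighted average (BT.601 weights assumed)
        gray = 0.299 * r + 0.587 * g + 0.114 * b
    return gray.astype(np.uint8)
```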
Figure 1. Image after graying

The gray entropy of the image
Entropy is a measure of the uncertainty of an event, and its value can effectively reflect the information contained in the event. Generally, the higher the entropy value, the greater the degree of disorder; otherwise, the smaller the degree of disorder [4]. In image processing, the local entropy is defined according to the orderliness of the distribution of the pixels, and this value reflects the richness of the image information. For a neighborhood window of size M × N centered on the current pixel, the local entropy is defined as

H = -\sum_{i=1}^{M} \sum_{j=1}^{N} p_{ij} \ln p_{ij}, \qquad p_{ij} = f(i, j) \Big/ \sum_{i=1}^{M} \sum_{j=1}^{N} f(i, j),

where p_{ij} is the distribution probability of point (i, j), f(i, j) is the pixel value at point (i, j), M × N is the neighborhood window centered on the current pixel, and H is the local entropy value of that window [5]. Image gray entropy reflects the degree of difference between pixel gray levels. Because impurity spots exist in the flour image, there are large differences between the gray levels of pixels in those regions; a smaller entropy value is obtained where the gray levels differ strongly, indicating a greater degree of disorder, while a larger entropy value indicates that the gray level of the region is relatively uniform. For the flour-impurity image, the background gray distribution is uniform or fluctuates only slightly, so its entropy values can be regarded as approximately equal [6]. However, the gray distribution of the impurities in the flour image is not uniform and jumps compared with that of the background, so the entropy values obtained there also change. In general, entropy is calculated over the whole image of size W × L. First a small window is taken, generally a square; with the current pixel as the center, the probabilities of the points in this window are calculated, these probabilities are substituted into the gray entropy formula, and the gray entropy of every pixel in the image is calculated in turn. Then the obtained gray entropy values are mapped to the [0, 255] grayscale range and reverse-color processing is carried out to obtain the entropy image. The result is an inverted, enhanced image: compared with the previous image, the entropy picture of the flour bran becomes clearer and brighter. The entropy image obtained by the MATLAB simulation is shown in the corresponding figure.

Image filtering
In the process of image transmission and processing, noise interference may occur to a certain extent, which affects the processing and observation of images and ultimately the detection results. To improve detection accuracy and reduce noise pollution, filtering measures should be taken. Commonly used methods include median filtering and mean filtering; the median filter is adopted in this paper. In MATLAB, the medfilt2() function is used for filtering [7]. The resulting image is shown in the following figure; it can be seen that the filtered image effectively removes noise and is clearer.
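A sketch of the entropy transformation and filtering steps in MATLAB follows. Two assumptions: entropyfilt computes a histogram-based local entropy, which approximates rather than exactly matches the pixel-probability definition above, and the 9×9 and 3×3 window sizes are illustrative choices, not values from the paper.

% Local-entropy transformation, mapping to [0, 255], inversion,
% and median filtering (G is the grayscale image from the previous step).
E  = entropyfilt(G, true(9));    % local entropy over a 9x9 neighborhood
En = uint8(255 * mat2gray(E));   % map entropy values onto [0, 255]
En = imcomplement(En);           % reverse-color processing (inverted image)
F  = medfilt2(En, [3 3]);        % 3x3 median filter to suppress noise
figure, imshow(F), title('Filtered entropy image');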
Image recognition and marking
Image segmentation is one of the most important techniques in computer visual identification and is the key step from image processing to image analysis. On the one hand, it is the foundation of target expression and has an important influence on feature measurement; on the other hand, image segmentation, together with target expression, feature extraction and parameter measurement, converts the original image into a mathematical expression, which makes the analysis and understanding of the image possible.

Threshold segmentation
Since the values of a grayscale image are numbers in [0, 255], threshold segmentation determines a threshold value and compares each gray value with this number to separate background and target. A threshold value T is set and the gray values are denoted f(x, y); after processing, only black and white remain [8]. After the gray entropy calculation, the expression is as follows:

g(x, y) = \begin{cases} 255, & f(x, y) \ge T \\ 0, & f(x, y) < T \end{cases}

The methods of threshold segmentation include single-threshold segmentation and multi-threshold segmentation. MATLAB was used for the simulation; based on the queried reference data, the threshold was set at about 95. In the image after threshold segmentation, the distribution of impurities can be clearly seen.

Edge detection
Image edge detection is also a kind of threshold segmentation: according to target and background, the image is divided into black and white parts. The visual difference is that edge detection segments only the edges of the bran spots. Its function is to extract important feature information (texture, shape) in the image for analysis. From a mathematical perspective, edge points of an image are generally extreme points or discontinuity points, and edge detection is generally expressed through the first and second derivatives [9]: the maximum of the first derivative indicates the edge position, and the zero crossing of the second derivative likewise indicates the edge position. This paper uses the Sobel operator. The Sobel operator is a first-order derivative detection operator with two 3×3 templates, one horizontal and one vertical. It convolves each template with the image at every pixel and applies an appropriate threshold to detect the image edges. Its standard templates and gradient magnitude are

G_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \quad G_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}, \quad G = \sqrt{G_x^2 + G_y^2},

where a positive horizontal response corresponds to the left side being darker than the right side. Since the MATLAB toolbox provides the edge function with a Sobel option, the function can be used directly to detect the edges [10]. The rendering is as follows.

Figure 5. Image after edge detection
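A short MATLAB sketch of these two segmentation steps follows, using the threshold of about 95 quoted above; F denotes the filtered entropy image from the previous sketch.

% Single-threshold segmentation (T = 95) and Sobel edge detection
% (F is the filtered entropy image from the earlier sketch).
T  = 95;
BW = F > T;                 % binary image: white pixels = candidate impurities
Ed = edge(F, 'sobel');      % Sobel edge map via the toolbox edge() function
figure, imshow(BW), title('Threshold segmentation (T = 95)');
figure, imshow(Ed), title('Sobel edge detection');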
Flour impurity mark
Marking flour impurities plays an important role in this project. Although many obvious impurities can be seen in the segmented, binarized image, they can only be picked out there by the naked eye; to identify the impurities in the original image, relevant feature information of the flour impurities must be extracted from the segmented impurity regions. After this series of processing steps, and in order to facilitate the observation of the impurities in the flour, the impurities can be circled with MATLAB toolbox functions to obtain the final detection picture.

Conclusion
The flour impurity detection system based on image processing designed in this paper is faster and more accurate than the traditional detection method. Through image graying, local entropy transformation, edge detection and other steps, the flour impurities are finally detected and marked. The system can also be extended to other related detection scenarios. Finally, relevant tests can then determine whether the products are qualified.
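The circling step described in the marking section above can be sketched in MATLAB as follows; the minimum blob area of 5 pixels and the 2-pixel circle padding are illustrative assumptions, and BW and I refer to the binary segmentation result and the original photograph from the earlier sketches.

% Circle each detected impurity on the original image.
BW2   = bwareaopen(BW, 5);                        % discard tiny noise blobs (area < 5 px)
stats = regionprops(BW2, 'Centroid', 'EquivDiameter');
figure, imshow(I), title('Final detection picture'); hold on;
for k = 1:numel(stats)
    % draw a red circle slightly larger than each impurity blob
    viscircles(stats(k).Centroid, stats(k).EquivDiameter/2 + 2, 'Color', 'r');
end
hold off;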
2020-10-28T19:12:02.874Z
2020-09-01T00:00:00.000
{ "year": 2020, "sha1": "103405c38e32a7e58a8d105311980fee07a90f22", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1634/1/012129", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "123982463077f7eb557450aa77c4242498329214", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Materials Science" ] }
53988796
pes2o/s2orc
v3-fos-license
Hybrid Adsorptive and Oxidative Removal of Natural Organic Matter Using Iron Oxide-Coated Pumice Particles

The aim of this work was to combine the adsorptive and catalytic properties of iron oxide surfaces in a hybrid process using hydrogen peroxide and iron oxide-coated pumice particles to remove natural organic matter (NOM) in water. Experiments were conducted in batch, completely mixed reactors using various original and coated pumice particles. The results showed that both adsorption and catalytic oxidation mechanisms played a role in the removal of NOM. The hybrid process was found to be effective in removing NOM from water having a wide range of specific UV absorbance values. Iron oxide surfaces preferentially adsorbed UV280-absorbing NOM fractions. Furthermore, the strong oxidants produced from reactions among iron oxide surfaces and hydrogen peroxide also preferentially oxidized UV280-absorbing NOM fractions. Preloading of iron oxide surfaces with NOM slightly reduced the further NOM removal performance of the hybrid process. Overall, the results suggested that the tested hybrid process may be effective for the removal of NOM and the control of disinfection by-product formation.

Introduction
Disinfection by-products (DBPs) form in drinking water as a result of reactions between oxidants/disinfectants such as chlorine and natural organic matter (NOM). One of the major challenges for drinking water treatment is to control the formation of carcinogenic and mutagenic DBPs. Today, more stringent standards are being imposed in developed countries in an effort to minimize the impacts of DBPs on public health. In addition, future regulations are expected to focus more on individual instead of combined DBPs, because recent toxicology studies indicate that individual DBP species may have different health effects. In order to comply with current and future regulations, water utilities have been exploring various strategies to minimize DBP formation. Two approaches are commonly used to meet the DBP regulations: (1) removal of the DBP precursors (i.e., DOM: dissolved organic matter, the dissolved fraction of NOM) before chlorine addition, and (2) use of alternative disinfectants/oxidants (e.g., ozone, chloramines, chlorine dioxide, or UV light) instead of chlorine. The use of alternative disinfectants may not always be feasible in existing treatment plants, and they may produce other unfavorable disinfection by-products (e.g., nitrosamines) and water quality problems (e.g., nitrification and/or elevated lead levels in some distribution systems). Since reducing the precursor levels results in a lower degree of overall DBP formation, precursor control is the most commonly used and preferred method for DBP control.
Several studies have shown that iron oxides adsorb humic materials and NOM from water [1][2][3][4][5][6][7][8][9]. Furthermore, the surfaces of various metal oxide particles including iron oxides catalyze the decomposition of oxidants (e.g., ozone and hydrogen peroxide), resulting in the formation of strong oxidants such as hydroxyl radicals [10][11][12][13][14][15][16][17][18]. As in Fenton reactions, the decomposition of peroxide through interactions with the surface sites of such catalysts results in the formation of strong oxidants, including hydroxyl radicals, which have been shown to effectively oxidize various synthetic organic chemicals and NOM [14]. The effectiveness of combined H2O2/iron-coated pumice processes has been attributed to the generation of highly reactive and nonselective hydroxyl radicals [16]. Therefore, iron oxide particles added to water along with oxidants may remove NOM and/or synthetic chemicals through both adsorption and catalytic oxidation mechanisms. Zelmanov and Semiat [19] demonstrated the photochemical and catalytic properties of iron-based nanoparticles for the degradation of some organic pollutants in wastewaters in the presence of hydrogen peroxide. Iron oxide particles added to hybrid ultrafiltration processes have been reported to improve both NOM removal and membrane flux [20].

Iron oxide can be immobilized on various support materials such as sand, soil, and zeolite. Pumice was selected as the granular support medium for iron oxide coating for the purposes of this study. Pumice has been used as an adsorbent and photocatalyst in water treatment [7,16,21]. As an alternative to other support materials such as sand, the advantages of pumice particles are that they are highly porous and have higher surface areas, which can immobilize larger amounts of metal oxide catalysts and thus provide more reaction sites. In our previous work, we investigated adsorptive NOM removal from water using iron oxide-coated natural pumice particles and found that, for all pumice particle size fractions, the coating of natural pumice with iron oxide significantly increased NOM uptake on both an adsorbent mass and a surface area basis [7]. Furthermore, in another previous study, iron oxide-coated pumice particles were found to be effective in catalyzing the decomposition of hydrogen peroxide and removing NOM from a humic acid solution and a raw drinking water source with a low specific UV absorbance (SUVA254) value [16]. Therefore, based on the findings of our previous studies, the objective of this work was to combine both the adsorptive and catalytic properties of iron oxide-coated pumice particles in a hybrid process. The main objective was to investigate the effectiveness of this hybrid process in the removal of NOM from water. In our previous work [7], we investigated the adsorptive NOM removal from water using iron oxide-coated natural pumice particles. In our other previous study [16], the oxidative removal of natural organic matter using hydrogen peroxide and iron-coated pumice particles was evaluated in a natural water with a relatively low specific UV absorbance (SUVA) value (SUVA254: 1.9 L/mg-m). The main difference of this study from the previously published work [16] was to investigate the effectiveness of iron oxide-coated pumice and volcanic slag particles in removing DBP precursors from a high-SUVA254 water. Natural water with SUVA254 values less than 2.0 generally contains mainly hydrophilic and low molecular weight NOM moieties [22,23]. On the other hand, water
with higher SUVA254 values (i.e., >4 L/mg-m) mainly contains humic materials of higher molecular weight and hydrophobic character. Such water may, after chlorination, exhibit higher concentrations of disinfection by-products [22,23]. Iron oxides have been shown to exhibit higher adsorption capacity for larger molecular size, hydrophobic NOM fractions and for acidic NOM fractions rich in carboxyl/hydroxyl functional groups, such as aromatic moieties in humic materials [16]. Thus, it was hypothesized that iron oxide-coated pumice and slag particles would be more effective for NOM removal and DBP control in high-SUVA water. Natural surface water with a high SUVA value and a high dissolved organic carbon (DOC) concentration was chosen for this purpose. The raw water sample was obtained from the influent of the drinking water treatment plant in Myrtle Beach (MB), South Carolina (SC), USA. NOM in MB water was concentrated using a pilot-scale reverse osmosis (RO) membrane system, which allowed conducting all adsorption experiments at a constant initial dissolved organic carbon (DOC) concentration (4.1-4.2 mg/L) and SUVA254 (4.8 L/mg-m). Furthermore, iron oxide-coated pumice particles were preloaded with NOM prior to the application of the hybrid process to evaluate the impact of the preloading of iron oxide surfaces on process performance.

Experimental and Methods
Three different natural pumice sources in Turkey (Isparta, Kayseri, and Nevsehir) with varying physicochemical characteristics were used in this study. The following codes are used throughout this paper for these pumice sources: Isp: Isparta; Kay: Kayseri; Nev: Nevsehir. Two different particle size fractions (<63 and 250-1000 μm) were obtained for each pumice source after grinding and sieving. Pumice samples were used as received and coated with iron oxides. The pumice fractions were coated with iron oxides using reagent grade FeCl3·6H2O, employing the method reported by Lai et al. [24] and Lai and Chen [6] with some modifications. Detailed information on the employed coating procedures can be found in our previous publication [7]. Control experiments demonstrated that iron oxide precipitates/colloids were effectively removed from the cleaning solutions during coating and that catalytic NOM oxidation was due to the iron oxide-coated pumice particles, not colloidal iron in the solution. Each uncoated (original) and coated pumice fraction was characterized by measuring specific surface area, point of zero charge (pHPZC), iron content, and total surface acidic and basic groups, and by scanning electron microscopy with energy dispersive X-ray spectrometry (SEM-EDX) and X-ray fluorescence (XRF) analysis. Iron contents of the coated pumices were measured by acid digestion and subsequent AA spectrometry. The detailed physicochemical characteristics of all pumice particles were presented elsewhere [7,16]. The NOM used in this study was collected from the influent of the Myrtle Beach (MB) drinking water treatment plant in South Carolina using a reverse osmosis (RO) membrane system to represent a raw water source with high DOC and SUVA values. The RO concentrate was diluted with distilled and deionized water (DDW) to obtain a constant initial DOC concentration (4.1-4.2 mg/L) for all experiments.
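Since SUVA values are used throughout to classify the water sources, it may help to recall the standard definition from the drinking-water literature (not restated in the paper):

\mathrm{SUVA}_{254} = \frac{\mathrm{UV}_{254}\ [\mathrm{cm^{-1}}]}{\mathrm{DOC}\ [\mathrm{mg/L}]} \times 100 \quad [\mathrm{L\,mg^{-1}\,m^{-1}}]

For example, the MB water used here (DOC ≈ 4.15 mg/L, SUVA254 ≈ 4.8 L/mg-m) implies UV254 ≈ 4.8 × 4.15 / 100 ≈ 0.20 cm⁻¹; this absorbance value is back-calculated for illustration, not reported in the text.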
All hybrid process experiments were conducted in completely mixed batch reactors (CMBRs). First, kinetic experiments in CMBRs were performed with constant pumice (3000 mg/L) and peroxide (300 mg/L) dosages at periods of 1, 2, 4, 8, 12, 24, 36, and 48 h. The results indicated that no further statistically significant NOM removal occurred after 24 h of reaction with peroxide and iron oxide-coated pumice. Therefore, a 24 h reaction period was employed for all remaining batch experiments. After the kinetics experiments, pumice and peroxide were also dosed alone to determine the extent of NOM removal by adsorption or peroxide oxidation only. Then catalytic oxidation experiments were performed at various uncoated/coated pumice (0-3000 mg/L) and hydrogen peroxide (0-1000 mg/L) dosages. Three different natural pumice sources in Turkey (Isparta, Kayseri, and Nevsehir) with two different particle size fractions (<63 and 250-1000 μm) were tested to determine the effect of pumice source and particle size on NOM removals. Four different iron-coated pumice dosages (30, 100, 1000, and 3000 mg/L) were examined to determine the effects of catalyst dosage on the decomposition of hydrogen peroxide. Three different NOM sources were used in the experiments: a surface water (Alibeykoy reservoir) supplying a portion of the City of Istanbul's drinking water demand; a surface water obtained from the influent of the drinking water treatment plant in Myrtle Beach (MB), South Carolina (SC), USA; and a humic acid (HA) isolate purchased from Acros Organics. These NOM sources were selected since they represented low- (Alibeykoy) and high-SUVA (MB and HA) water, enabling the evaluation of the various pumices in removing NOM with a wide range of chemical characteristics. All tests were conducted in parallel CMBRs. Statistical analysis of the data was based on t-statistics, and 95% confidence intervals were calculated from parallel tests and triplicate measurements. The CMBRs used were 255 mL glass amber bottles with PTFE screw-caps (solution volume: 200 mL). All experiments were conducted at a constant temperature of 20 ± 1 °C. After dosing coated pumice and/or peroxide, the CMBRs containing MB water were kept well mixed (150 rpm) under oxic conditions in a temperature-controlled orbital shaker. The CMBRs were covered with aluminum foil to prevent the introduction of light. After employing the hybrid process, the bottles were opened and samples were taken to measure the residual peroxide and to determine the amount of sodium sulfite (Na2SO3) solution required to quench the residual peroxide and stop the reactions. Solutions were filtered (0.45 μm polyethersulfone membrane filter) to remove pumice particles and analyzed for pH, UV absorbance, and DOC concentration to quantify NOM removal. Filter papers were prewashed with 1 L of DDW to prevent potential leaching of materials from the filter matrix. Changes in DOC concentrations in control bottles (without peroxide and pumice dosing) were not statistically significant based on 95% confidence intervals, indicating the stability of NOM during mixing.
Preloading experiments using MB water were conducted to evaluate the impacts of NOM preloading on the further adsorptive and catalytic properties of iron oxide surfaces. The smaller particle size fraction (<63 μm) of coated Isp pumice was used for the preloading experiments. Three different MB raw water samples with different initial DOC concentrations (1, 5, and 10 mg/L) were prepared by diluting the RO concentrate of MB water with DDW. The preloading experiments were conducted in CMBRs. After dosing a coated pumice dose of 2000 mg/L to each CMBR having a different initial DOC concentration, the CMBRs were mixed (150 rpm) under oxic conditions in a temperature-controlled orbital shaker for one week. Preliminary tests indicated that one week of mixing was more than sufficient to reach adsorption equilibrium on the iron oxide surfaces. At the end of the preloading experiments, water samples were taken and filtered prior to UV254 absorbance and DOC measurements. All pumice particles remaining on the filter papers were collected and dried at 80 °C in an oven until the moisture was removed and a constant weight was achieved. Using these preloaded coated pumice particles, hybrid process experiments were then conducted and the results were compared with those of non-preloaded coated particles while the other experimental variables were kept constant.

All chemicals used were of either analytical or reagent grade. DDW was used for stock solution preparations and dilutions. DOC concentrations were measured using a high-sensitivity TOC analyzer (TOC-VCPH, Shimadzu) employing high-temperature combustion. A UV-visible spectrophotometer (UV-1601, Shimadzu) was used to measure the UV absorbances (in triplicate) in water samples. Hydrogen peroxide concentration was measured with a titrimetric test kit (HYP-1, Hach-Lange).

Results and Discussion
The SEM images of original and iron oxide-coated Isp pumice particles are shown in Figures 1(a) and 1(b), respectively. The porous structure of the original pumice particles and the partial coverage/filling of these pores by the coating can be clearly seen in these images. Similar to our SEM results, a previous study found that pumice surfaces were apparently covered by iron oxides formed during the coating process [25]. The various physicochemical characteristics of all pumice particles were presented in detail in our previous publications [7,16].
Initially, experiments were conducted by dosing hydrogen peroxide alone to determine the degree of NOM removal in MB raw water by peroxide oxidation only. The results showed that NOM removal by peroxide oxidation was minimal; less than a 6% reduction in UV254 absorbance was achieved with peroxide dosages up to 1000 mg/L (Figure 2(a)). Similar results were also found for DOC (Figure 2(b)). These findings were expected since peroxide is known to be generally ineffective in oxidizing refractory synthetic chemicals or NOM. For all the tested original (uncoated) pumice sources and particle size fractions, the adsorptive removal of NOM in MB water without peroxide was also minor. The maximum UV254 absorbance and DOC reductions achieved with uncoated particles were 10% and 5%, respectively, even at the maximum dosages. On the other hand, coating the pumice particles with iron oxide significantly enhanced adsorptive NOM removal. When a dose of 3000 mg/L coated pumice was used, UV254 absorbance and DOC removals as high as 43% and 36%, respectively, were obtained across the tested pumice types and size fractions. These results overall suggest that iron oxide-coated pumice particles may be effective in the adsorptive removal of NOM in water with high SUVA254 values (4.84 L/mg-m for the tested MB water). In our previous work [7], iron oxide-coated pumice or volcanic slag particles were also found to be effective adsorbents for the removal of NOM in water samples with lower SUVA254 values (<2.0 L/mg-m). While SUVA254 values less than 2.0 L/mg-m generally indicate that NOM is mainly of hydrophilic character with lower molecular weight fractions (i.e., nonhumic materials), SUVA254 values higher than 4.0 L/mg-m indicate water with dominantly humic materials of higher molecular weight and a higher degree of aromaticity [23,24]. Thus, iron oxide-coated pumice particles appear to be effective adsorbents for a wide range of raw water sources having different NOM moieties.
When hydrogen peroxide and iron oxide-coated pumice particles were dosed together, both UV254 absorbance and DOC removal increased further. A 73% reduction in UV254 absorbance was achieved with iron oxide-coated Isp pumice (<63 μm) at pumice and peroxide doses of 3000 mg/L and 1000 mg/L, respectively (Figure 2(a)). The DOC removal was 57% at these doses (Figure 2(b)). Increased NOM removal was detected for all coated pumice types and size fractions when dosed with peroxide. Furthermore, as the peroxide dosages were increased at a constant coated pumice dose, both UV254 absorbance and DOC reductions also increased. These results demonstrate that, in addition to their adsorbent properties, iron oxide surfaces also catalyze the decomposition of hydrogen peroxide, resulting in the formation of strong oxidants, probably hydroxyl radicals. Thus, it is apparent that both adsorption and surface catalytic oxidation mechanisms play a role in the removal of NOM by this hybrid process. Control experiments demonstrated that the iron oxide species bound on the pumice surfaces formed an effective and stable coating. Total iron release to solution was always less than 0.15 mg/L at pH values of 5.5-8.5, even at the maximum dose of coated pumice (3000 mg/L) after 24 h of mixing (peroxide: 150 mg/L). This finding indicated two important points: (1) potential iron release to water in this hybrid process is not a concern at the neutral pH values of typical natural water; (2) the strong oxidants are produced by the coated iron oxides on the pumice surfaces, not by colloidal or soluble iron species in water. Figure 3 shows the UV280 absorbance reductions achieved by this hybrid process in different water sources. Some of the results for Alibeykoy raw water (Istanbul, Turkey) and the humic acid (HA) solution were presented in our previous publication [16]. Since sodium azide was added to the Alibeykoy water and the HA solution to prevent microbial NOM degradation during their long storage, UV280 absorbance data are given in Figure 3: while sodium azide absorbs UV light at the 254 nm wavelength, it does not absorb UV at 280 nm. Therefore, UV280 absorbance measurements were used to compare the NOM removal performances in these water samples. It was found that the degree of NOM removal by the hybrid process generally increased with increasing SUVA280 values of the tested water samples (Alibeykoy water: 1.41, MB water: 3.64, and HA solution: 5.11 L/mg-m) (Figure 3). The UV280 absorbance reductions achieved in the Alibeykoy water, MB water, and HA solution were 49%, 60%, and 70%, respectively, employing doses of 150 mg/L peroxide and 3000 mg/L iron oxide-coated pumice. At the same doses, the corresponding DOC removals were 22%, 53%, and 61%. Higher DOC and UV280 absorbance reductions were achieved when increasing the peroxide dosage from 150 to 1000 mg/L. For example, when doses of 1000 mg/L peroxide and 3000 mg/L coated pumice were used, the DOC removals achieved in the Alibeykoy water, MB water, and HA solution were 35%, 57%, and 73%, respectively. Alver et al.
[25] studied the catalytic activity of iron-coated pumice particles used as heterogeneous catalysts in the oxidation of humic acid solution. Consistent with our results, they reported that DOC reduction reached 74% at the maximum iron-coated pumice dosage (5000 mg/L) and the maximum H2O2 dosage (200 mg/L), whereas for adsorption alone it was 11%. These findings indicate that (1) the hybrid process is effective for a wide range of natural water sources having SUVA280 values from 1.41 to 5.11 L/mg-m and (2) a higher degree of NOM removal can be achieved in water sources dominated by humic materials rich in aromatic structures. All experiments were conducted at the pH of the original solution (8.0, 7.1, and 6.8 for Alibeykoy, M. Beach, and humic acid, respectively) without any buffer and at a constant temperature of 20 ± 1 °C. Final pH values after the experiments were between 7.8-8.0, 6.2-7.6, and 6.4-7.2 for the Alibeykoy water, M. Beach water, and humic acid solution, respectively.

It was observed that SUVA280 values decreased following the application of the hybrid process with increasing peroxide and coated pumice doses in all the tested water samples. This trend suggests that (1) iron oxide surfaces preferentially adsorb UV280-absorbing NOM fractions, such as aromatic moieties in humic materials, consistent with previous studies [3,5,7], and (2) the strong oxidants produced as a result of surface reactions of iron oxides and hydrogen peroxide also preferentially oxidize UV280-absorbing NOM fractions in solution. It was difficult to determine exactly which mechanism contributed more to the preferential removal of UV280-absorbing NOM fractions. This is due to the observed synergistic effects and the possibility that adsorbed UV280-absorbing NOM fractions on iron oxide surfaces may have been oxidized as a result of surface reactions. Nevertheless, the hybrid process preferentially removed the UV280-absorbing NOM fractions by the above-mentioned mechanisms, which is very important since the major disinfection by-product (DBP) precursors during chlorination are believed to be UV-absorbing NOM fractions. Alver et al. [25] reported that the removal of trihalomethane formation potential (THMFP) was 65% at a 100 mg/L H2O2 and 2000 mg/L iron-coated pumice dosage for HA solutions. Thus, this property of the hybrid process may be useful for controlling DBP formation in drinking water treatment.
Many researchers [26,27] have reported, for the catalytic oxidation of HA solutions by the iron-coated pumice/H2O2 process, that the initial H2O2 concentration plays a very important role in the generation of the hydroxyl radicals that oxidize NOM. NOM removal in MB water increased as the coated pumice and peroxide doses were increased. At a constant pumice dose, UV254 absorbance reductions increased when the peroxide dose was increased. Similarly, at constant peroxide doses, UV254 absorbance reductions increased when the pumice dose was increased. Furthermore, the ratio of H2O2 to coated pumice dosage (mg/mg) also impacted NOM removal; increasing this ratio also increased NOM removal (Figure 4). The range of H2O2/coated pumice dose ratios tested in MB water was 0.02-33. About 73% UV254 absorbance and 57% DOC reductions were achieved when this ratio was 0.33 (1000 mg/L peroxide and 3000 mg/L coated pumice dose). This ratio may be optimized based on the target DOC removal and cost considerations in a specific application. For each peroxide dose, peroxide consumption generally increased with increasing coated pumice dose (Figure 5). In addition, positive correlations were found between peroxide consumption and UV254 absorbance or DOC reductions at each pumice dosage level (data not shown). These two findings further proved that iron oxide surfaces are responsible for the catalytic decomposition of peroxide and the production of strong oxidants, and that the oxidative NOM removals are directly linked to the produced oxidants. If NOM removals had not increased with increasing peroxide consumption, this would indicate that NOM removal occurred only through adsorption. Further evidence supporting this is that no correlation between peroxide consumption and NOM removal was observed when only peroxide was dosed (no coated pumice). Overall, the results show that NOM removal by the iron oxide-coated pumice and peroxide is the result of a hybrid process of adsorption and catalytic oxidation.

The effects of the pumice source on NOM removals are shown in Figure 6. For both particle size fractions tested, iron oxide-coated Isp pumice provided the highest extent of UV254 absorbance and DOC removals among all pumice sources. This result arises mainly because (1) the most effective coating, as measured by iron content, was achieved on Isp pumice particles and (2) Isp pumice particles generally had higher specific surface areas. A strong linear correlation (R² = 0.99) was found between the iron contents of coated pumice particles and the UV254 absorbance reductions. The iron contents of the <63 μm coated Isp, Kay, and Nev pumices were 16.2, 13.4, and 11.1 mg Fe/g, respectively. The specific surface areas of the <63 μm coated Isp, Kay, and Nev pumices were 12.9, 12.4, and 10.1 m²/g, respectively. As shown in Figure 6, the UV254 absorbance reductions achieved by the different pumice sources were generally in the following order: Isp > Kay > Nev pumice particles. This trend is consistent with the iron content and specific surface area data presented above. Overall, these data indicate that the mass of iron oxides coated on the surfaces and the specific pumice surface area are important factors for the NOM removal performance of the hybrid process. Apparently, as the values of both factors increase, more reaction sites become available for producing strong oxidants. In addition, the adsorptive removal of NOM also increases due to more available sorption sites.
Iron oxide-coated pumice particles were preloaded with NOM prior to the hybrid process to evaluate the impact of preloading of the iron oxide surfaces on process performance. Iron oxide surfaces may become filled/saturated as a result of NOM adsorption during the hybrid process if the adsorbed NOM moieties on the iron oxide surfaces are not continuously oxidized and removed by the produced strong oxidants. Thus, the catalytic and oxidant-production properties of iron oxide surfaces may diminish over time due to NOM adsorption. In an effort to test this hypothesis, hybrid process experiments in MB water were conducted using preloaded or non-preloaded coated pumice particles employing 1000 mg/L pumice and peroxide dosages. The results showed that preloading of the iron oxide surfaces reduced the NOM removal performance of the hybrid process by only about 7-14% (as measured by DOC and UV254 absorbance). These reductions in NOM removal performance increased further as the initial DOC concentrations during preloading were increased. The reductions in DOC removals with respect to those achieved by non-preloaded particles were 7%, 12%, and 14% for initial DOC concentrations of 1, 5, and 10 mg/L during preloading, respectively. In other words, as the iron oxide surfaces were preloaded with higher masses of DOC, the DOC removal performance of the hybrid process decreased further. Thus, the results indicated that the iron oxide sites were partially covered with the adsorbed NOM, which reduced the catalytic and oxidant-production properties of the iron oxide surfaces. However, the negative impact of preloading can be considered minimal, since only a 14% reduction in DOC removal performance was found even at the highest degree of preloading. Further work will be conducted to determine regeneration protocols and efficiencies for the coated pumice particles.

Concluding Remarks
Both the adsorptive and catalytic properties of iron oxide surfaces were combined in a hybrid process using hydrogen peroxide and iron oxide-coated pumice particles to remove NOM from water. The results show that both adsorption and catalytic oxidation mechanisms play a role in the removal of NOM. Iron oxide surfaces on pumice particles effectively catalyzed the decomposition of hydrogen peroxide, resulting in the formation of strong oxidants. Release of iron to water during the hybrid process was negligible at pH values of 5.5-8.5, even at the maximum coated pumice and peroxide doses. The hybrid process was effective in removing NOM from water sources with a wide range of SUVA280 values, from 1.41 to 5.11 L/mg-m. It was found that iron oxide surfaces preferentially adsorbed UV280-absorbing NOM fractions. In addition, the strong oxidants produced as a result of surface reactions between iron oxides and hydrogen peroxide also preferentially oxidized UV280-absorbing NOM fractions in the tested water samples. This property of the hybrid process may be useful for controlling DBP formation in drinking water treatment.
The amount of iron oxide coated on the surfaces and the specific pumice surface area also proved to be important factors, since more adsorbing and oxidant-producing sites became available as the values of these factors increased. Preloading of iron oxide surfaces with NOM slightly reduced the further NOM removal performance of the hybrid process, indicating that iron oxide sites were partially covered with the adsorbed NOM, which reduced the catalytic and oxidant-production properties of the iron oxide surfaces. However, the negative impact of preloading was minimal; only 14% less DOC removal was detected even at the maximum degree of preloading. In the next phase of the project, long-term experiments for the hybrid process will be conducted in both continuous-flow fixed-bed and completely mixed batch reactor configurations. Thus, the long-term performance of the process and the potential negative impacts of irreversible saturation of the iron oxide surfaces on the catalytic properties will be investigated in detail. Furthermore, since the iron oxides on pumice surfaces, acting as both catalyst and adsorbent, may need regeneration due to NOM adsorption, further tests will be conducted to determine regeneration protocols and efficiencies.

Figure 3: NOM removal performances of the hybrid process in three different water sources. <63 μm iron oxide-coated Isp pumice, pumice dose: 3000 mg/L, HA: humic acid solution. Error bars indicate the 95% confidence intervals.
Figure 4: The effects of the H2O2/iron oxide-coated pumice dose ratio (mg/mg) on NOM removals in MB water. <63 μm iron oxide-coated Isp pumice; pumice dosages in the legend are in mg/L. Error bars indicate the 95% confidence intervals.
Figure 5: The effects of iron oxide-coated pumice and hydrogen peroxide dosages on hydrogen peroxide demands in MB water. <63 μm iron oxide-coated Isp pumice; peroxide dosages in the legend are in mg/L. Error bars indicate the 95% confidence intervals.
Figure 6: The effects of the pumice source on NOM removal performances of the hybrid process. <63 μm iron oxide-coated pumice, pumice dose: 3000 mg/L. Error bars indicate the 95% confidence intervals.
2018-12-01T02:40:56.443Z
2016-03-29T00:00:00.000
{ "year": 2016, "sha1": "cf32d105ff145b6090ac74759a95334cfd282323", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/jchem/2016/3108034.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cf32d105ff145b6090ac74759a95334cfd282323", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Chemistry" ] }
2435911
pes2o/s2orc
v3-fos-license
C-C chemokine receptor type-4 transduction of T cells enhances interaction with dendritic cells, tumor infiltration and therapeutic efficacy of adoptive T cell transfer

ABSTRACT T cell infiltration at the tumor site has been identified as a major predictor of the efficacy of adoptive T cell therapy. The chemokine C-C motif ligand 22 (CCL22) is highly expressed by immune cells in murine and human pancreatic cancer. Expression of its corresponding receptor, C-C chemokine receptor type 4 (CCR4), is restricted to regulatory T cells (Treg). We show that transduction of cytotoxic T cells (CTL) with CCR4 enhances their immigration into a pancreatic cancer model. Further, we show that binding of CCR4 by CCL22 strengthens the binding of T cell LFA-1 to dendritic cell (DC) ICAM-1 and increases CTL activation. In vivo, in a model of subcutaneous pancreatic cancer, treatment of tumor-bearing mice with CCR4-transduced CTL led to the eradication of established tumors in 40% of the mice. In conclusion, CCR4 overexpression in CTL is a promising therapeutic strategy to enhance the efficacy of adoptive T cell transfer (ACT).

Introduction
ACT is a powerful approach for the treatment of different cancer types. 1 ACT uses tumor-specific T cells either isolated from the patient's own tumor or rendered tumor-specific through transduction with a given T cell receptor or chimeric antigen receptor. [2][3][4] The transfer of tumor antigen-specific cytotoxic T lymphocytes (CTL) can induce complete disease remission in some patients with metastatic melanoma, 5 Epstein-Barr virus-positive non-Hodgkin lymphoma, 6 acute lymphatic leukemia, 7 B cell lymphoma 8 or nasopharyngeal carcinoma. 9 However, the capacity of adoptively transferred T cells to invade the tumor and to induce an efficient antitumor immune response is limited. Thus, only a small subgroup of patients benefits from ACT in the long term. 10 Infiltration of CTL or other immune cells into the tumor is mainly regulated by the local chemokine milieu. Several chemokines are known to attract immunosuppressive cell populations that shield tumor cells from the host's immune response. 11,12 Chemotherapy and irradiation of tumors can enhance the migration of T cells into tumors, among others by altering the chemokine profile of the tumor environment. The induction of apoptosis and necrosis in tumor cells by chemotherapy and irradiation generates an inflammatory reaction, which promotes the recruitment of T cells into the tumor. [13][14][15] However, a limitation is the lack of specificity and the high toxicity of these therapies. 16 To circumvent these obstacles and to enhance specificity, we aimed to genetically modify CTL ex vivo prior to ACT to improve their entry into the tumor. Determinants of T cell infiltration into tumors include adhesion molecules that enable lymphocytes to attach to and pass the endothelial barrier of blood vessels 2,17,18 and chemokine gradients, sensed by receptors expressed on CTL, that attract T cells chemotactically toward tumors. 19 The endothelial integrin ligand intercellular adhesion molecule 1 (ICAM-1) and its receptor, lymphocyte function-associated antigen 1 (LFA-1), are mandatory for the process of extravasation. 20 Moreover, the interaction of LFA-1 on T cells with ICAM-1 on antigen-presenting cells (APC) is a prerequisite for APC-mediated T cell activation. 21 The affinity of integrin receptors can be regulated by activation of chemokine receptors.
CCR7, for example, activates LFA-1 through a process known as inside-out signaling: binding of CCR7 by its ligand CCL21 changes the conformation of LFA-1, and its affinity for ICAM-1 is strongly increased. 22 The chemokine CCL22 is expressed in many tumors and mediates the recruitment of Treg into the tumor tissue. 11,23 The corresponding chemokine receptor CCR4 is highly expressed by Treg, whereas CTL lack CCR4 expression. 24 We hypothesized that a strategy increasing the migration of CTL into the tumor could improve the therapeutic efficacy of ACT. In this context, CCR4 may be a promising candidate to increase CTL tumor infiltration and potentially to enhance the antitumor effects of CTL by increasing the affinity of LFA-1 for ICAM-1. In this study, we show that the transduction of CCR4 into CTL enhances the LFA-1-mediated binding to DCs and increases the activation of CTL. We demonstrate that adoptively transferred CTL overexpressing CCR4 accumulate in pancreatic cancer and induce increased antitumor immune responses. We also show CCL22 expression in patient pancreatic cancer specimens as evidence that T cell transduction with CCR4 may warrant further investigation for the treatment of human pancreatic cancer.

CCL22 is overexpressed in experimental tumors of pancreatic cancer cells
We aimed to identify chemokines with strong intratumoral expression but without expression of their corresponding chemokine receptors on CTL, to explore unique chemoattractant stimuli for these cells. We hypothesized that the de novo expression of such chemokine receptors in CTL prior to adoptive transfer could increase the capability of these chemokines to attract CTL into the tumor and thus improve the therapeutic efficacy of ACT. In order to identify appropriate chemokines, we screened established, subcutaneously induced murine Panc02-OVA tumors for C-C chemokine expression by real-time PCR (Fig. 1A). The strongest expression was found for the chemokines CCL2, CCL6, CCL7 and CCL22 (Fig. 1A). The CCL22-specific receptor CCR4 is not expressed on CTL. In contrast, CCR4 is highly expressed on Treg and guides these cells into the tumor tissue. 11 Thus, the de novo expression of CCR4 in CTL could be a promising approach to increase tumor-directed migration of CTL in ACT. To validate the potential of CCL22 to selectively attract CCR4-expressing cells into the tumor tissue, we quantified the expression of CCL22 at the protein level in tumors and other organs of Panc02-OVA tumor-bearing mice by ELISA. Expression of CCL22 was strongest in the tumor and the peripheral lymph nodes (Fig. 1B), suggesting that CCR4-mediated migration of T cells would be preferentially directed to these sites. In these tumors, we could identify CD11c-positive immune cells as the main source of CCL22 production (Fig. S1). For the second ligand of CCR4, CCL17, only low concentrations were detected in the same tissues (Fig. S2). Normal murine pancreas did not express detectable levels of either chemokine. We next investigated the expression of CCR4 on T cells in tumor-bearing mice. Cell populations from tumor, peripheral lymph nodes, spleen, lung and blood of Panc02-OVA tumor-bearing mice were analyzed for CCR4 expression on non-T cells (CD3−), CTL (CD3+CD8+), Teff (CD3+CD4+CD25−) and Treg (CD3+CD4+CD25+) (Fig. 1C). In all analyzed compartments, CCR4 was preferentially expressed on Treg (Fig. 1C). These experiments identify the CCL22-CCR4 axis as a potential target to improve CTL migration into Panc02-OVA tumors.
CCR4-transduced CTL specifically migrate toward CCL22 and effectively kill tumor cells in vitro
To test the ability of CCR4 to promote CTL migration, we transduced OVA-specific T cells from OT-1 transgenic mice with CCR4-GFP or with a non-functional mutant of CCR4 (CCR4del-GFP) (Fig. S3). In the trans-well assay, CCR4-GFP but not CCR4del-GFP expression mediated specific and dose-dependent migration of the transduced T cells toward CCL22 (Fig. 2A and 2B). Specificity of migration was confirmed by enrichment of GFP-expressing cells among CCR4-GFP- but not CCR4del-GFP-transduced OT-1 T cells (Fig. 2B). Dose-dependent migration and GFP enrichment of CCR4-GFP cells were also observed toward CCL17 (Fig. S4). CCL22 neutralization by antibody completely abrogated migration (Fig. 2A and 2B). To exclude that the overexpression of CCR4 alters the effector function of T cells, migration toward and cytotoxicity against Panc02-OVA-CCL22dox cells, a tumor cell line with doxycycline (Dox)-inducible expression of CCL22, were analyzed. We could show that tumor cell-derived CCL22 strongly promoted the migration of CCR4-GFP- but not CCR4del-GFP-transduced OT-1 T cells (Fig. 2C). By analyzing the cytotoxicity of the migrated cells, we could further show that CCR4-GFP-transduced CTL efficiently lysed Panc02-OVA-CCL22dox tumor cells. CCL22 blockade abrogated both migration and subsequent tumor cell lysis (Fig. 2C). Thus, transduction of CTL with CCR4 strongly enhances migration toward CCL22-expressing cells and promotes tumor cell lysis.

CCR4 enhances ICAM-1-dependent T cell activation
Substantial amounts of CCL22 are produced by mature DC in vivo. 25 The process of CTL priming against tumor antigen requires the interaction of DC with the corresponding T cells. To investigate whether CCR4 expression in CTL influences DC-CTL interactions and, in consequence, CTL activation, co-cultures of both cell types were imaged in vitro and analyzed for T cell activation. CCR4- and CCR4del-transduced OT-1 T cells were mixed at a 1:1 ratio and co-cultured with OVA-primed DC derived from either wild-type or CCL22-deficient mice. After 6 h, DC-CTL clusters were analyzed by confocal microscopy for the ratio of CCR4 to CCR4del cells within the clusters (Fig. 3A, top panels), and the CCL22 concentration in the co-culture supernatant was measured by ELISA (Fig. S5). Interestingly, clusters with CCL22-expressing DC derived from wild-type mice contained almost twice as many CCR4-expressing CTL as CCR4del-expressing CTL (Fig. 3A, lower left panel). In contrast, equal amounts of CCR4- and CCR4del-transduced CTL clustered around DC derived from CCL22-deficient mice. These findings indicate that DC-derived CCL22 induces CCR4-mediated cell contacts between DC and CTL. An important factor in DC-T cell aggregation is the interaction of LFA-1 on T cells with ICAM-1 on DC. 21 As chemokines can affect the LFA-1-ICAM-1 interaction, 22 we aimed to test whether this ligand-receptor pair mediates the enhanced clustering of CCR4-expressing CTL. Indeed, blocking of ICAM-1 completely abrogated the preferential accumulation of CCR4-expressing CTL (Fig. 3A, lower left panel). In addition, ICAM-1 blockade resulted in a significant reduction of cluster size in all conditions, irrespective of CCL22 expression (Fig. 3A, lower right panel). To elucidate whether CCL22 binding to CCR4 on T cells indeed enhances T cell LFA-1 affinity for ICAM-1, we analyzed the binding of recombinant ICAM-1 to CCL22-stimulated T cells.
Indeed, in the presence of CCL22, 2-fold more CCR4-transduced T cells bound recombinant ICAM-1 than in the absence of CCL22, whereas no increase in ICAM-1 binding was observed on CCR4del-transduced (Fig. 3B) or on untransduced GFP-negative T cells (Fig. S6A). Binding of ICAM-1 to CCL22-stimulated, CCR4-transduced T cells was LFA-1-specific, as preincubation with an LFA-1 blocking antibody completely abrogated ICAM-1 binding (Fig. S6B). We next tested the adhesion of T cells to immobilized (plate-bound) ICAM-1. CCL22 pretreatment significantly increased the adhesion of CCR4-transduced CTL to ICAM-1, while the binding of CCR4del-transduced cells was not affected by CCL22 (Fig. 3C). These results suggest that CCL22-CCR4 interactions indeed increase ICAM-1 to LFA-1 binding and thus enhance the DC-T cell interaction. To test the functional consequence of the strengthened interaction of CCR4-transduced T cells with DC, we analyzed the activation of CCR4- and CCR4del-transduced CTL by DC. In the presence of CCL22, the recognition of OVA presented on DC by CCR4-transduced OT-1 CTL was markedly increased compared to CCR4del-transduced OT-1 CTL, as measured by IL-2 and IFN-γ release (Fig. 3D). Again, the addition of an ICAM-1 blocking antibody abrogated the CCL22-induced increase of T cell activation (Fig. 3D). These results suggest that the binding of ICAM-1 to LFA-1 contributes to the CCL22-induced enhancement of the interaction of CCR4- over CCR4del-transduced CTL with DC.

CCR4-transduced CTL enhance the efficacy of adoptive T cell transfer in a subcutaneous Panc02 murine tumor model
To examine whether CCR4 expression can increase the therapeutic efficacy of ACT, we made use of the syngeneic Panc02-OVA tumor, which expresses CCL22 and is known to be largely resistant to ACT. We mixed CD45.1+ CCR4-GFP-transduced OT-1 T cells and CD90.1+ CCR4del-GFP-transduced OT-1 T cells at a 1:1 ratio and adoptively transferred these cells into Panc02-OVA tumor-bearing CD45.2+CD90.2+ mice. One week after transfer, the GFP distribution among all transferred marker cells was analyzed in the spleen, the peripheral lymph nodes and the tumors by flow cytometry. Remarkably, CCR4-transduced T cells specifically enriched over CCR4del-transduced T cells in the tumor tissue but not in the spleen or the total peripheral lymph nodes (Fig. 4A). When we analyzed lymph node sites individually, we found a slight enrichment in the ipsilateral axillary but not in the ipsilateral or contralateral inguinal lymph nodes (Fig. S7). Next, we treated mice bearing established Panc02-OVA tumors twice, at days 6 and 12, with either GFP-, CCR4del- or CCR4-transduced OT-1 T cells (by i.v. injection). Treatment with CCR4-transduced T cells resulted in inhibition of tumor growth and cured four out of eight mice, compared to one out of eight mice (p < 0.05) in the control groups treated with GFP- or CCR4del-transduced OT-1 T cells (Fig. 4B and C). Tumor-free mice remained cured for the duration of the observation period (up to 70 d) and were protected from re-challenge with a lethal dose of Panc02-OVA tumor cells (Fig. 4D), suggesting established immunity against the Panc02-OVA cells and long-term persistence of the transferred T cells. These results suggest that CCR4 expression increases the migration of adoptively transferred T cells preferentially into the tumor and thereby enhances their therapeutic efficacy.
CCL22 is expressed by human pancreatic cancer and human CCR4-transduced CTL migrate toward CCL22
To examine whether CCL22 may be a promising target for improving ACT against human malignancies, we analyzed CCL22 expression in human pancreatic adenocarcinoma, the entity recapitulated in the murine model used so far. By immunohistochemistry, CCL22 was expressed in all of the 15 analyzed pancreatic cancer samples. It was expressed in cells that corresponded in size, shape and localization to infiltrating leukocytes, but not to cancer cells (Fig. 5A). To test whether the impact of CCR4 transduction in murine T cells would also translate to human T cells, we retrovirally transduced primary T cells obtained from human peripheral blood mononuclear cells (PBMC) with human CCR4-GFP or GFP alone (Fig. S8). CCR4-GFP but not GFP transduction increased the migration of transduced primary T cells toward CCL22 (Fig. 5B, left panel). Among the migrated cells, GFP-positive cells preferentially migrated upon CCR4-GFP transduction, but not in the control condition, indicating a specific chemotactic effect mediated through CCR4 (Fig. 5B, right panel). In summary, we show here that CCR4 transduction induces specific migration of primary human T cells toward CCL22, a chemokine that is expressed in human pancreatic cancers. Thus, in analogy to our findings in the murine tumor model of pancreatic cancer cells, CCR4 transduction of human T cells may be capable of improving adoptive T cell therapy in patients with pancreatic cancer.

Discussion
In patients suffering from hematological malignancies, ACT is a powerful treatment modality even for refractory disease. 26,27 However, a major limitation for the use of ACT in the treatment of solid tumors is the impaired access of immune cells to the tumor tissue, resulting in limited efficacy. 28 Strategies to improve tissue infiltration by adoptively transferred T cells, especially in tumor entities such as pancreatic ductal adenocarcinoma, which features a dense and extended stroma, are critical for ACT success. 2 We could recently show that transduction of CTL with a marker antigen can enable bispecific antibodies to specifically engage these T cells with the tumor cell and enhance ACT efficacy. 29 Similarly, using a novel PD1-CD28 fusion receptor, we could demonstrate that T cells can be rendered resistant to PD-L1-driven immune suppression. 30 In these studies, while we initially hypothesized an enhancement of T cell infiltration, we found few infiltrating T cells in the tumor. Furthermore, after an initial response, the tumors relapsed, suggesting that more extensive T cell infiltration is required for effective and persistent ACT effects. Treg, in contrast to CTL, can be found in large numbers in experimental and human tumors. 11 The main mechanism for the attraction of immunosuppressive cells and the relative repulsion of CTL from the tumor is the chemokine profile present in the tumor micromilieu. 19 In the preclinical tumor model studied here, we could identify the CCL22-CCR4 axis as central for Treg tumor infiltration, as has been suggested previously for other diseases and models. 11,31 We reasoned that we could target this axis therapeutically to enhance ACT efficacy. We demonstrate a strong therapeutic impact of T cell transduction with CCR4 in a syngeneic tumor model, accompanied by the accumulation of CCR4-transduced OT-1 T cells in the tumor.
Neither CCL22 nor CCL17 is expressed by the pancreatic cancer cells themselves; as shown in the present study, they are expressed by the surrounding immune cells, both in mice and in humans. Our results extend previous findings in which CCR4 was used to redirect T cells to tumor cells in xenograft models. 32 The present approach is novel in that previous strategies used chemokine receptors to redirect T cells to the tumor cells directly, 32-34 while none attempted to attract T cells via a chemokine secreted by non-tumor cells in or around the tumor, such as CCL22 in our model. Attracting T cells to the tumor tissue, instead of directly to the tumor cell, may be beneficial, since we could show that CCL22 also strengthens the interaction with APC such as DC and boosts antigen recognition in an ICAM-1-dependent, integrin-mediated manner. The previous studies on chemokine receptor-enhanced recruitment of T cells to tumors were performed in xenograft models in immunodeficient mice. 32,33 In these models, counteracting effects of immunosuppressive cells, such as Treg, are excluded, and the transplanted tumors are the only tissue expressing human chemokines to attract human CTL. In contrast, the results of the present study demonstrate the efficacy of CCR4-transduced CTL for tumor treatment in immunocompetent mice. It is known that the function of integrin receptors can be regulated by certain chemokines and their receptors. Engagement of CCR7 by CCL21 changes the conformation of LFA-1 and thereby increases the affinity of LFA-1 for ICAM-1. 22,35 Since DC express large amounts of ICAM-1, LFA-1-ICAM-1 interactions are part of the immunological synapse formed by T cells and DC. 36 Our results show that CCR4 transduction not only increases the infiltration of adoptively transferred CTL into the tumor but also enhances the ICAM-1- and LFA-1-dependent interaction with antigen-presenting DC, which in turn is a crucial step for the activation of CTL. At this interface, CCR4-transduced T cells seem to outcompete CCR4del-transduced T cells, potentially owing to a limited number of interaction sites for T cells per DC. 37 Thus, in the absence of CCL22 on DC or of CCR4 on T cells, the cluster composition, but not the cluster size, is altered. While our data confirm the previously reported expression of CCL22 in murine pancreatic cancer, 38 little is known about the expression of CCL22 in human pancreatic cancer. To test whether the CCL22-CCR4 axis could, in principle, be targeted in human cancer, we analyzed tissue specimens from 15 patients with pancreatic ductal adenocarcinoma by immunohistochemistry. CCL22-expressing cells were found in all tumor samples. Interestingly, infiltrating immune cells, but not tumor cells, appeared to be responsible for intratumoral CCL22 expression, as has been suggested before in other tumor entities. 39,40 We recently identified CD14+ and CD68+ myeloid cells as the origin of CCL22 secretion at the tumor site in breast cancer patients. 41 However, if our expression data from the Panc02-OVA model hold for human pancreatic cancer, the secreting cells there may rather be CD11c+ myeloid cells. CCL22 is homeostatically expressed by DCs in lymph nodes and other lymphatic tissues. 42,43 Thus, attraction of antigen-specific T cells to tumor-distant sites may be important in the safety assessment of this strategy, especially if the antigen chosen is not tumor selective, as T cells might then become activated outside the tumor tissue.
However, under homeostatic conditions, CCL22 is not relevant for the entry of T cells into the lymph node, which is instead controlled through CCL19 and CCL21. 44 Consistently, we did not find an overall enrichment in lymph nodes but only at distinct anatomical locations at the edge of the tumor site. Our data provide evidence that CCR4-transduced T cells may preferentially drain into tumor-associated lymph nodes, which could reduce the risk of off-site T cell activation through unspecific redirection. In summary, our results indicate that CCR4 transduction of CTL may be a promising new approach for the therapy of patients with a CCL22-expressing tumor microenvironment. Given that arming T cells with CCR4 can only affect T cell activation in an antigen-specific manner, we suggest that equipping T cells with such a navigation system may enhance T cell efficacy without compromising safety. We also suggest that an analysis of the chemokine expression profile of human cancers may help to identify entities amenable to a disease-specific, chemokine-based targeting strategy to enhance the efficacy of ACT.

Cell lines

The murine pancreatic cancer cell line Panc02 and its ovalbumin-transfected counterpart Panc02-OVA have been described previously. 45 The CCL22-expressing Panc02-OVA-CCL22dox and MC38-OVA-CCL22dox tumor cell lines were generated by lentiviral transduction with a construct containing a Dox-inducible CCL22 expression cassette; the transduction protocol has been described in detail. 46 The packaging cell line Plat-E was a kind gift of W. Uckert (Berlin, Germany), and HEK 293T cells were obtained from ATCC (Manassas, USA). The Jurkat T cell line was purchased from Life Technologies (USA). All cells were cultured in DMEM with 10% fetal bovine serum (FBS, Life Technologies), 1% penicillin/streptomycin (PS), and 1% L-glutamine (all from PAA, Germany). Puromycin (1 µg/mL) and blasticidin (10 µg/mL; both Sigma, Germany) were added to the Plat-E medium. Primary murine and human T cells were cultured in RPMI 1640 with 10% FBS, 1% PS, 1% L-glutamine, 1% sodium pyruvate, 1 mM HEPES, and 50 µM β-mercaptoethanol (PAA, Germany, and Sigma, Germany).

Animal experiments

C57BL/6 mice transgenic for a T cell receptor specific for ovalbumin (OT-1) were purchased from The Jackson Laboratory, USA (stock number 003831). OT-1 mice were crossed with CD45.1 congenic marker mice (obtained from The Jackson Laboratory, stock number 002014) or with CD90.1 congenic marker mice (a kind gift from R. Obst, Munich, Germany) to generate CD45.1-OT-1 and CD90.1-OT-1 mice, respectively. CCL22 knockout mice were obtained from KOMP, USA. For animal experiments, C57BL/6 mice were purchased from Janvier, France. Tumors were induced by subcutaneous injection of 2 × 10⁶ tumor cells, and mice were treated by i.v. injection of T cells as indicated. For re-challenge experiments, mice were injected subcutaneously with 0.5 × 10⁶ tumor cells in the flank opposite to the initial tumor. All experiments were randomized and blinded. Tumor growth and the condition of the mice were monitored every other day. All animal experiments were approved by the local regulatory agency (Regierung von Oberbayern).
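The animal experiments above are stated to be randomized and blinded. As one way this can be done in practice, the sketch below shuffles mouse IDs and deals them round-robin into equally sized treatment arms; the arm labels mirror the experiment above, but the allocation code itself is an illustrative assumption, not the authors' documented procedure (blinding would additionally require that the scorer see only coded labels).

```python
import random

def randomize_mice(mouse_ids, groups, seed=None):
    """Shuffle mouse IDs and deal them round-robin into treatment groups,
    yielding equally sized arms for a between-group comparison."""
    rng = random.Random(seed)
    ids = list(mouse_ids)
    rng.shuffle(ids)
    allocation = {g: [] for g in groups}
    for i, mouse in enumerate(ids):
        allocation[groups[i % len(groups)]].append(mouse)
    return allocation

# 24 tumor-bearing mice into the three T cell treatment arms used above
arms = ["GFP", "CCR4del-GFP", "CCR4-GFP"]
print(randomize_mice(range(1, 25), arms, seed=42))
```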
Generation of new fusion constructs

All constructs were generated by overlap extension PCR and recombinant expression cloning into the retroviral pMP71 vector, as follows: CCR4-GFP consists of murine CCR4 (UniProt entry P51680, amino acids 1-360) linked to GFP; CCR4del-GFP consists of murine CCR4 amino acids 1-313 linked to GFP; and human CCR4-GFP consists of human CCR4 (UniProt entry P51679, amino acids 1-360) linked to GFP.

Murine T cell transduction

The retroviral vector pMP71 (kindly provided by C. Baum, Hannover) was used for transfection of the ecotropic packaging cell line Plat-E. The transduction protocols have been described in detail. 29 In brief, for primary murine T cell transduction, Plat-E cells were transfected, and the produced retrovirus was used to transduce T cells. T cells were stimulated first by the addition of anti-CD3 antibody, anti-CD28 antibody (eBioscience, clones 145-2C11 and 37.51, respectively), and IL-2, and subsequently by the addition of anti-CD3/anti-CD28 beads (Life Technologies) and human IL-15 (Peprotech, Germany).

Human T cell transduction

CCR4-GFP was cloned into the retroviral vector pMP71, and pMP71-CCR4-GFP or pMP71-GFP was used for the transduction of human T cells. The transduction protocol has been described in detail. 29 In brief, for human T cell transduction, HEK 293T cells were triple-transfected with the respective retroviral vector together with the plasmids pcDNA3.1-MLVg/p and pALF10A1 (kindly provided by W. Uckert, Berlin, Germany). The produced retrovirus was used to transduce T cells. T cells were stimulated using anti-CD3 and anti-CD28 antibodies (clones HIT3a and CD28.2, eBioscience) and IL-2 (Peprotech).

Migration and killing assays

Cell migration was evaluated using transwell plates (Corning) as previously described. 47 In brief, 1 × 10⁶ CCR4-GFP- or CCR4del-GFP-transduced CTL or Jurkat cells were placed onto a 5 µm pore filter in the upper chamber of a transwell plate, with the lower chamber containing different concentrations of CCL22 or CCL17 (both Peprotech). To antagonize migration through neutralization of CCL22, 10 ng/mL anti-CCL22 antibody (clone 158132, R&D) was added to the lower chamber. After 3 h of incubation at 37°C, the migrated cells in the lower chamber were analyzed by flow cytometry. For migration assays combined with killing assays, 5 × 10⁵ CCR4-GFP- or CCR4del-GFP-transduced OT-1 CTL were placed in the upper chamber of a transwell plate containing 1 × 10⁵ Panc02-OVA-CCL22dox or MC38-OVA-CCL22dox tumor cells with Dox-inducible CCL22 expression in the lower chamber, in the presence or absence of 2 µg/mL Dox (Sigma-Aldrich) and 10 ng/mL anti-CCL22 antibody. After 3 h of incubation at 37°C, the upper chamber was removed. CTL-mediated lysis of the tumor cells in the lower chamber was measured by LDH release (Promega) after another 6 h of incubation at 37°C. The percentage of cytotoxicity was normalized as follows:

$$\%\,\text{lysis} = \frac{\text{release in the target condition} - \text{spontaneous release of tumor and T cells}}{\text{maximal release of tumor cells} - \text{spontaneous release of tumor and T cells}} \times 100.$$

The concentration of CCL22 in the supernatant of the tumor cells was measured by ELISA (R&D Systems, Minneapolis, MN, USA).

Cytokine assays of tissue lysates

Tissue homogenates were resuspended in lysis buffer (Bio-Rad Laboratories, Hercules, CA, USA) and centrifuged. Total protein concentration was measured by Bradford assay (Bio-Rad Laboratories).
All samples were diluted to a protein concentration of 10 mg/mL, and CCL17 and CCL22 concentrations were measured by ELISA (R&D Systems). The final cytokine concentration was calculated as pg of cytokine per mg of protein in the respective lysate.

RNA isolation and quantitative real-time PCR analysis

Total RNA was extracted from subcutaneous tumors using the High Pure RNA Isolation Kit (Qiagen, Valencia, CA) according to the manufacturer's instructions. One µg of RNA was converted to cDNA using the RevertAid First Strand cDNA Synthesis Kit (Fermentas, St. Leon-Rot, Germany). Quantitative real-time PCR amplification was performed with the LightCycler TaqMan Master (Roche Diagnostics, Mannheim, Germany) on a LightCycler 2.0 instrument (Roche Diagnostics) together with the Universal ProbeLibrary system (Roche Diagnostics). Relative gene expression is shown as the ratio of the expression level of the gene of interest to that of hypoxanthine phosphoribosyltransferase (HPRT) RNA. Quantitative real-time PCR primers were obtained from Metabion (Planegg, Germany; for primer sequences see Table S1).

ICAM-1 adhesion and flow cytometry assay

For ICAM-1 adhesion assays, 10⁷ T cells transduced with CCR4 or CCR4del were labeled with 10 µg/mL calcein (Life Technologies). Flat-bottom 96-well plates were coated with 100 µg/mL ICAM-1 (R&D) for 1 h and blocked with 2% BSA for 30 min. After washing with PBS, T cells were plated in 200 µL PBS at a concentration of 2 × 10⁶ cells per mL and incubated for 1 h at 37°C. After washing three times with 200 µL PBS, the remaining cells were lysed with 200 µL of 10% Triton X-100 and centrifuged at 800 × g for 5 min. One hundred µL of the supernatants were transferred into new plates, and fluorescence was measured using an ELISA reader. For ICAM-1 binding assays, 0.1 × 10⁶ CCR4-GFP- or CCR4del-GFP-transduced T cells were incubated with 200 ng/mL CCL22 (Peprotech) in the presence or absence of 10 µg/mL anti-LFA-1 antibody (clone H155-78, BioLegend) in a total volume of 50 µL cell adhesion buffer (PBS containing 10% FCS, 1 mM MgCl₂, and 1 mM CaCl₂). After adding 50 µL of recombinant mouse ICAM-1/human Fc chimera (10 µg/mL, R&D) and incubating for 15 min at room temperature, cells were washed with cell adhesion buffer and fixed with 1% PFA at 4°C for 30 min. Subsequently, cells were stained for 30 min at 4°C with an APC-linked anti-human Fc antibody (clone HP6017, BioLegend) and analyzed by flow cytometry.

Patient samples and tissue microarray (TMA) construction

Formalin-fixed, paraffin-embedded tumor tissue of 15 patients with confirmed pancreatic ductal adenocarcinoma was retrieved from the archives of the Institute of Pathology of the Ludwig-Maximilians-Universität München. A TMA consisting of two cores of histologically confirmed PDAC tumor tissue per patient, each 1.5 mm in diameter, was constructed using a semiautomatic tissue arrayer (Beecher Instruments). Clinicopathological patient data were retrieved from the original pathology reports. This retrospective analysis was carried out according to the recommendations of the local ethics committee of the Medical Faculty of the Ludwig-Maximilians-Universität München.

CCL22 immunohistochemistry and microscopy

Immunohistochemical staining was performed on 2-3 µm thick TMA sections after deparaffinization and rehydration using a rabbit anti-human CCL22 antibody (1:350; Peprotech) and an alkaline phosphatase-conjugated secondary antibody for detection, including appropriate positive and negative control tissues.
In each tissue core, two peritumoral regions containing high numbers of CCL22+ cells were examined, and the number of CCL22+ cells per high-power field (HPF) was determined in each region. The average number of CCL22+ cells per HPF for each case (two cores × two regions) was then calculated by dividing the total by four. Representative microscopic images were acquired at 400× magnification using a camera-equipped Zeiss Axioskop microscope (Zeiss) and Zeiss AxioVision imaging software.

Statistical analysis

All data are presented as mean ± SEM, and the statistical significance of differences was determined by the two-tailed Student's t-test. Differences in tumor size were analyzed using two-way ANOVA with Bonferroni post-test correction. Differences in survival were analyzed by the log-rank (Mantel-Cox) test. Statistical analyses were performed using GraphPad Prism 6 (GraphPad Software). p values < 0.05 were considered significant.

Disclosure of potential conflicts of interest

No potential conflicts of interest were disclosed.

Acknowledgments

SE and SK are members of the German Center for Lung Research. SG received a stipend from the German Cancer Aid. Parts of this work were performed for the doctoral theses of SG and MC at the Ludwig-Maximilians-Universität München.

Funding

This work was supported by the international doctoral program "i-Target: Immunotargeting of cancer" funded by the Elite Network of Bavaria (to SK and SE), the Melanoma Research Alliance (grant number N269626 to SK and SE), the Wilhelm Sander-Stiftung (grant number 2014.018.1 to SE and SK), the Graduiertenkolleg 1202 "Oligonucleotides in cell biology and therapy" funded by the Deutsche Forschungsgemeinschaft (to SE, DA and SK), the German Cancer Aid (to MR, DA and SK), the Else-Kröner-Fresenius-Stiftung (grant number 2014_A204 to SK), the Marie-Skłodowska-Curie innovative training network "IMMUTRAIN: training network for the immunotherapy of cancer," funded under the H2020 program of the European Union (to SE and SK), and by LMU Munich's Institutional Strategy LMUexcellent within the framework of the German Excellence Initiative (to SE and SK).
Resistance to insect growth regulators and age-stage, two-sex life table in Musca domestica from different dairy facilities

Among vectorial insect pests, the house fly (Musca domestica L., Diptera: Muscidae) is a ubiquitous livestock pest with the ability to develop resistance and adapt to diverse climates. Successful management of the house fly in various locations requires information about its resistance development and life table features. The status of insect growth regulator (IGR) resistance and the age-stage, two-sex life table features of house fly populations from five geographical locations in Riyadh, Saudi Arabia (Dirab, Al-Masanie, Al-Washlah, Al-Uraija, and Al-Muzahmiya) were therefore investigated. Compared with the susceptible strain, the resistance levels of the five house fly populations ranged from 3.77- to 8.03-fold for methoxyfenozide, 5.50- to 29.75-fold for pyriproxyfen, 0.59- to 2.91-fold for cyromazine, 9.33- to 28.67-fold for diflubenzuron, and 1.63- to 8.25-fold for triflumuron. Analysis of life history parameters (survival rate, larval duration, pupal duration, pre-adult female duration, pre-adult male duration, adult and total pre-oviposition periods, male longevity, oviposition period, female ratio, and fecundity per female) revealed significant variations among the field populations. Additionally, demographic features (generation time, the finite and intrinsic rates of increase, doubling time, and net reproductive rate) varied significantly among the field populations. These results will be helpful in planning house fly management in geographically isolated dairies in Saudi Arabia.

Introduction

Musca domestica L. (Diptera: Muscidae) is a ubiquitous insect pest of livestock and humans [1,2] capable of adapting to a wide range of climates. This insect is commonly known as the house fly and serves as a carrier for many pathogens of public health and veterinary importance [3,4]. Attempts to control the house fly with a wide range of insecticides have failed because this insect rapidly develops resistance to these chemicals [5-11]. This widespread development of insecticide resistance has necessitated the employment of integrated vector management strategies. Various control measures are available to manage the house fly, for example, cultural practices, chemicals, and biological agents including fungal/bacterial pathogens and parasitoids/predators.

The sampled dairies were located, on average, more than 30 km apart. At these dairies, cultural practices (sanitation) and about 8-10 insecticide sprays per season, from the pyrethroid and organophosphate classes, are used for the control of dairy pests (personal communication).

Laboratory rearing of house fly populations

The rearing protocol of the house fly was adopted from Abbas et al. [41] with some modifications. After collection, each population was transferred to a separate transparent cage (40 × 40 cm) in the laboratory and reared for one generation to obtain uniform insects. An adult diet (sugar and powdered milk at a 1:1 ratio by weight in g) and water-soaked cotton wicks placed in 9 cm plastic petri dishes were provided as adult food. Adults were provided with fresh food every two days. Cotton wicks were moistened daily and replaced at two-day intervals.
A paste of wheat bran, yeast, sugar, and milk at a ratio of 20:5:1.5:1.5 (g), respectively, mixed with 120 mL of water, was prepared in plastic cups (500 mL) and placed in the adult cages for egg laying after two days of rearing in the laboratory. Plastic cups with eggs were removed from the adult cages daily and covered with a muslin cloth to prevent larval escape. When the larvae had consumed the diet in the plastic cups, they were transferred into glass beakers with larval medium, where they pupated. The emerged adults were transferred to cages to start the next generation. All populations were maintained at 27±2°C, 65±5% relative humidity, and a 12:12 h (L:D) photoperiod in the laboratory.

Larval bioassays

The toxicity of IGR insecticides to the larvae of the different dairy populations of M. domestica was determined through a diet incorporation bioassay following the method of Abbas et al. [20]. Different concentrations of each tested IGR were incorporated into the larval medium (consisting of wheat bran, yeast, milk powder, and sugar at a ratio of 20:5:1.5:1.5 g, respectively). Five concentrations (causing mortality from >0% to <100%) of each IGR, with four replicates each, were used in each bioassay: 10 second-instar larvae per replicate, 40 larvae per concentration, and 200 larvae in total per bioassay. In the control treatment, 40 larvae in four replicates (10 larvae per replicate) were provided with larval medium without any IGR. All bioassays were conducted under the constant conditions mentioned above. Data were recorded at adult emergence, 3 weeks after the start of the bioassay. Larvae that failed to transform into adults were considered dead.

Life table construction

To construct the life table of the house fly, 100 newly laid (≤24 h) eggs were randomly collected on a single day from different egg batches of each mass population and placed in plastic cups (500 mL) supplied with the artificial larval diet mentioned above. Each egg was considered a replicate for each population [42]. The cups were covered with muslin cloth to prevent larval escape. Each population was reared separately. Hatched larvae were reared in the cups on the provided diet until pupation, and their developmental periods were recorded. Freshly emerged adults (within 24 h) were sexed, and one male and one female were placed together in a plastic jar (15 × 11 cm). Larval medium was provided daily in 9 cm petri dishes for oviposition. Eggs were counted daily until the death of the females, and egg hatchability was recorded. The female ratio, adult longevity, and oviposition period were also recorded. The experiment was conducted under the aforementioned laboratory conditions. For each strain, the following parameters were calculated according to Chi and Su [43] and Tuan et al. [27].
Briefly, the age-specific survivorship ($l_x$) was determined as

$$l_x = \sum_{j=1}^{k} s_{xj},$$

where $s_{xj}$ is the probability that a newly laid egg survives to age $x$ and stage $j$, and $k$ is the number of stages. The age-specific fecundity ($m_x$) was determined as

$$m_x = \frac{\sum_{j=1}^{k} s_{xj} f_{xj}}{\sum_{j=1}^{k} s_{xj}}.$$

The net reproductive rate ($R_0$) was calculated as

$$R_0 = \sum_{x=0}^{\infty} l_x m_x.$$

The intrinsic rate of increase ($r$) was assessed by the Lotka-Euler equation with age indexed from zero:

$$\sum_{x=0}^{\infty} e^{-r(x+1)} l_x m_x = 1.$$

The finite rate of increase was determined as $\lambda = e^{r}$, and the generation time as $T = \ln(R_0)/r$. The life expectancy ($e_{xj}$) was calculated as

$$e_{xj} = \sum_{i=x}^{\infty} \sum_{y=j}^{k} s'_{iy},$$

where $s'_{iy}$ is the probability that an individual of age $x$ and stage $j$ will survive to age $i$ and stage $y$. The reproductive value ($v_{xj}$) was determined as

$$v_{xj} = \frac{e^{r(x+1)}}{s_{xj}} \sum_{i=x}^{\infty} e^{-r(i+1)} \sum_{y=j}^{k} s'_{iy} f_{iy}.$$

The gross reproductive rate (GRR) was calculated as

$$GRR = \sum_{x=\alpha}^{\beta} m_x,$$

where $x$ runs from α (age at first reproduction) to β (age at last reproduction). A numerical sketch of these demographic calculations is given after the first results below.

Data analyses

The bioassay data were analyzed using the POLO Plus software [44] to determine the median lethal concentration (LC50) values. In each bioassay, the mortality rates were corrected for the mortality in the control treatment using Abbott's formula [45], where required. Resistance ratios (RRs) were calculated as the LC50 of the field population divided by the LC50 of the susceptible strain. Resistance levels were scaled as described by Ahmad et al. [46]: RR = 1, susceptible; RR = 2-10, low resistance; RR = 11-30, moderate resistance; RR = 31-100, high resistance; and RR > 100, very high resistance. The life table data were analyzed with the TWOSEX-MSChart program [47], which is based on the age-stage, two-sex life table theory [28,48]. The variances and standard errors (SE) of the life history features were determined by the paired bootstrap test with 100,000 replicates at P ≤ 0.05 using TWOSEX-MSChart [47]. The parameters l_x, s_xj, f_x, m_x, l_x m_x, v_xj, and e_xj were graphed with SigmaPlot 11.0.

Resistance of M. domestica larvae to IGRs

The toxicity of the tested IGRs was not significantly different (overlapping 95% fiducial limits) among the field populations. Low resistance against methoxyfenozide (3.77-8.03-fold) and triflumuron (1.63-8.25-fold) was detected in all five populations of M. domestica in comparison with the susceptible strain. The populations collected from Dirab, Al-Masanie, and Al-Uraija had moderate resistance (10.75-29.75-fold) to pyriproxyfen, while the Al-Washlah and Al-Muzahmiya populations had low resistance levels (5.50-8.25-fold). Susceptibility to cyromazine was found in all tested populations (0.59-1.32-fold), except the Al-Masanie population, which had low resistance (2.91-fold). The populations collected from Dirab, Al-Washlah, Al-Masanie, and Al-Muzahmiya had moderate resistance (13.67-28.67-fold) to diflubenzuron, while the Al-Uraija population had a low resistance level (9.33-fold), compared with the susceptible strain (Table 1).

Life history parameters of house flies from dairies

The larval durations of all tested populations were significantly different from each other (P < 0.05). The pupal duration of the Al-Uraija population was significantly longer than those of the Dirab, Al-Masanie, Al-Washlah, and Al-Muzahmiya populations (P < 0.05). The egg-to-adult duration of male flies from the Al-Masanie population was significantly longer than that of flies from all other areas except Dirab (P < 0.05). Similarly, the egg-to-adult duration of female flies from the Al-Masanie population was significantly longer than that of flies from all other areas (P < 0.05). The total male longevity in the Al-Uraija and Al-Muzahmiya populations was significantly shorter than in the Dirab, Al-Masanie, and Al-Washlah populations (P < 0.05).
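As the numerical sketch referenced in the methods above: given pooled age-specific survivorship (l_x) and fecundity (m_x) schedules, R₀ follows by summation, r by numerically solving the Lotka-Euler equation, and λ, T, and the doubling time (DT = ln 2 / r) follow from r. The schedules below are invented for illustration and are not data from this study; in practice, TWOSEX-MSChart performs these calculations together with the bootstrap estimation of standard errors.

```python
import math

def demographics(lx, mx):
    """Compute R0, r (Lotka-Euler, age indexed from zero), lambda, T, and DT
    from age-specific survivorship (lx) and fecundity (mx) schedules."""
    R0 = sum(l * m for l, m in zip(lx, mx))

    def euler(r):  # sum over ages of exp(-r(x+1)) * lx * mx, minus 1
        return sum(math.exp(-r * (x + 1)) * l * m
                   for x, (l, m) in enumerate(zip(lx, mx))) - 1.0

    lo, hi = -1.0, 1.0          # bracket for r; euler() is decreasing in r
    for _ in range(200):        # bisection
        mid = (lo + hi) / 2
        if euler(mid) > 0:
            lo = mid
        else:
            hi = mid
    r = (lo + hi) / 2
    lam = math.exp(r)           # finite rate of increase
    T = math.log(R0) / r        # mean generation time
    DT = math.log(2) / r        # population doubling time
    return R0, r, lam, T, DT

# Illustrative daily schedules (ages 0-24), not data from this study
lx = [1.0] * 10 + [0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.5, 0.4,
                   0.3, 0.2, 0.1, 0.05, 0.02, 0.0]
mx = [0.0] * 12 + [5, 12, 15, 14, 12, 9, 6, 4, 2, 1, 0.5, 0.0, 0.0]
print(demographics(lx, mx))
```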
The total female longevity in the Al-Washlah, Al-Uraija, and Al-Muzahmiya populations was significantly shorter (P < 0.05) than that of the Al-Masanie population (Table 2). The total pre-oviposition period (TPOP) and adult pre-oviposition period (APOP) of the Al-Masanie and Al-Washlah populations were significantly longer than those of the Dirab, Al-Uraija, and Al-Muzahmiya populations (P < 0.05). The oviposition period was significantly shorter in the Al-Washlah population than in the Al-Muzahmiya population (P < 0.05), but similar to those of the other tested populations. The female ratio in the Al-Uraija population was significantly lower than in the Dirab and Al-Washlah populations (P < 0.05), whereas the reproductive female ratio was significantly lower in the Al-Washlah population than in the other populations (P < 0.05). The fecundity per female was significantly lower in the Al-Washlah population than in the Al-Masanie and Al-Muzahmiya populations (P < 0.05), but did not differ from those of the Dirab and Al-Uraija populations (Table 2).

Demographic life table features of house flies from dairies

The intrinsic rates of increase (r) of the Al-Masanie (0.013) and Al-Washlah (0.12) populations were significantly lower (P < 0.05) than those of the Dirab, Al-Uraija, and Al-Muzahmiya populations (Table 3).

Age-stage-specific survival rates (s_xj) of house flies from dairies

The parameter s_xj refers to the probability that a newly laid egg will survive to age x and develop to stage j (Fig 1). In all populations, the peak value of s_xj for larvae was similar; the highest peak values of s_xj for pupae were observed in the Dirab (0.80) and Al-Washlah (0.70) populations.

Age-specific survivorship (l_x), fecundities (m_x, f_x), and maternity (l_x m_x) of house flies from dairies

The parameters l_x, f_x, m_x, and l_x m_x were determined for all populations (Fig 2). Among the populations, non-significant differences were found in the highest peak values of l_x.

Age-stage-specific life expectancy (e_xj) of house flies from dairies

The parameter e_xj refers to the number of days that an individual of age x and stage j is expected to survive, as shown in Fig 4.

Correlation between IGR resistance ratios and life history features

There were non-significant correlations among the resistance ratios of the tested IGRs in the different house fly populations (Table 4). All tested IGRs had non-significant positive or negative correlations with most life history features (P > 0.05). However, triflumuron had a significant positive correlation with pupal duration (P = 0.04), and methoxyfenozide had significant positive correlations with pre-adult male duration (P = 0.03) and female longevity (P = 0.04; Table 5).

Discussion

In the current study, the five house fly populations collected from dairy facilities showed susceptibility to cyromazine, low resistance to methoxyfenozide and triflumuron, and low-to-moderate resistance to pyriproxyfen and diflubenzuron in comparison with the susceptible strain. However, the toxicity of the tested IGRs was not significantly different among the field populations. Insect populations showing more than tenfold resistance to an insecticide are considered resistant [5,49]. In this study, less than tenfold resistance was detected to methoxyfenozide, cyromazine, and triflumuron in all tested populations, to pyriproxyfen in two populations, and to diflubenzuron in one population.
These populations were therefore considered tolerant rather than resistant. Previously, resistance to these insecticides has been reported in the house fly [20-22], Phenacoccus solenopsis Tinsley (Homoptera: Pseudococcidae) [50], Spodoptera litura (F.) (Lepidoptera: Noctuidae) [51], and Spodoptera frugiperda (J.E. Smith) (Lepidoptera: Noctuidae) [52]. The development of resistance depends on the use of particular insecticides at a facility [5,53]. In Saudi Arabia, spray applications of other insecticide classes are more common than applications of IGRs; the similar toxicities of IGRs against the house fly populations may therefore be due to little or no use of these insecticides at these dairy facilities. To our knowledge, this is the first report on the susceptibility of house fly populations from dairy facilities in Riyadh, Saudi Arabia, to IGRs. Among the five tested IGRs, methoxyfenozide, cyromazine, and triflumuron were the most toxic larvicides against house fly larvae and could therefore be the most promising agents for an integrated vector management system in Saudi Arabia. However, potential cross-resistance from other chemical classes [17] should be considered when designing an IGR rotation pattern, and further studies are required to prolong the effectiveness of these larvicides. Additionally, the use of cultural practices (sanitation) and biological control agents alongside insecticides should be considered as an integrated pest management approach to house fly control [12,13]. The adaptation of an insect pest to an altered environment and to insecticide applications depends on its life table parameters, and evidence describing its population dynamics is crucial in formulating an effective pest management plan. The present study was conducted on five house fly populations from different dairy farms in Riyadh, Saudi Arabia, to explore their life table dynamics. The results revealed prominent variations in larval duration, pupal duration, pre-adult male and female durations, oviposition period, male and female longevity, APOP, TPOP, female ratio, and fecundity per female. The differences in life history features could be attributed to different elevations, latitudes, and environmental factors (for instance, temperature and humidity) favoring the adaptation of the house flies. Similar to the current results, significant variations in life history parameters have been reported among several isolated pest populations [29-31, 54, 55]. For example, the developmental time of P. xylostella populations from higher latitudes was longer than that of populations from lower latitudes [32]. In contrast, Shirai [55] reported no difference in immature developmental time among nine geographically separate P. xylostella populations. The larval duration of Colaphellus bowringi Baly (Coleoptera: Chrysomelidae) increased with increasing latitude [30] but decreased in the geometrid moths Cabera exanthemata (Scopoli), Lomaspilis marginata (L.), Chiasmia clathrata (L.), and Cabera pusaria (L.) (Lepidoptera: Geometridae) [54]. Chen et al. [29] reported a higher net reproductive rate at 24°C than at the other tested temperatures in P. crisonalis. In house fly populations from Pakistan, immature developmental time was shorter and pupal weights were heavier in populations from lower latitudes with hot climates [31].
The aforementioned variations among different insect pests could be due to the varied latitudes and favorable environments responsible for the flexibility of insect pests in their particular habitats [30,56]. The present results showed variations in the demographic parameters (r, λ, R₀, T, and DT) among the five house fly populations. The r and λ of the Al-Masanie and Al-Washlah populations were significantly lower than those of the other tested field populations, and the Al-Muzahmiya population exhibited the highest R₀ of all the field populations. The mean generation time of the Al-Uraija and Al-Muzahmiya populations was significantly shorter than that of the other tested populations, while the doubling time of the Al-Washlah population was longer than those of the Al-Uraija and Al-Muzahmiya populations. The parameters r, λ, and R₀ estimate the growth potential of pests, providing wider insight than individual life history features [2]. Because r and λ depend on the fecundity and development of individuals, differences in these parameters might affect the expansion rates of populations [2,57,58]. The lower rates of increase in some populations in this study could be attributable to lower fecundity per female. In agreement with the present results, significant variations in demographic life table parameters of pests have previously been reported in P. crisonalis [29], M. femurrubrum [33], and P. xylostella [32]. The parameters s_xj, l_x, f_x, m_x, l_x m_x, v_xj, and e_xj are important indicators for evaluating the biological fitness of insect pest populations. The same insect pest under altered environments can have different survival rates, reproductive abilities, and life spans, reflecting specific environmental effects and species-specific parameters [59,60]. The present results for these parameters revealed significant variations among the house fly populations from the five dairy farms. Owing to differing environmental factors (for instance, temperature) and extents of insecticide exposure, significant variations in these parameters have previously been documented in a range of pests, including P. crisonalis [29], Sogatella furcifera (Horvath) (Hemiptera: Delphacidae) [60], Bradysia odoriphaga Yang and Zhang (Diptera: Sciaridae) [42], Phthorimaea operculella (Zeller) (Lepidoptera: Gelechiidae) [61], S. litura [27], and Tetranychus urticae Koch (Acari: Tetranychidae) [62]. The house fly has a flight and dispersal range of 5-7 km [63], whereas the populations in the present study were collected from dairies located, on average, more than 30 km apart. The differences in life history features may therefore reflect distinct populations as well as differences in the extent of house fly management activities undertaken by the owners of specific dairy facilities. The effectiveness and implementation of control measures against insect pests depend on knowledge of the pest's life table attributes in its respective environment [31,32]. In the present study, M. domestica populations collected from the Dirab, Al-Uraija, and Al-Muzahmiya facilities showed faster development and better life table parameters than those from the Al-Masanie and Al-Washlah facilities. Rapid growth of the house fly at these dairy facilities may increase insecticide usage, which may ultimately lead to resistance problems in the future. Therefore, house fly management should be tailored to each specific dairy facility.
Moreover, the non-significant correlations between life history features and the tested IGRs suggest that integrating IGRs with cultural practices at these dairy facilities may suppress house fly growth and the development of resistance. Our results provide useful insights into age-stage, two-sex life table variation among house fly populations from Riyadh, Saudi Arabia, supporting better management in specific dairy facilities.
STUDY OF BUILDING MASS ARRANGEMENTS IN THE TAHFIDZ PRENEUR KAMPOENG QUR'AN ISLAMIC BOARDING SCHOOL AREA

A boarding school (pesantren) is an Islamic educational institution for studying, understanding, deepening, appreciating, and practicing Islamic teachings under a residential (boarding) system. This research aims to examine the environmental and social conditions of the boarding school, which must be a comfortable place for students to study because all of their activities are carried out there. The study uses a qualitative method to gain a deeper understanding of the object under study. The results show the application of the concept of Ecological Architecture to the environment and community at the Tahfidz Preneur Kampoeng Qur'an Cendekia Islamic Boarding School, which can be seen in the use of wood and bamboo as the primary materials of the students' boarding houses, in a building structure that makes maximum use of the land contours, and in the orientation of the buildings. In conclusion, the Tahfidz Preneur Kampoeng Qur'an Cendekia Islamic Boarding School is one of the Islamic boarding schools in the Parompong area of West Bandung designed with Ecological Architecture.

INTRODUCTION

A boarding school (pesantren) is a traditional Islamic educational institution in Indonesia for understanding, appreciating, and practicing the teachings of Islam (tafaqquh fiddin), with an emphasis on Islamic morals as a guide for everyday social life (Mastuhu, 1994). A boarding school is also an education system that implements a residential arrangement. Islamic boarding schools have developed from traditional pesantren, which taught only the Islamic religion and required only mosques and dormitories. Along with the development of the era, however, boarding schools have developed in both curriculum and vision and mission: they no longer teach only the religion of Islam but also include general teaching and learning activities, as in public schools, and they continue to develop with the emergence of modern Islamic boarding schools (Nisa, 2017). With this development, the spatial requirements and designs of Islamic boarding schools have also evolved. Initially focused only on mosque buildings, pesantren now undergo design development and require considerably more space to support the activities of a modern boarding school (Nuraeni, 2012). In addition to spatial design, mass arrangements and architectural formations have also developed. One architectural concept of particular interest is ecological architecture, which guides the arrangement of the building area, the selection of materials, and the determination of building orientation. Islamic boarding schools designed with the concept of ecological architecture are also called eco-boarding schools (Christin, Naila Woro Martini & Bagus Pribadi, 2016). Ecological Architecture itself means harmony between a building and its natural surroundings; these elements work together to produce comfort, security, beauty, and interest (Irwan & Hasanbahri, 2012). The concept of Architectural Ecology is a blend of environmental science and architectural science, oriented toward a development model that takes into account the balance between the natural and built environments (Arfan, Ersina, & Irham, 2016).
The focus of this study is the Tahfidz Preneur Islamic Boarding School, one of the boarding schools in the Parompong area of West Bandung that applies the concept of Ecological Architecture. It applies the concept with consideration of the potential contained in the site and the surrounding environment, such as maximizing the contoured land, selecting appropriate materials, arranging the building mass structure so as to minimize damage to the topography of the soil, and optimizing the view potential of the site. Previous research (Digna, Nur Rahmawati, & Yayi Arsandrie, 2016) conducted at boarding schools found many architectural problems, especially regarding the comfort of living in the bedrooms of the santriwati (female students), caused by suboptimal openings and building masses placed too close together; such problems can be addressed with the concept of architectural ecology. Further research (Adam & Rinnarsuri, 2020) showed that the appearance concept of the Tahfidz Al-Qur'an Islamic Boarding School building follows overall building appearance standards for educational facilities, adapted through ecological architectural concepts such as cross ventilation, double glazing, and solar panels. The outdoor space contains many gardens, so students can memorize in an open environment with a calm atmosphere, which supports the memorization process. Within the boarding school area there are also gardens to produce the food needed by the students, and systems for wastewater and rainwater management are implemented. The importance of this research lies in its attention to the environment and social conditions: boarding schools should be a comfortable place for students to learn because all of their activities are undertaken there (Azhima, Wisnu Setiawan, & Arch, 2019). For example, the Tahfidz Preneur Islamic Boarding School, one of the Islamic boarding schools in the Parompong area of West Bandung, is designed using Ecological Architecture (Eco-Architecture), a design approach in architecture with an ecological orientation that considers the interaction between living things and their environment (sunlight, climate, geology, and the living things in their habitat) in an environmentally friendly way. The research aims to examine the environmental and social setting of the pesantren, which must be a comfortable place for students to learn because all activities are carried out at the boarding school.

RESEARCH METHODS

This study used a qualitative method to gain a deeper understanding of the object under study (Sugiyono, 2012). The researcher divided the research into three stages: the research preparation stage, the research implementation stage, and the final stage (Sugiyono, 2005). In the preparation stage, the researcher makes observations, identifies problems, determines the problem formulation, and collects literature studies as references. In the implementation stage, the researcher conducts site surveys, documentation, and analysis of the research variables; in the final stage, the data are collected, processed, and analyzed, and conclusions are drawn (Moleong, 2007). This research is expected to serve as a reference and to be developed further on a broader scale.
Research on cultural heritage buildings is still quite limited, and not many people take this theme as their research topic (Andiyan, Nurrisman, 2021).

Ecological Architectural Design Principles

• Solution Grows from Place: With two direct access points, this boarding school is very open to social interaction with the surrounding communities, and its bamboo building materials help develop the local economy, since bamboo artisans work not far from the school (Ernst, 2002).
• Ecological Accounting Informs Design: Reducing the pavement on the site in favor of more softscape and water infiltration reduces the impact of water inundation (Hamdi, 2016).
• Design with Nature: Protecting the surrounding environment by maintaining the vegetation on the site is an effort to keep it natural (Reiza & Wibowo, 2017).

Aspects of building materials

The Tahfidz Preneur Islamic Boarding School at Kampoeng Qur'an Cendekia partly uses bamboo as a basic material, which is easy to find and renewable. The building materials used in this area are as follows (Ibrahim, 2016):
- Bamboo: 60%
- Natural stone: 20%
- Other materials: 20%

Mass order

The mass order at the Tahfidz Preneur Islamic Boarding School Kampoeng Qur'an Cendekia uses a mass arrangement combining a cluster configuration and a grid form, in a proportion of 50% cluster configuration and 50% grid form (Ching, 2008).

CONCLUSIONS

The Tahfidz Preneur Kampoeng Qur'an Cendekia Islamic Boarding School is one of the pesantren in the Parompong area of West Bandung that applies Ecological Architecture. It applies the concept by considering the potential contained in the site and the surrounding environment, such as maximizing the contoured land, using appropriate materials, structuring the building masses so as to minimize damage to the topography of the land, and optimizing the view potential of the site. Based on the results of the observations, it can be concluded that:
1. The mass structure in the Tahfidz Preneur Kampoeng Qur'an Cendekia Islamic Boarding School area uses a cluster configuration together with a grid form; however, ramp circulation is lacking in the boarding school area.
2. The ecology of the Tahfidz Preneur Kampoeng Qur'an Cendekia Islamic Boarding School area follows an ecological architecture, which can be seen in the design emphasis, the design principles, and the ecological architecture aspects.

Suggestions

Based on the results of the observations and research, the researcher offers the following suggestions:
1. Maintenance at the Tahfidz Preneur Kampoeng Qur'an Cendekia Islamic Boarding School needs to be improved, especially in waste management, so that the area does not look run-down.
2. Buildings that use natural materials must be maintained so that they remain in good condition and sturdy.
3. A ramp should be added for pedestrian access to make the area friendly for people with disabilities.
4. Drainage channels should be added to make it easier for water to drain so that puddles do not form.
Relationship of the Vitamin D Receptor and Collagen Iα1 Gene Polymorphisms with Low Bone Mineral Density and Vertebral Fractures in Postmenopausal Turkish Women

Objectives: This study aimed to investigate (i) the frequencies of vitamin D receptor (VDR) gene and collagen Iα1 (COL Iα1) gene polymorphisms in postmenopausal Turkish women and (ii) whether the Bsm I polymorphism in the VDR gene and the Sp1 polymorphism in the COL Iα1 gene are associated with low bone mineral density (BMD) and vertebral fractures.

Patients and methods: One hundred postmenopausal Turkish women (mean age 63.4±8.7 years; range 48-86 years) who were referred to our outpatient clinic for the diagnosis and treatment of osteoporosis were included in the study. Based on BMD measurement, the study population was divided into three groups according to the T score: women with a T score >-1.0 formed the control group (n=30), those with a T score between -1.0 and -2.5 formed the osteopenic group (n=30), and those with a T score <-2.5 formed the osteoporotic group (n=40). The Bsm I (B/b) polymorphism in the VDR gene and the Sp1 (S/s) polymorphism in the COL Iα1 gene were determined by two parallel polymerase chain reactions followed by hybridization. Bone mineral densities of the lumbar spine and femur were measured by dual-energy X-ray absorptiometry.

Results: The genotype frequencies of the Bsm I (B/b) polymorphism in the VDR gene and of the Sp1 (S/s) polymorphism in the COL Iα1 gene did not differ statistically among the three study groups. Furthermore, no significant differences were found between patients with and without vertebral fractures with respect to the BB, Bb, and bb genotypes in the VDR gene or the SS, Ss, and ss genotypes in the COL Iα1 gene.

Conclusion: Our findings indicate that the Bsm I polymorphism in the VDR gene and the Sp1 polymorphism in the COL Iα1 gene are not associated with low bone density or vertebral fractures in postmenopausal Turkish women.

Bone mineral density (BMD) is commonly used in the diagnosis of bone disorders. Low BMD is an important predictor of fracture risk and osteoporosis. [1] Osteoporosis is a systemic skeletal disorder characterized by low BMD and microarchitectural deterioration of bone tissue, which consequently leads to an increased risk of fracture. [2,3] Osteoporosis is a significant health problem that, at some point, decreases the quality of life. Many factors influence the risk of osteoporosis, such as diet, physical activity, medication use, and coexisting diseases. However, one of the most important clinical risk factors is a positive family history, which emphasizes the importance of genetics in the pathogenesis of osteoporosis. [4] The genetic study of osteoporosis has been based mainly on research into candidate genes relevant to bone metabolism. [5] The vitamin D receptor (VDR) gene has been proposed as one of the candidate genes for the genetic control of bone mass. [6] The VDR mediates the action of 1α,25(OH)₂D₃ by modulating the transcription of target genes. [7] Polymorphisms at the VDR locus, identified by the restriction endonuclease Bsm I, were originally thought to explain a large percentage of the genetic variation in BMD in population studies. [8] However, a recent meta-analysis has suggested that the VDR gene polymorphisms (Bsm I, Apa I, Taq I haplotypes) have no effect on either BMD or fracture risk. [9] The collagen Iα1 (COL Iα1) gene, which encodes the alpha 1 chain of type I collagen, is one of the most commonly studied candidate genes for susceptibility to osteoporosis.
[10] A guanine (G) to thymidine (T) polymorphism that affects the binding site for the transcription factor specificity protein 1 (Sp1) in the first intron of the COL Iα1 gene has been associated with low bone density and an increased occurrence of osteoporotic fracture. [11] It has been suggested that heterozygotes at the polymorphic Sp1 site (Ss) have significantly lower bone mineral density than SS homozygotes and ss homozygotes; consequently, the Sp1 polymorphism of the COL Iα1 gene might contribute to peak bone mass. [12-15] On the other hand, some other studies have shown no relationship between the VDR and COL Iα1 genes and BMD or fracture risk. [16,17] Further work is required to determine whether genetic factors do indeed contribute significantly to the regulation of bone loss. Moreover, the importance of these polymorphisms in various ethnic populations, including the Turkish population, is not yet clear. One study [18] from the Turkish population investigated VDR genotypes in postmenopausal women with osteoporosis, and another Turkish study group [19] evaluated the effects of hormone replacement therapy (HRT) on BMD in postmenopausal women with osteoporosis both with and without the COL Iα1 Sp1 binding site polymorphism. To our knowledge, no study has evaluated both of these polymorphisms in postmenopausal Turkish women with regard to osteoporosis or vertebral fractures.

Our study is a preliminary study investigating the frequencies of the Bsm I polymorphism in the VDR gene and the Sp1 polymorphism in the COL Iα1 gene, and evaluating the associations of these polymorphisms with BMD and bone fractures in postmenopausal Turkish women with osteopenia or osteoporosis.

Study subjects

The study included 100 postmenopausal Turkish women, aged 48-86 years (mean age 63.4±8.7 years), who were referred to the Department of Physical Medicine and Rehabilitation clinic for the measurement of BMD. A detailed medical history was obtained from all of the women. Each patient was examined clinically, and routine biochemical tests were performed on all patients to exclude any underlying secondary causes of osteoporosis (systemic and metabolic bone disease). Patients with diseases capable of influencing calcium and phosphorus metabolism, such as hyperparathyroidism, renal failure, liver disease, hyperthyroidism, hyper- or hypocortisolism, diabetes, or other chronic illnesses, were excluded. Height and weight were measured at the time of BMD measurement, and the body mass index (BMI) was calculated as weight (in kilograms) divided by height (in meters) squared. Study participants gave their consent, and the local ethics committee approved the study.

Bone mineral density measurement

Bone mineral density at the lumbar spine and femoral neck was measured by dual-energy X-ray absorptiometry (DXA) using a Hologic DXA system (Discovery QDR, W S/N 81754) and reported in grams per square centimeter. Based on the BMD measurement and the WHO (World Health Organization, 1994) criteria, [20] the study population was divided into three groups according to the T score value: >-1.0 as the control group, between -1.0 and -2.5 as the osteopenic group, and <-2.5 as the osteoporotic group. For comparison of genotype frequencies, we selected an age-matched subgroup of women (30 women; mean age 62.4±8.7 years) as the control group.
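To make the grouping rule explicit, the following is a minimal sketch of the WHO-based T score classification used above; the function name and the example values are illustrative only (the examples happen to equal the reported group means).

```python
def classify_t_score(t: float) -> str:
    """WHO-based grouping used in this study:
    T > -1.0 -> control; -2.5 <= T <= -1.0 -> osteopenic; T < -2.5 -> osteoporotic."""
    if t > -1.0:
        return "control"
    if t < -2.5:
        return "osteoporotic"
    return "osteopenic"

# Illustrative T scores (equal to the reported group means), not patient data
for t in (-0.4, -1.8, -3.2):
    print(t, "->", classify_t_score(t))
```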
Fracture assessment

Lateral lumbar and thoracic spine radiographs were obtained for all individuals; however, eight individuals were excluded from the evaluation of the VDR Bsm I and COL Iα1 Sp1 polymorphisms in terms of vertebral fracture risk because their radiographs were not available. The lateral lumbar and thoracic spine radiographs were evaluated for vertebral fracture with the method defined by Genant et al., [21] in which the vertebral body heights between T4 and L4 are assessed visually. We excluded vertebral fractures that occurred because of major trauma and vertebral deformities due to causes other than osteoporosis.

Analysis of the VDR gene Bsm I and COL Iα1 gene Sp1 polymorphisms

The blood samples collected for the gene polymorphism analyses were stored at +4°C for deoxyribonucleic acid (DNA) isolation. The DNA was isolated from whole blood within the first 24 hours. Genomic DNA was extracted from samples of peripheral venous blood using a commercially available kit (Invisorb® Spin Blood Mini Kit, Invitek, Germany). The DNA was kept at -80°C until analysis.

To analyze the genetic polymorphisms in the genes for human COL Iα1 and the VDR, the "Genetic Risk Factors for Osteoporosis: Collagen type Iα1 S/s and Vitamin D receptor B/b alleles" reverse hybridization kit for the detection of the most important genetic risk factors for osteoporosis was used (GenID® GmbH, D-72479 Straßberg, Germany; Cat. No: RDB2055, 12 tests). This kit detects the Sp1 (S/s) polymorphism in the COL Iα1 gene and the Bsm I (B/b) polymorphism in the VDR gene by two parallel polymerase chain reactions (PCR) and subsequent hybridization. Using the DNA isolated from whole blood, two PCRs were first carried out, amplifying one fragment of the COL Iα1 gene and one fragment of the VDR gene with specific biotin-labeled primers. The characterization of the amplified gene fragments was carried out in a hybridization reaction with sequence-specific oligonucleotide probes (SSOP) immobilized on nitrocellulose strips (reverse hybridization). The nitrocellulose strips carried gene probes for the wild-type and mutated alleles of both gene loci, as well as various control zones. During hybridization, the denatured amplified DNA, combined from both PCRs with PN-VDR and PN-COLIA, bound to the gene probes attached to the strips. A highly specific washing procedure ensured that the hybrids would survive only if the probe's sequence was 100% complementary to that of the amplified DNA. Streptavidin-coupled alkaline phosphatase bound to the hybrids of the gene probe and the biotin-labeled amplified DNA. This complex was then detected by a BCIP/NBT color reaction catalyzed by the alkaline phosphatase. The band pattern was analyzed using the supplied template.

The washing and incubation steps were carried out on a horizontal shaker at 70-80 rpm to obtain optimal results. The washing procedures were carried out very carefully, and smooth, blunt plastic forceps were used to move the strips.
For the evaluation of the results, we used the kit-specific evaluation sheet on which the reaction zones were marked (Figure 1). After drying, the strip was laid with the bottom end line onto the template of the evaluation sheet. A reaction zone was identified if a marked position on the evaluation sheet corresponded exactly to that reaction zone on the strip. Each strip had a conjugate control zone, a specificity control zone, and two sensitivity control zones. The conjugate and sensitivity control zones had to be completely developed during the test; if those control zones were not developed, the result was a false negative, and in that case the test was repeated. If the specificity zone developed, the result was a false positive, and the test was likewise repeated. Probe signals were interpreted as positive only if they were at least as intense as the respective sensitivity control.

Statistical analyses

Statistical analyses were performed using SPSS version 15.0 for Windows (SPSS Inc., Chicago, Illinois, USA). The chi-square test was used to confirm that the VDR gene Bsm I and COL Ia1 gene Sp1 polymorphisms were in Hardy-Weinberg equilibrium. Differences between genotype groups were examined using one-way analysis of variance (ANOVA), Student's t-test, and the chi-square test. Metric variables were given as mean±SD. Differences in means or genotype frequencies were considered statistically significant for p values <0.05.

RESULTS

The comparisons of demographic and clinical properties of the patient groups according to their VDR Bsm I and COL Ia1 Sp1 genotypes are shown in Table 1. The genotype frequencies in our total population for the VDR gene Bsm I site polymorphism were 13% BB, 58% Bb, and 29% bb. For the COL Ia1 gene Sp1 site polymorphism, they were 64% SS, 32% Ss, and 4% ss. The distribution of both the VDR and COL Ia1 genotypes was consistent with the frequencies expected under the Hardy-Weinberg equilibrium law (p>0.05). The chi-square test was used to compare the frequencies of the genotypes. There were no statistically significant differences in BMD or age at menopause among the VDR and COL Ia1 genotypes (Table 1). We observed no association between the VDR and COL Ia1 genotypes and BMD.

The VDR Bsm I and COL Ia1 Sp1 genotype distributions and frequencies among the control, osteopenia, and osteoporosis groups are shown in Table 2. The allelic and genotype distributions showed no significant difference among the osteoporotic patients, osteopenic patients, and controls. There was no statistical difference among the three groups for age, menopausal age, or BMI (p>0.05). The mean T scores of the groups were as follows: control group -0.4±0.6 (n=30), osteopenic group -1.8±0.4 (n=30), and osteoporotic group -3.2±0.5 (n=40).

The genotype frequencies of BB, Bb, and bb in the VDR gene were not statistically different among the control, osteopenic, and osteoporotic groups (p>0.05). Neither were the genotype frequencies of SS, Ss, and ss in the COL Ia1 gene statistically different among the three groups (p>0.05).

Eight individuals were not included when evaluating the VDR Bsm I and COL Ia1 Sp1 polymorphisms in terms of vertebral fracture risk because their lateral lumbar and thoracic spine radiographs were not available. The distribution and frequencies of the VDR Bsm I and COL Ia1 Sp1 genotypes among individuals with and without vertebral fracture are shown in Table 3.
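To make the Hardy-Weinberg check above concrete, the sketch below (our illustration; the counts correspond to the reported VDR Bsm I genotype percentages, which equal counts here because n=100) compares observed genotype counts with those expected from the estimated allele frequency, using a one-degree-of-freedom chi-square test:

```python
from scipy.stats import chi2

def hwe_chi_square(n_AA: int, n_Aa: int, n_aa: int):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p_value = chi2.sf(stat, df=1)            # 3 classes - 1 - 1 estimated parameter
    return stat, p_value

# VDR Bsm I genotype counts reported for the total group: 13 BB, 58 Bb, 29 bb
print(hwe_chi_square(13, 58, 29))  # chi2 ~ 3.63, p ~ 0.057 (> 0.05, consistent with HWE)
```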
No significant difference was found between the vertebral fracture and "no fracture" groups in terms of the BB, Bb, and bb genotypes in the VDR gene (p>0.05), nor was there a significant difference between the two groups in terms of the COL Ia1 genotypes (p>0.05). In addition, the relationship between S and s allele frequencies, B and b allele frequencies, and vertebral fracture was analyzed, and no connection was found.

There was no significant difference between the VDR mutant bb genotype and the other genotypes, or between the COL Ia1 mutant ss genotype and the other genotypes (p>0.05). Also, there was no difference between the vertebral fracture group and the "no fracture" group when examining the VDR Bsm I and COL Ia1 Sp1 genotypes (p>0.05).

Additionally, genotype frequencies were compared in patients with and without a history of fracture, and no difference was found in the frequencies of the VDR and COL Ia1 genotypes (p>0.05).

Finally, no statistically significant difference in BMD was found among the VDR and COL Ia1 genotypes when the control, osteopenic, and osteoporotic groups were compared (p>0.05).

DISCUSSION

Most studies on the genetics of osteoporosis have been based largely on research into candidate genes relevant to bone metabolism. The VDR gene, the COL Ia1 gene, and the estrogen receptor-alpha (ER-a) gene are among those studied most frequently [5]. The VDR and COL Ia1 gene polymorphisms have been associated with low BMD and an increased risk of osteoporotic fracture in several studies [13,15]. However, some studies and meta-analyses have shown no association between these polymorphisms and BMD or osteoporotic fractures [9,16,17,22]. Therefore, additional information is needed to clarify the role of the VDR and COL Ia1 genes in the pathogenesis of osteoporosis.

Some studies have suggested a relationship between the "bb" genotype of the VDR gene and low BMD [23,24]. Langdahl et al. [15] found that the VDR Bsm I "B" allele had a strong relationship with increased fracture risk in a case-control study. In some other studies, the relationship between VDR polymorphisms and fracture risk was shown to be independent of BMD [13]. Garnero et al. [25] found an association between VDR genotypes and fracture risk that was independent of BMD in a study on postmenopausal women; no relationship between VDR genotypes and whole-body BMD values was shown in that study. They reported a correlation between the "BB" genotype of the Bsm I polymorphism and non-vertebral fracture; however, there was no evidence to prove that such a relationship existed.

A recent meta-analysis showed no relationship between VDR Bsm I polymorphisms and fracture risk [26]. Another recent meta-analysis, published by Uitterlinden et al. [9], documented no significant association between the VDR gene Bsm I, Apa I, and Taq I polymorphisms and fracture risk or BMD. That study, the most detailed report on the topic so far, was conducted by the "Genetic Markers for Osteoporosis" (GENOMOS) consortium.

Studies that claimed to identify the genetic determinants of BMD have produced contradictory data. In the GENOMOS study, some evidence was obtained for an effect of the VDR gene Cdx2 polymorphism on vertebral fracture risk, and the authors concluded that more study was needed [9]. The findings of our study are in concordance with the findings of the GENOMOS study [9].
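The allele-level comparison described above reduces to a 2x2 contingency test; a minimal sketch follows. The counts are invented for illustration only (not taken from the study's tables), chosen so that each group contributes two alleles per assessed participant:

```python
from scipy.stats import chi2_contingency

# Rows: fracture / no fracture; columns: counts of S and s alleles
# (illustrative numbers only; 26 + 66 = 92 assessed participants, two alleles each)
table = [[40, 12],    # fracture group: 52 alleles
         [110, 22]]   # no-fracture group: 132 alleles
stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> no association
```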
Our study showed no significant association between the groups in terms of the VDR Bsm I polymorphism and BMD, nor between this polymorphism and the risk of vertebral fracture in postmenopausal women. We examined the VDR mutant BB genotype against the other Bb and bb genotypes and their relationship to the osteopenic and osteoporotic groups compared with the control group; however, we could not find any statistically significant correlation.

In 1996, Grant et al. [27] described the G-T polymorphism at the first base of the binding site for the transcription factor Sp1 in the first intron of the COL Ia1 gene and found a relationship between it and low bone density and osteoporotic fracture. They reported that the T allele (= s allele) was more prevalent in patients with osteoporosis than in controls. Some other studies have shown that the GT (Ss) and TT (ss) genotypes correlate with low BMD and increased fracture risk [27,28]. However, Lidén et al. [22] and Hustmyer et al. [29] found no such difference, and Bernad et al. [30] found no relationship between these genotypes and lumbar or femoral BMD.

In the GENOMOS study published by Ralston et al. [10] in 2006, no relationship with BMD was found for the GG (SS) homozygote and GT (Ss) heterozygote, but a relationship with the TT (ss) genotype was found, similar to what had been reported in previous studies. In those studies it was emphasized that, unlike earlier meta-analyses, there actually was a relationship with fracture, and an increase in vertebral fracture risk independent of BMD was reported, especially in women.

In our study, the relationship of the COL Ia1 mutant ss genotype and the other Ss and SS genotypes (compared with the control group) to the osteopenic and osteoporotic groups was investigated. We could not find a statistically significant correlation. Moreover, when we examined those genotypes for their relation to vertebral fracture risk, we likewise found no statistically significant correlation. Ultimately, we found no relationship between the Sp1 genotypes and BMD or vertebral fracture risk. This finding is not in accordance with other studies that have shown an association with the COL Ia1 Sp1 polymorphism. The differences between the findings may stem from the relatively small number of cases, from the study design (the characteristics of the study population, sample size, and data analysis), or from the interaction of genetic and environmental factors.

Although our study provides preliminary data from a relatively small number of individuals, we compared the frequencies of the polymorphisms with those from two other studies of Turkish postmenopausal women (Table 4). Uysal et al. [18] investigated the VDR Bsm I, Taq I, and Apa I polymorphisms in a group of postmenopausal Turkish women with and without osteoporosis. The Bsm I genotype frequencies estimated in our total study group were similar to the findings of that study. Şimşek et al. [19] evaluated the effects of HRT on BMD in a group of osteopenic, postmenopausal Turkish women with and without the COL Ia1 Sp1 binding site polymorphism. The SS and Ss frequencies estimated from our data were similar to the findings of Şimşek et al. [19]; however, the ss frequency in our data was higher than that in their study.
When we compared our findings with frequencies from several other populations, we observed that the ss genotype frequency was similar to those of European populations: the ss genotype was found in 4% of French women, 3.3% of Dutch women, and 5.5% of Danish women [31]. The ss genotype frequency of the COL Ia1 Sp1 polymorphism in our total study group was 4%.

The most important limitation of our study was the small number of vertebral fracture cases for a genetic association study, which resulted in less precise estimates. Another limitation was that some samples of each genotype could not be confirmed with PCR, restriction fragment length polymorphism (RFLP), and/or sequence analysis.

In conclusion, our preliminary data could not show an association between the VDR gene Bsm I and COL Ia1 gene Sp1 polymorphisms and low BMD or vertebral fracture in postmenopausal Turkish women. Broadening the study with a larger number of cases would be a more suitable approach. Moreover, because the VDR and COL Ia1 genotypes show population specificity, extensive population studies would be necessary to identify the genotypic endowment of Turkish society, confirming the genotypes with PCR, RFLP, and/or sequence analysis.

Declaration of conflicting interests

The authors declared no conflicts of interest with respect to the authorship and/or publication of this article.

Table 1. Baseline demographic and clinical characteristics of the study group according to the VDR and COL Iα1 genotypes. VDR: Vitamin D receptor; SD: Standard deviation; BMD: Bone mineral density; COL Iα1: Collagen Iα1.

Table 2. Distribution of VDR Bsm I and COL Iα1 Sp1 genotypes among healthy controls and patient groups. SD: Standard deviation; VDR: Vitamin D receptor; BMD: Bone mineral density; COL Iα1: Collagen Iα1.

Table 3. Distribution of VDR Bsm I and COL Iα1 Sp1 genotypes among individuals with and without vertebral fracture.

Table 4. VDR Bsm I and COL Iα1 Sp1 genotype frequencies in Turkish postmenopausal women (this study, n=100; Uysal et al. [18], n=246; Şimşek et al. [19], n=111). VDR: Vitamin D receptor; COL Iα1: Collagen Iα1.
Effects of multigrain rice and white rice on periodontitis: an analysis using data from the Korea National Health and Nutrition Examination Survey 2012-2015

OBJECTIVES Numerous studies have investigated the efficacy of whole grains; however, research on multigrain rice remains limited. Grains exhibit combined positive effects against various diseases. The purpose of this study was to examine the impact of multigrain and white rice consumption on periodontitis.

METHODS We analyzed data from the Korea National Health and Nutrition Examination Survey V-3 and VI, collected between 2012 and 2015, which included 12,450 participants (4,859 men and 7,591 women) aged 19-64 years. The World Health Organization's Community Periodontal Index (CPI) was used to assess the presence of periodontitis, with periodontitis defined as a CPI score of ≥3. Multivariable logistic regression analysis was performed after adjusting for potential confounding variables.

RESULTS The group that consumed only multigrain rice was less likely to have periodontitis than the group that consumed only white rice (odds ratio [OR], 0.80; 95% confidence interval [CI], 0.69 to 0.93). When stratified by sex, the risk of periodontitis was 24% lower in women who consumed only multigrain rice (OR, 0.76; 95% CI, 0.62 to 0.93). A similar result was observed in the 40-64 years age group (OR, 0.84; 95% CI, 0.71 to 0.99). In the diabetes stratification model, the normoglycemic group that consumed only multigrain rice exhibited a 25% decrease in the odds of periodontitis (OR, 0.75; 95% CI, 0.62 to 0.91).

CONCLUSIONS Our findings suggest that the prevalence of periodontitis may vary depending on the type of rice consumed.

Whole grain intake helps regulate blood glucose levels, body weight, and inflammation, while also reducing the risk of early mortality [9]. Consuming whole grains may decrease the likelihood of periodontitis due to its impact on inflammation and glycemic control [10]. Periodontitis is a chronic inflammatory condition that gradually leads to the deterioration of the tissues supporting the teeth [11]. This prevalent and severe disease is found globally and is expected to increase alongside the growing aging population [12]. A low intake of grains has been associated with periodontal diseases, and individuals who consume limited amounts of whole grains are more prone to developing severe periodontitis [13]. The protective effect of whole grains on the progression of periodontal disease can aid in managing glucose intolerance and reducing the risk of insulin resistance [14,15].

Whole grains are nutritionally superior to refined grains [16]. During the refining process, white rice, a refined grain, loses numerous protective components [17]. The primary sources of whole grains are ready-to-eat, cooked, and processed grains, such as cereals and bread [18]. Multigrain rice, a type of cooked grain, shares characteristics with whole grains [19] and is more effective in preventing diseases than refined grains [20,21].

Numerous studies have examined the impact of whole grains on various diseases [5,10,22], but none have explored the effect of multigrain rice on periodontitis. Consequently, we aimed to investigate the association between the prevalence of periodontitis and the type of rice consumed. We hypothesized that the intake of multigrain rice may help prevent periodontitis due to its influence on complex factors, such as glycemic control, akin to the effects observed from whole grain consumption.
Data source and study population

In this study, we used data from the Korea National Health and Nutrition Examination Surveys (KNHANES) V-3 (2012) and VI (2013-2015). The KNHANES is a cross-sectional survey that assesses the overall health and nutritional status of a representative Korean population, conducted by the Korea Disease Control and Prevention Agency (KDCA). The survey's sampling protocol involves a multistage probability design, stratifying representative samples of non-institutionalized civilians in Korea. This study used individual data from 18,152 Koreans aged 19-64 years who participated in the KNHANES from 2012 to 2015. Participants who did not complete the oral examination (n = 2,540) or the food intake frequency survey (n = 3,162) were excluded. Consequently, we enrolled 12,450 individuals in the study, including 4,859 men and 7,591 women (Figure 1). Data on socio-demographic characteristics, oral health-related variables, general health status indicators, and consumption frequencies of white and multigrain rice were collected from the KNHANES (2012-2015).

The proportion of multigrain to white rice

In KNHANES V and VI, "multigrain rice" was defined as rice composed of brown rice, barley, beans, and red beans. Food intake frequency in the KNHANES was assessed by asking participants the following question: "What is your average intake frequency over the past year?" Participants' responses were categorized into 9 groups: rarely, once a month, 2-3 times a month, once a week, 2-4 times a week, 5-6 times a week, once a day, twice a day, and three times a day. Only those who consumed multigrain or white rice at least twice per day were included, while those who consumed other foods rather than rice daily were excluded. Participants were then divided into groups based on the ratio of multigrain to white rice they consumed, as sketched in the example below: 100% multigrain rice; multigrain rice ≤ 50% and white rice > 50%; multigrain rice > 50% and white rice ≤ 50%; and 100% white rice.
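A minimal sketch of that grouping logic (the function and variable names are ours; the survey derives the ratio from the reported intake frequencies):

```python
def rice_group(multigrain_per_day: float, white_per_day: float) -> str:
    """Assign a participant to a rice-consumption group from daily
    intake frequencies of multigrain and white rice (illustrative)."""
    total = multigrain_per_day + white_per_day
    if total < 2:
        raise ValueError("participants eating rice < twice/day were excluded")
    share = multigrain_per_day / total
    if share == 1.0:
        return "100% multigrain"
    if share > 0.5:
        return "multigrain > 50%"
    if share > 0.0:
        return "multigrain <= 50%"
    return "100% white"

print(rice_group(2, 1))  # multigrain > 50%
```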
Socio-demographic and health behavior variables

The following socio-demographic characteristics were identified as confounders: sex, age, household income, and education level. Participants were divided into 4 groups based on household income. Additionally, 4 groups were established based on education level according to the Korean education level classification code: below elementary school graduation, middle school, high school, and university graduation or above. Oral health-related variables encompassed tooth brushing frequency, interdental brush use, dental floss use, and the Community Periodontal Index (CPI). Tooth brushing frequency was categorized as less than twice or at least twice a day. Interdental brush and dental floss use were classified as "yes" or "no". General health status indicators included smoking habits, diabetes mellitus, hypercholesterolemia, hypertension, and body mass index (BMI). Participants were classified as current smokers, ex-smokers, or non-smokers based on their smoking habits. Regarding diabetes, participants were classified as normal (fasting glucose level < 100 mg/dL), impaired fasting glucose (fasting glucose level ≥ 100 and ≤ 125 mg/dL), or diabetic (fasting glucose level ≥ 126 mg/dL, use of antidiabetics, administration of insulin injections, or diagnosis by a physician). Hypercholesterolemia was defined as a total cholesterol level ≥ 240 mg/dL or use of cholesterol medication. Hypertension was classified as normal (systolic blood pressure [SBP]/diastolic blood pressure [DBP] < 120/80 mmHg), prehypertension (SBP ≥ 120 and < 140 mmHg and DBP ≥ 80 and < 90 mmHg), or hypertension (SBP/DBP ≥ 140/90 mmHg or use of antihypertensive drugs). BMI was classified as normal (BMI ≥ 18.5 and < 25.0 kg/m²), low (BMI < 18.5 kg/m²), or high (BMI ≥ 25.0 kg/m²). A code sketch of the main cut-offs follows this section.

Periodontal index

The CPI, originally developed by the World Health Organization, is used to evaluate periodontal status [23]. It serves as an indicator of the need for periodontal treatment among residents or specific groups within a community. Periodontitis was defined as a CPI score of ≥ 3. The KDCA carries out the examinations through public health dentists who have received training and field quality management education to conduct standardized assessments. This training takes place over three days and consists of theoretical education, photography education, and a mock examination. Notably, periodontal tissue examinations are practiced twice daily for four days, for a total of ≥ 8 repetitions, to ensure proper pressure is maintained during periodontal probing. Between 2012 and 2015, the CPI's kappa index ranged from 0.692 to 0.799 in the KNHANES [24].
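The glucose, BMI, and CPI cut-offs above translate directly into code; a minimal sketch (function names are ours):

```python
def glucose_group(fasting_mg_dl: float, on_treatment: bool = False) -> str:
    """Fasting-glucose classification as used in the study."""
    if on_treatment or fasting_mg_dl >= 126:
        return "diabetes"
    if fasting_mg_dl >= 100:
        return "impaired fasting glucose"
    return "normal"

def bmi_group(bmi: float) -> str:
    """BMI classification: low < 18.5, normal 18.5-24.9, high >= 25.0 kg/m^2."""
    if bmi < 18.5:
        return "low"
    return "normal" if bmi < 25.0 else "high"

def has_periodontitis(cpi: int) -> bool:
    """Periodontitis defined as a Community Periodontal Index score >= 3."""
    return cpi >= 3

print(glucose_group(118), bmi_group(26.1), has_periodontitis(3))
# impaired fasting glucose, high, True
```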
Statistical analysis

All research data were analyzed using SPSS version 26.0 (IBM Corp., Armonk, NY, USA). Complex sample survey procedures were employed to account for the multilevel, stratified, unequal-selection-probability, clustered sample design of the KNHANES (2012-2015). Appropriate sample weights were applied for each data collection. Multivariable logistic regression analysis was used to calculate the association between consumption of white or multigrain rice and periodontitis, expressed as adjusted odds ratios (ORs) and 95% confidence intervals (CIs). The logistic regression analysis was adjusted for potential confounders. Chi-square tests were employed to compare the prevalence of periodontitis among patients with and without diabetes. Statistical significance was set at a p-value of < 0.05. The sketch after this section illustrates how such ORs and CIs are derived from a fitted model.

RESULTS

Table 1 presents the characteristics of participants grouped by the presence or absence of periodontitis. Of the 22,601 participants, 6,189 (27.4%) had periodontitis, while 16,412 (72.6%) did not. Regardless of periodontitis status, those who consumed only multigrain rice were the most prevalent in both groups, at 52.0% in the periodontitis group and 51.2% in the non-periodontitis group. Participants who consumed only white rice accounted for 28.8% and 28.0% of the periodontitis and non-periodontitis groups, respectively. The distribution of rice consumption showed similar patterns regardless of periodontitis status (Table 1).

The results of the logistic regression analysis of the effect of rice dietary patterns on periodontitis prevalence are shown in Table 2. In model 3, after adjusting for all variables, participants who consumed only multigrain rice were 20% less likely to have periodontitis than those who consumed only white rice (OR, 0.80; 95% CI, 0.69 to 0.93). Additionally, a trend analysis was conducted to identify trends between rice intake patterns and periodontitis prevalence. In all models the p-value for trend was < 0.05, so the prevalence of periodontitis according to rice intake pattern showed a consistent direction (Table 2).

Table 3 presents the results of the logistic regression analysis stratified by sex, age, and diabetes. Women who consumed only multigrain rice were 24% less likely to have periodontitis than those who consumed only white rice (OR, 0.76; 95% CI, 0.62 to 0.93). A similar result was found in the age group of 40 years or older: the 40-64 years age group that consumed only multigrain rice showed a 16% reduced risk of periodontitis (OR, 0.84; 95% CI, 0.71 to 0.99). In the stratification model classified by diabetes status, a similar decrease was observed only in the normal group, in which participants who consumed only multigrain rice were 25% less likely to have periodontitis than those who consumed only white rice (OR, 0.75; 95% CI, 0.62 to 0.91) (Table 3).

The distribution of sex, age, and diabetes according to rice dietary pattern is displayed in Table 4. When classified by sex, 39.2% […] (Table 4). Table 5 shows the average blood glucose level stratified by rice intake type according to the presence of diabetes and periodontitis. In the diabetic group, blood glucose levels differed between participants who consumed only white rice and those who consumed both white and multigrain rice among patients with periodontitis. Blood glucose levels did not differ among the other stratified groups (Table 5).
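The paper used SPSS's complex-samples module; as a simplified, unweighted illustration of how an adjusted OR and its 95% CI fall out of a fitted logistic model (all variable names and data below are invented), one could write:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
multigrain_only = rng.integers(0, 2, n)            # 1 = consumed only multigrain rice
age = rng.uniform(19, 64, n)
# Synthetic outcome: lower periodontitis odds for multigrain-only eaters
logit = -1.0 - 0.22 * multigrain_only + 0.02 * (age - 40)
periodontitis = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([multigrain_only, age]))
fit = sm.Logit(periodontitis, X).fit(disp=0)
beta, se = fit.params[1], fit.bse[1]               # coefficient for multigrain_only
print(f"OR = {np.exp(beta):.2f}, 95% CI = "
      f"{np.exp(beta - 1.96 * se):.2f} to {np.exp(beta + 1.96 * se):.2f}")
# e.g. OR near 0.8 with this synthetic data
```

A faithful reanalysis would additionally apply the survey weights, strata, and clusters, which this sketch omits.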
DISCUSSION

In this study, we investigated the effects of dietary patterns involving white and multigrain rice on periodontitis in Koreans aged 19-64 years. While previous studies have explored the relationship between whole grains and periodontitis, this is the first study to examine the impact of rice type on periodontitis. We found that the risk of periodontitis decreased in the group that consumed only multigrain rice compared to the group that consumed only white rice. This difference in periodontitis prevalence based on the type of rice consumed could be attributed to the rice refining process. White rice is considered a refined grain with a high content of starch, a carbohydrate polymer, which remains after the bran and germ of the whole grain are removed [19]. The refining process eliminates most of the naturally occurring vitamins, minerals, dietary fiber, lignans, phytoestrogens, phenolic compounds, and phytic acid [25]. As a result, unrefined whole grains are more nutritious than refined white rice [5].

To understand the mechanism, the effect of whole grains on glycemic control should be considered. High fasting blood glucose levels are associated with C-reactive protein (CRP) levels and the risk of inflammation [26]. An elevation in CRP is linked to insulin resistance [27]. Tumor necrosis factor-α and interleukin (IL)-6 are related to acute-phase proteins, including CRP, and both potentially contribute to insulin resistance by affecting intracellular insulin signaling [28]. Compared to healthy controls, patients with periodontitis exhibit elevated serum IL-6 and CRP levels [29]. Inflammatory disorders in hyperglycemia lead to both microvascular and macrovascular complications [30]. Complications arising from hyperglycemia cause damage to capillary function, decreased blood flow to tissues and organs, oxidative stress, elevated inflammatory processes through cytokine exposure, and severe periodontal disease [31]. Hyperglycemia is a major determinant of the risk, severity, and extent of periodontitis [32]. However, whole grains can attenuate the blood glucose response after a meal [33]. Since whole-grain starch is more resistant to digestion than refined starch [34], whole grains slow the digestion and absorption of carbohydrates in the intestine [35]. Dietary fiber also promotes satiety and increases digestion time [36], thus lowering blood glucose and insulin levels [37]. Therefore, the consumption of multigrain rice may be closely associated with a decreased risk of periodontitis. Although this study showed no significant difference in blood glucose levels between groups (Table 5), it was a cross-sectional study and the measurements represent only fasting glycemia at the time of the examination; postprandial glycemia may well vary between groups. Moreover, studies have examined the direct impact of whole grains on periodontitis. One study, involving 34,160 male health professionals aged 45-75 years, suggested that consuming more than four servings of whole grains per day (1 serving = 3/4 cup whole-grain cereal or 1 slice whole-wheat bread) could lower the risk of periodontitis [10]. However, some research has argued that whole grains do not have any significant effect on periodontal disease. A randomized controlled trial investigated the effects of tailored dietary interventions, such as fruit, vegetable, and whole-grain consumption, on chronic periodontitis in 51 hospital participants aged 30-65 years. The results demonstrated that whole-grain intake
significantly increased the total antioxidant capacity in the intervention group six months after the intervention; however, there was no significant difference in the periodontal index [38]. The study attributed the lack of significant change in the periodontal index to the possibility that the extent of the dietary changes may not have been enough to affect these indices, and that the number of participants and the duration of the dietary intervention may not have been sufficient to produce significant differences.

The effect of the type of rice consumed on periodontitis was also examined in relation to sex, age, and the presence of diabetes. In the stratified analysis, the risk of periodontitis was found to be lower in the group that consumed only multigrain rice among women, in the 40-64 years age group, and in the normal fasting glucose group (Table 3). In Korea, the age of 40 is considered a "life transition period," a time when bodily changes necessitate management [39]. Given that dietary strategies change with age due to health concerns [40], the reduced odds could be attributed to the health behavior of consuming only multigrain rice. The 40-64 years age group had a higher proportion of participants who consumed only multigrain rice than the 19-39 years age group (Table 4). This was also true for women, as the proportion of those who consumed only multigrain rice was higher among women than among men (Table 4). The lower risk of periodontitis among women in the multigrain rice-consuming group may be due to the presence of phytoestrogens in whole grains. A high intake of phytoestrogens, which are phenolic compounds, has significant estrogenic/antiestrogenic effects in both animals and humans [41]. Epidemiological, laboratory, and clinical evidence suggests that phytoestrogens have a positive impact on bone mineral density [42,43]. There is a negative correlation between periodontal disease and bone density in postmenopausal women [44,45], and consuming multigrain rice may help alleviate this issue. However, while phytoestrogens appear to improve bone density, there is not enough research on their long-term effects [46].

When categorized by diabetes status, the risk of periodontitis decreased only in the normal group (Table 3). However, blood glucose levels varied significantly depending on the type of rice consumed in the normal group (Table 5). Thus, in the normal group, multigrain rice was estimated to have a more substantial direct effect on periodontitis than its effect via glycemic control. The proportion of participants consuming only multigrain rice was higher in the diabetic group than in the normal group (Table 4). For patients with diabetes, this high proportion may reflect adherence to doctor-recommended health behaviors, such as consuming multigrain rice. Furthermore, the American Diabetes Association emphasized the intake of fiber and whole grains as positive health behaviors to improve diabetes management in 2023 [47]. In diabetics, the health behavior of consuming multigrain rice may have a positive effect on blood glucose reduction (Table 5). Regardless of the presence or absence of periodontitis, the group that consumed only multigrain rice had lower average blood glucose levels than the group that consumed only white rice (Table 5). However, multigrain rice intake did not affect the relationship between diabetes and periodontitis (Table 3).
The association between diabetes and periodontal disease influenced the results. Diabetes has a bidirectional relationship with periodontal disease. According to a study that analyzed the relationship between the two diseases from an epidemiological perspective, patients with diabetes tended to have a higher prevalence, greater severity, and faster progression of periodontal disease than controls, and treating periodontal infections can help with blood glucose management and minimize the burden of diabetes complications [48]. A study that used large-sample data to investigate the association between type 2 diabetes mellitus and periodontal disease in 4,343 United States adults aged ≥45 years reported a significantly higher prevalence of severe periodontitis in those with diabetes than in those without [49]. Another similar study confirmed that periodontal treatment affects metabolic regulation and reduces systemic inflammation in type 2 diabetes mellitus; moreover, periodontal therapy is a necessary component of treatment approaches that can reduce the complications of diabetes [50].

The strength of this study lies in its large-scale analysis confirming the relationship between periodontitis and the consumption of multigrain and white rice. To minimize bias caused by the consumption of foods other than rice as a staple, we analyzed participants who ate rice more than twice a day, considering rice their primary food source. We calculated statistical results by adjusting for oral health-related variables that act as significant confounding factors in periodontal disease in the final model of all tables. Furthermore, we computed average values to investigate the effect of blood glucose levels, which are related to periodontitis, on dietary patterns. The blood glucose levels of diabetics were lower in the group that consumed only multigrain rice, and the risk of periodontitis decreased in the normal group without changes in blood glucose levels.

Figure 1. Flow chart of sample selection. KNHANES, Korea National Health and Nutrition Examination Surveys.

Table 1. Characteristics of participants according to the presence/absence of periodontitis (n=22,601). Values are presented as number (weighted %). ¹Chi-square test.

Table 2. Logistic regression analysis for periodontitis according to the proportion of white rice and multigrain rice consumption. ¹Model 1 was adjusted for age and sex; Model 2 was additionally adjusted for socio-demographic status variables (level of education and household income); Model 3 was additionally adjusted for general health indicators, such as smoking habit, diabetes, hypercholesterolemia, hypertension, and body mass index, and oral health-related variables such as frequency of toothbrushing and use of oral hygiene devices such as floss and interdental brush.

Table 3. Logistic regression analysis of the effects of consuming rice according to the presence of periodontitis by sex, age, and diabetes. ¹Adjusted for age, sex, socio-demographic status variables (level of education and household income), general health indicators (smoking habit, diabetes, hypercholesterolemia, hypertension, and body mass index), and oral health-related variables (frequency of toothbrushing and use of oral hygiene devices such as floss and interdental brush).

Table 4. Results of the chi-square test for the effects of consuming rice according to sex, age, and diabetes.

Table 5. Average blood glucose level according to the type of rice intake stratified by diabetes and periodontitis status.
A Critical Role for p53 during the HPV16 Life Cycle

ABSTRACT Human papillomaviruses (HPV) are causative agents in ano-genital and oral cancers; HPV16 is the most prevalent type detected in human cancers. The HPV16 E6 protein targets p53 for proteasomal degradation to facilitate proliferation of the HPV16-infected cell. However, in HPV16-immortalized cells E6 is predominantly spliced (E6*) and unable to degrade p53. Here, we demonstrate that human foreskin keratinocytes immortalized by HPV16 (HFK+HPV16), and HPV16-positive oropharyngeal cancers, retain significant expression of p53. In addition, p53 levels increase in HPV16+ head and neck cancer cell lines following treatment with cisplatin. Introduction of full-length E6 into HFK+HPV16 resulted in attenuation of cellular growth (in hTERT-immortalized HFK, E6 expression promoted enhanced proliferation). An understudied interaction is that between E2 and p53, and we investigated whether this was important for the viral life cycle. We generated mutant genomes with E2 unable to interact with p53, resulting in profound phenotypes in primary HFK. The mutant induced hyper-proliferation but an ultimate arrest of cell growth; β-galactosidase staining demonstrated increased senescence, and COMET assays showed increased DNA damage compared with HFK+HPV16 wild-type cells. The viral life cycle failed in organotypic rafts with the mutant HFK, resulting in premature differentiation and reduced proliferation. The results demonstrate that p53 expression is critical during the HPV16 life cycle, and that this may be due to a functional interaction between E2 and p53. Disruption of this interaction has antiviral potential.

IMPORTANCE Human papillomaviruses are causative agents in around 5% of all cancers. There are currently no antivirals available to combat these infections and cancers; therefore, it remains a priority to enhance our understanding of the HPV life cycle. Here, we demonstrate that an interaction between the viral replication/transcription/segregation factor E2 and the tumor suppressor p53 is critical for the HPV16 life cycle. HPV16-immortalized cells retain significant expression of p53, and the critical role of the E2-p53 interaction demonstrates why this is the case. If the E2-p53 interaction is disrupted, HPV16-immortalized cells fail to proliferate, show enhanced DNA damage and senescence, and undergo premature differentiation during the viral life cycle. The results suggest that targeting the E2-p53 interaction would have therapeutic benefits, potentially attenuating the spread of HPV16.

Cells containing the mutant genomes showed increased senescence and accumulated DNA breaks, as evidenced by single-cell gel electrophoresis (COMET) assay. When subjected to differentiation via organotypic raft culturing, these mutant cells had reduced proliferation, leading to a marked reduction in raft thickness. There was also a reduction in viral replication markers in the mutant cells. These results suggest that, although p53 is downregulated by E6 in high-risk HPV infection, p53 is necessary to permit HPV-induced proliferation, and that the interaction with E2 plays an important role in the requirement for p53 expression.

RESULTS

Tumor suppressor p53 is expressed in HPV16-immortalized cells and is critical for their optimal growth. Previous studies demonstrated that alternative splice variants (E6*) are the dominant E6 transcripts in HPV-associated head and neck cancer, preventing E6-E6AP-p53 complex formation and inhibiting p53 degradation (36-41).
We confirmed the presence of p53 in a series of HPV16-positive cell lines (Fig. 1). Expression of the entire HPV16 genome in N/Tert-1 cells resulted in a partial reduction in p53, compared to the near-complete abrogation seen with expression of HPV16 E6 and E7 (Fig. 1A, compare lanes 2 and 4 to lane 1). Moreover, human tonsil cells immortalized by HPV16 retained p53 expression similar to that of N/Tert-1 cells (compare lane 1 to 3). To investigate these findings further, we studied two independent donors of human foreskin keratinocytes (HFK) immortalized with HPV16, each grown as pools. In both donor lines, p53 levels were less reduced than in HFK immortalized by HPV16 E6 and E7 overexpression (lanes 5-8). It is noticeable that the p53 in the HPV16-immortalized HFK (lanes 6 and 7) has reduced mobility, suggesting differential post-translational modification of p53 compared with N/Tert-1 cells. Such modifications are known to alter p53 function; therefore, although p53 is expressed in these cells, modifications may be altering its function (47). To determine whether this expression is affected by the tumor microenvironment, we surveyed p53 expression in 8 patient-derived xenografts (PDX) from oropharyngeal and oral cavity carcinomas (four HPV16-positive and four negative) (45,46). All HPV16-positive PDX samples, and 3 out of 4 HPV-negative samples, retained detectable p53 expression, illustrating no clear association between HPV status and p53 expression (Fig. 1B). HPV-negative PDXs 1-3 have mutant p53, while PDX 4 is wild type for p53. All HPV-positive PDXs retained wild-type p53.

Platinum-based DNA-damaging agents such as cisplatin are critical in the treatment of late-stage systemic head and neck cancers (48-51). Because DNA damage is known to stabilize and activate p53, and p53 is most often wild-type in HPV-positive cancers, we predicted that in HPV+ head and neck cancer cell lines the expression of active wild-type p53 could be promoted by cisplatin treatment. We confirmed dose-dependent cisplatin-induced p53 expression in SCC-47 and SCC-104 cells (Fig. 1C). These results demonstrate that p53 expression is retained in many cell lines immortalized by the HPV16 genome and can be induced following DNA damage. This suggests that, although E6 degrades p53 to help promote cell immortalization and carcinogenesis, HPV16 retains p53 expression, indicating that p53 may play an important role in the HPV16 life cycle.

To determine whether reduction of p53 compromises the growth of HPV16-immortalized cells, we introduced full-length E6 (using retroviral delivery of the E6 gene, which does not allow alternative splicing) into N/Tert-1 cells (foreskin keratinocytes immortalized by telomerase) and HFK+HPV16 cells. Fig. 2A demonstrates that the additional expression of E6 in N/Tert-1 cells results in significantly increased cellular proliferation, as has been described (52). However, introduction of E6 into HFK+HPV16 resulted in an attenuation of cell growth (Fig. 2B). Because E6 possesses several mechanisms for regulating cellular proliferation independent of p53 degradation, we attempted to isolate these other mechanisms by expressing an E6 mutant unable to promote degradation of p53 but retaining all other known functions. The "8S9A10T" mutant (designated E6Δp53 in this study for clarity) is deficient in p53 binding but can still immortalize cells and activate telomerase as efficiently as wild-type E6 (53).
In this mutant, residues Arg 8, Pro 9, and Arg 10 are replaced with Ser, Ala, and Thr, respectively. This mutant did not have a deleterious effect on cell growth, indicating that it is E6 targeting of p53 that attenuates cellular proliferation. Additionally, we found that these proliferation rates inversely correlated with senescence levels (Fig. 2C and D). In the HFK+HPV16+E6 cells, we noticed that over time the cells began proliferating once again. To determine whether the recovered cells had a restoration of p53 protein levels, we carried out Western blots of HFK+HPV16+E6 cells at different stages following E6 introduction (Fig. 2E). Lane 4 demonstrates that there is an initial reduction in p53 protein levels in these cells immediately following selection compared with control cells (compare lane 4 with lane 3). However, following 13 days of culturing (when we noticed proliferation begin to return to that of the control cells), there is a restoration of p53 protein expression (compare lane 7 with lane 4). These results suggest that reduction of p53 protein may lead to growth attenuation and enhanced senescence of HFK+HPV16 cells. They also suggest that restoration of p53 likely helps promote the renewed growth of the HFK+HPV16+E6 cells. We monitored the exogenous E6 RNA levels (Fig. 2F). There is a clear reduction in the E6 RNA expressed from the exogenous vector between days 0 and 13, correlating with the restoration of p53 protein expression and cellular proliferation. When we analyzed E6 protein expression via Western blot, while we did not notice an appreciable change in E6 protein levels, we found that there was a significant increase of E6Δp53 expression compared to wild-type E6 at both time points (Fig. 2G). This supports our claim that expression of E6 is more deleterious to growth than E6Δp53 in HFK+HPV16, and that this is likely due to p53 degradation. Overall, these results demonstrate that p53 is expressed in HPV16-immortalized cells and that this expression may be critical for their continued proliferation. We next moved on to investigate possible reasons for the requirement for p53 expression in HFK+HPV16 cells.

FIG 2 legend: p53 reduction via introduction of full-length HPV16 E6 reduces cellular proliferation in HPV16-immortalized foreskin keratinocytes. (A) 11-day growth curve of N/Tert-1 cells expressing exogenous HPV16 E6 compared to empty vector. (B) 13-day growth curve of human foreskin keratinocytes immortalized by HPV16 and stably expressing exogenous full-length E6, mutant E6 that does not bind and degrade p53 (E6Δp53), or GFP control vector. (C) Senescence staining of cells in B at day 11; arrows indicate positively staining cells. (D) Quantification of senescence staining in C as percent positively stained per field. (E) Western blot analysis of p53 in HFKs with exogenous E6 and E6Δp53 expression following transfection of E6 plasmids (day 0) and after growth rate recovery of HFK+HPV16+E6 (day 13); GAPDH was used as an internal loading control. (F) RT-qPCR analysis of exogenous GFP, E6, and E6Δp53 expression at day 0 and day 13 using primers against the FLAG-HA tag; relative quantity was calculated by the ΔΔCT method using GAPDH as an internal control, with Bonferroni correction applied when applicable. (G) Western blotting of the indicated extracts using a FLAG antibody (the E6 is double-tagged with HA and FLAG).
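The ΔΔCT quantification named in the Fig. 2F legend is the standard relative-expression calculation; a minimal sketch (the Ct values are invented for illustration):

```python
def ddct_fold_change(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative quantity by the delta-delta-Ct method:
    fold change = 2^-((Ct_target - Ct_ref)_sample - (Ct_target - Ct_ref)_control)."""
    d_sample = ct_target_sample - ct_ref_sample
    d_control = ct_target_control - ct_ref_control
    return 2 ** -(d_sample - d_control)

# Example: E6 vs GAPDH at day 13 relative to day 0 (numbers illustrative)
print(round(ddct_fold_change(27.5, 18.0, 25.0, 18.2), 2))
# ~0.15, i.e. reduced E6 RNA at day 13 relative to day 0
```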
Disruption of the p53 interaction with the HPV16 E2 protein attenuates cell growth and blocks the viral life cycle. A known but relatively understudied interaction of p53 with HPV16 is its direct physical interaction with E2 (43). We generated an E2 mutant predicted not to interact with p53 by site-directed mutagenesis: residues aspartate 388, tryptophan 341, and aspartate 344 were all mutated to alanine within the E2 plasmid, abolishing the ability of E2 to interact with p53 (E2(-p53)). We generated stable N/Tert-1 cell lines expressing this mutant, as we have done for wild-type E2 (E2-WT) (43). There was robust, stable expression of E2-WT and E2(-p53) in N/Tert-1 cells (Fig. 3A, lane 3). Immunoprecipitation with a p53 antibody brought down p53, and E2-WT co-immunoprecipitated with p53 while E2(-p53) did not (Fig. 3B, compare lanes 2 and 3). To demonstrate that E2(-p53) was functional, we carried out transcriptional studies in N/Tert-1 cells. Because p53 binding takes place in the DNA-binding domain (DBD) of E2, we confirmed that the mutant E2 retained DNA-binding function. Both E2-WT and E2(-p53) repressed transcription from the HPV16 long control region (LCR) efficiently and comparably (Fig. 3C). We also measured the transcriptional activation function of E2-WT and E2(-p53) (Fig. 3D). While E2(-p53) is able to activate transcription, it was significantly compromised in this function compared with E2-WT (compare lanes 5-7 with lanes 2-4). We conclude from these experiments that E2(-p53) is nuclear and able to bind its DNA target sequences, but that its transcriptional activation property (though not repression) is attenuated.

We further explored the DNA-binding and p53-interaction properties of the E2(-p53) mutant. The experiment in Fig. 3A and B was repeated, and the pull-down of the E2 proteins by p53 was quantitated (normalized to the input protein levels). The interaction between E2 and p53 was significantly reduced with the E2(-p53) mutant compared with E2-WT (Fig. 4A). Given that the E2(-p53) protein was a poorer transactivator than E2-WT (Fig. 3D), we wanted to confirm that this was not due to a reduction in DNA-binding properties. To do this, we transfected either pHPV16LCR-luc (the luciferase plasmid used in the repression assays in Fig. 3C) or ptk6E2-luc (the plasmid used in the transcriptional activation assays in Fig. 3D) into N/Tert-1+Vec, N/Tert-1+E2-WT, or N/Tert-1+E2(-p53) cells. Three days following transfection, chromatin was prepared from the transfected cells and E2 chromatin immunoprecipitation (ChIP) assays were carried out, as we have described previously (54,55). Fig. 4B and C demonstrate that E2-WT and E2(-p53) show no difference in their ability to bind pHPV16LCR-luc or ptk6E2-luc, respectively. Recently we demonstrated that E2-WT can bind to and repress transcription from the TWIST1 promoter (20). Both E2-WT and E2(-p53) bound equivalently to this endogenous promoter in N/Tert-1 cells (Fig. 4D). Next, we investigated the DNA replication properties of E2(-p53). Ordinarily we perform these assays in C33a cells, but these have a mutant p53 protein; therefore, we used U2OS cells (which have wild-type p53), which we have demonstrated support E1-E2-mediated DNA replication (56). Fig. 3E demonstrates that the E2(-p53) mutant has a compromised interaction with p53 in U2OS cells compared with E2-WT, and Fig. 3F illustrates that E2(-p53) and E2-WT have similar DNA replication properties.
Moreover, we confirmed that this mutant does not have a disrupted affinity for the E1 helicase. Overall, these results demonstrate that the DNA-binding properties of E2(-p53) are not compromised.

Having confirmed that the E2(-p53) mutant was functional, we introduced the identical mutations that abrogate the E2-p53 interaction into the entire HPV16 genome (HPV16(-p53)). We introduced the wild-type and mutant HPV16 genomes into two independent primary human foreskin keratinocyte populations and grew each donor as pools. We recently used these methods to investigate the role of the E2-TopBP1 interaction in the viral life cycle (57). Both the wild-type and mutant genomes efficiently immortalized both HFK donor cell populations. We carried out Southern blotting on SphI-cut DNA (SphI is a single cutter for the HPV16 genome) (Fig. 3H). To further characterize the status of the genomes in these cells, we used exonuclease V assays (this assay is based on the fact that episomal HPV16 genomes are resistant to exonuclease digestion) (58,59). This assay demonstrated that the viral DNA in the immortalized donor cell lines retained a predominantly episomal status, irrespective of whether the viral genomes were wild-type or HPV16(-p53) (Fig. 3I). It is noticeable in Fig. 3H that there is a wide range of viral genome copy numbers in the immortalized HFK. In Donor 1 there is a high level of HPV16-WT DNA with a reduced level of HPV16(-p53), although the overnight exposure demonstrates the robust presence of both viral genomes in the immortalized cells. In Donor 2 there was a much-reduced level of HPV16-WT DNA compared with HPV16(-p53), the opposite of Donor 1. In addition, there is a small deletion in the HPV16-WT DNA of Donor 2, as demonstrated in the lower panel (the band is less than 8 kbp). However, all of the viral genomes are predominantly episomal in all of the lines, as demonstrated in Fig. 3I. In the growth and life cycle studies described in Fig. 5, 6, and 7, the cells containing the HPV16(-p53) genomes behave both very similarly to and very differently from the HPV16-WT cells, demonstrating that the differences observed are not due to the variation in copy number.

FIG 3 legend (fragments): For E2(-p53), residues W341, D344, and D338 were mutated to alanine as previously described (42,43). (B) Co-immunoprecipitation pull-down of E2 using a polyclonal antibody against p53. (C) HPV16 long control region repression assay of wild-type E2 and E2(-p53); N/Tert-1 cells were transiently transfected with 1 µg of pHPV16-LCR-luciferase reporter plasmid along with 10 ng, 100 ng, or 1000 ng of E2 or E2(-p53) plasmid. (D) E2 transcriptional activity assay of wild-type E2 and E2(-p53); as in the LCR repression assay, N/Tert-1 cells were transiently transfected with 1 µg of pTK6E2-luciferase reporter plasmid along with increasing amounts of E2 wild-type and E2(-p53) plasmids. For (C) and (D), relative luminescence units were calculated by normalizing absolute luminescence readouts to input protein concentration. (E) U2OS cells stably expressing E2-WT and E2(-p53) were generated and a p53 co-immunoprecipitation carried out. (H) An overexposure of this blot indicated a band in Donor 2 wild-type cells migrating at around 7.5 kbp, indicating that part of the genome may have been lost during immortalization; PCR demonstrates that viral DNA is present in these cells, and they are immortalized. With Donor 1 there is less DNA with the mutant genome than with the wild type, the opposite of Donor 2; therefore, the mutation did not trend toward influencing the levels of viral DNA in the immortalized HFK. (I) Exonuclease V digestion assay to determine viral genome status. GAPDH was used as the reference, with the ΔCt for GAPDH called 100% degradation; the resistance of both mitochondrial (mito) DNA and HPV16 (E6) DNA to degradation was then estimated. In all cases the HPV16 DNA is predominantly episomal.
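The luminescence normalization named in the Fig. 3 legend (panels C and D) is a simple per-protein scaling; a trivial sketch (the readouts below are invented):

```python
def relative_luminescence(raw_rlu: float, protein_ug: float) -> float:
    """Normalize a raw luciferase readout to the protein input of the lysate,
    so wells with different cell recovery remain comparable."""
    return raw_rlu / protein_ug

# Example: LCR reporter alone vs. reporter plus E2 (E2 represses the LCR)
print(relative_luminescence(150_000, 25))  # 6000.0 RLU/ug, reporter alone
print(relative_luminescence(40_000, 24))   # ~1666.7 RLU/ug, with E2
```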
As an example, if the ΔCt for GAPDH was 10 following exonuclease treatment, and the ΔCt for mito and E6 was 1, then the latter were estimated as 90% episomal DNA (mitochondria have circular genomes that are resistant to the exonuclease); this estimate is sketched in code at the end of this passage. "Low pass" indicates low passage, 7 or fewer; "high pass" indicates high passage, 12 or greater. This demonstrates that, even following prolonged culture, there is no shift toward integration of the HPV16 genomes. The results shown are from duplicate or triplicate experiments, and standard error bars are shown.

FIG 4 legend (fragments): There was a significant reduction in the ability of E2(-p53) to interact with p53 compared with E2-WT, as indicated by *, P-value < 0.05. (B) The indicated N/Tert-1 cell lines were transfected with 1 µg of pHPV16-LCR-luc. Three days following transfection, chromatin was prepared and E2 chromatin immunoprecipitation was carried out, followed by detection of the luciferase gene present in pHPV16-LCR-luc. Results were standardized to input chromatin and then normalized to Vec = 1. There was a significant increase in signal for both E2-WT and E2(-p53) binding compared with Vec, but no significant difference between E2-WT and E2(-p53) (^, P-value < 0.05). (C) The indicated N/Tert-1 cell lines were transfected with 1 µg of ptk6E2-luc. Three days following transfection, chromatin was prepared and E2 chromatin immunoprecipitation was carried out, followed by PCR detection of the luciferase gene. Results were standardized to input chromatin and then normalized to Vec = 1. There was a significant increase in signal for both E2-WT and E2(-p53) binding compared with Vec, but no significant difference between E2-WT and E2(-p53) (^, P-value < 0.05). (D) E2-WT and E2(-p53) have similar DNA-binding properties at the endogenous TWIST1 promoter, and both signals are significantly higher than that obtained in Vec control cells (^, P-value < 0.05).

Next, we investigated the expression of markers relevant to HPV infection in HFKs. Fig. 5A demonstrates that p53 levels are similarly reduced in HFK+HPV16 and HFK+HPV16(-p53) cells compared with N/Tert-1 cells (compare lanes 2-5 with lane 1). For comparison, cells immortalized with an E6/E7 expression vector had almost no p53 expression (lane 6), likely due to the inability of this vector's E6 to be spliced to E6* variants. To further characterize these cell lines, we investigated whether the DNA damage response is turned on, as HPV infections activate both the ATR and ATM pathways. We investigated the phosphorylation status of CHK1 and CHK2 as surrogate markers for activation of these DNA damage response kinases (Fig. 5B). Compared with N/Tert-1 cells, there is an overall increase in CHK1 and CHK2 levels in cells immortalized with HPV16, HPV16(-p53), or E6/E7 expression. CHK1 and CHK2 phosphorylation is also elevated in all of the HPV16-positive cells compared with N/Tert-1 cells.
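That episomal-fraction heuristic can be written out directly; a minimal sketch of the estimate exactly as the legend describes it (a linear scaling of ΔCt values, not a general-purpose assay calculation):

```python
def percent_episomal(dct_target: float, dct_gapdh: float) -> float:
    """Estimate percent episomal DNA after exonuclease treatment.
    The GAPDH delta-Ct is defined as 100% degradation; a target whose
    delta-Ct is a fraction of that is scored as proportionally resistant."""
    return max(0.0, (1 - dct_target / dct_gapdh) * 100)

# Worked example from the text: GAPDH dCt = 10, HPV16 E6 dCt = 1 -> 90% episomal
print(percent_episomal(1, 10))    # 90.0
print(percent_episomal(0.5, 10))  # 95.0, e.g. mitochondrial DNA
```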
It is important to note that E6 and E7 immortalization of HFK induced phosphorylation of CHK1 but not CHK2, compared with the entire genome (lane 6). This is likely because the ATM pathway is largely activated by viral replication rather than by the viral oncogenes E6 and E7, as we have previously reported (60). Overall, these results suggest that markers of HPV16 infection are activated in HFK cells immortalized with HPV16, irrespective of the ability of p53 to bind E2.

FIG 5 legend: Generation and characterization of HPV16(-p53)-immortalized human foreskin keratinocytes (HFKs). p53 protein expression in two independent HFK donors immortalized by wild-type HPV16 (lanes 3 and 5) and HPV16(-p53) (lanes 2 and 4); N/Tert-1 and HFK immortalized by E6 and E7 are provided for reference (lanes 1 and 6, respectively). All lines were grown as pools. Activation of the ATR and ATM DNA-damage pathways in immortalized HFKs: ATR and ATM activation by HPV16 leads to phosphorylation of checkpoint kinases 1 and 2, respectively, which serve as markers for HPV infection and replication.

Even though the HFK+HPV16(-p53) cells had markers indicative of HPV16 immortalization, we noticed an aberrant growth phenotype in both foreskin donor cell populations (Fig. 6A and B). There was an initial enhanced proliferation of the HFK+HPV16(-p53) cells compared with HFK+HPV16. However, around the 3-4 week mark, the HFK+HPV16(-p53) cells began to slow their growth and eventually stopped proliferating. To determine the mechanism of this attenuation of cell growth, we investigated senescence in N/Tert-1, HFK+HPV16, and HFK+HPV16(-p53) cells by staining for beta-galactosidase following the end of the growth curve in donor 2, where the HFK+HPV16(-p53) cells had attenuated proliferation (day 40). There was a significantly increased number of senescent cells in the p53-mutant cultures, and this was quantitated (Fig. 6C). Senescence can be induced by increased DNA damage, particularly double-strand breaks (DSB) (25,26). Because CHK1 and CHK2 pathway activation was not noticeably different between HFK+HPV16 and HFK+HPV16(-p53), we decided to look at DSBs more directly using single-cell gel electrophoresis (COMET assay) in low-passage-number HFK donor 2 cells. Because the HFK+HPV16(-p53) cells have attenuated proliferation at higher passage, we utilized early-passage donor 2 cells, corresponding to day 3 on the growth curve in Fig. 6A, for these experiments. As expected, the expression of wild-type or mutant HPV16 genomes in HFKs led to increased formation of DSBs, as indicated by the olive tail moment (OTM), compared to HPV-negative N/Tert-1 cells (61) (Fig. 6D). However, the mutant HFKs consistently exhibited larger OTM values than HFK+HPV16 (Fig. 6D and E). A sketch of the conventional OTM calculation follows the figure legend below.

FIG 7 legend (fragment; raft cultures of the cells described in Fig. 5): HFKs were seeded onto collagen matrices at densities of 1 × 10^6 (upper panels) and 2 × 10^6 (lower panels). (B) The experiment in A was repeated in a second independent HFK donor, and average raft areas were calculated for each donor using a Keyence imaging system. (C) HFK rafts stained with the indicated antibodies as markers of keratinocyte differentiation. (D) The DNA damage and viral replication marker γ-H2AX was stained for in HPV16 and HPV16(-p53) HFK rafts. (E) γ-H2AX staining was repeated in a second HFK donor and quantified using a Keyence imaging system.
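The olive tail moment used in these COMET assays is conventionally defined as the percentage of DNA in the comet tail multiplied by the distance between the head and tail intensity centroids; the paper does not restate the formula, so the definition below is our assumption:

```python
def olive_tail_moment(tail_dna_percent: float, centroid_distance: float) -> float:
    """Olive tail moment: (tail DNA %, 0-100) x head-to-tail centroid distance,
    scaled by 100 so the percentage acts as a fraction (conventional definition)."""
    return tail_dna_percent * centroid_distance / 100

# Example: 30% of DNA in the tail, centroids 40 units apart
print(olive_tail_moment(30, 40))  # 12.0 arbitrary units; larger = more DSBs
```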
As the expression of full-length E6 from an exogenous vector attenuates the growth of HFK+HPV16 wild-type cells (Fig. 2), we rationalized that expression of E6 should not alter the growth of HFK+HPV16(-p53) cells. Stable expression of exogenous full-length E6 or the E6Δp53 mutant had no additional effect on the proliferation of low-passage-number HFK+HPV16(-p53) cells, illustrating that the drastic differences in proliferation are likely due to the E2-p53 interaction (Fig. 6F).

HFK+HPV16(-p53) cells have an aberrant life cycle in differentiating epithelium. We organotypically rafted HFK+HPV16 and HFK+HPV16(-p53) cells. Both lines were placed on collagen plugs at early passage, when the HFK+HPV16(-p53) cells retained proliferative capacity. Due to the large difference in growth rates between the wild-type and mutant cells, the original plating was performed with both 1 × 10^6 and 2 × 10^6 cells to promote production of a monolayer on the collagen plugs prior to lifting to the liquid-air interface for differentiation. Fig. 7A demonstrates an aberrant differentiation process in the HFK+HPV16(-p53) cells compared with HFK+HPV16 cells at both cell densities. It is noticeable that at the lower cell density (1 × 10^6) there was a failure to form a monolayer prior to induction of differentiation (as evidenced by gaps between keratinocyte cell clusters on the collagen plug). Using a seeding density of 2 × 10^6 eliminated the formation of gaps but did not improve the proliferation. A representative of two independent donors is shown; both donors had identical phenotypes. Fig. 7B quantitates the results from two independent rafts from two independent donors; the mutant genomes have dramatically lower raft area compared with wild-type genomes. To investigate whether differentiation has occurred in these cells, we stained with involucrin and keratin 10 (Fig. 7C). The mutant genome cells stained positive for both differentiation markers, demonstrating that, even though raft growth is markedly attenuated, differentiation still occurs. We also stained for viral replication using the DNA-damage marker γ-H2AX. Recently we reported that an E2 mutant that failed to interact with TopBP1 results in degradation of E2 during organotypic rafting; this degradation would block viral replication, and indeed these cells had no γ-H2AX staining (57). This demonstrates that γ-H2AX staining indicates the occurrence of viral replication. Fig. 7D demonstrates that there is abundant nuclear γ-H2AX staining throughout HFK+HPV16 cells, indicating replication is occurring. The HFK+HPV16(-p53) cells also support viral replication, although there is a reduction in the number of rafted cells staining positively for γ-H2AX (Fig. 7D).

DISCUSSION

The HPV E2 protein is essential for viral genome replication and segregation of viral episomes into daughter cells following cell division, and it can transcriptionally regulate both virus and host genomes (12, 19, 20). E2 interacts with a variety of host factors to promote progression of the viral life cycle, many of which are essential, such as interactions with TopBP1 and BRD4 (12, 19, 23, 55, 57). In this report, we propose that E2 binding to p53 is also an essential interaction, as abrogation of the interaction leads to catastrophic failure of the viral life cycle.
In the classical high-risk HPV model, upon initial infection, the viral oncogenes E6 and E7 inhibit and degrade the tumor suppressor proteins p53 and pRb, respectively, promoting hyperproliferation, unregulated DNA replication, mutation accumulation and potentially eventual carcinogenesis. Therefore, immortalization of cell lines can be achieved with overexpression vectors of E6 and E7 (Fig. 1A, Lanes 4 and 8). Previous studies suggest that E6 splice variants and their action on E6-E6AP-p53 complex disruption are cell cycle dependent (38). In HPV18 cell lines, E6*I shows marked upregulation and restored p53 expression during G2/M (38). We have previously illustrated that E2 is stabilized during mitosis, which is important for its association with TopBP1 and its role as a segregation factor (57). It is entirely possible that the cell cycle-mediated p53 restoration corresponds with E2 stabilization, allowing these proteins to interact, and this could play an important role in genome segregation. E2 can regulate host transcription in a multitude of ways. We recently reported that E2 can epigenetically repress TWIST1 at the histone level, inhibiting EMT and promoting a less aggressive cellular phenotype (20). E2 can also promote the recruitment of DNA methyltransferase 1 to interferon response genes, resulting in DNA base methylation and global innate immunity downregulation (62). It is currently unclear how E2 recruits epigenetic modifiers to these genes, and p53 may play an important role. DNA methyltransferases (DNMTs) are often part of large multimeric complexes and use transcription regulatory proteins to help target specific genes undergoing epigenetic silencing (63, 64). p53 is also known to interact with DNMT1, resulting in the methylation of antiapoptotic genes (65). It is possible that the interaction between E2 and p53 is important for the rerouting of DNMTs to different genes whose regulation is important for a healthy viral life cycle. It is also noticeable that the mutant E2 has an attenuated ability to activate transcription (Fig. 3), indicating that regulation of host gene transcription by E2 may require co-operation with p53 in some cases. These mutations in E2 would also potentially prevent interaction of p53 with E8^E2. This protein controls replication of episomal viral genomes, and p53 may play a role in this E8^E2 function that is disrupted by the p53 interaction mutations (66-70). Future studies will focus on determining whether the E8^E2 interaction with p53 regulates the function of the viral protein. The results from Fig. 6D suggest that additional double-strand breaks play a role in the enhanced damage and altered proliferation of HFK+HPV16(-p53) mutant cells compared to wild-type immortalized HFKs. HPV uses homologous recombination (HR) factors to assist in viral replication (71-73). Conversely, p53 binds to replication protein A (RPA), resulting in repression of HR, reducing DSB repair and promoting apoptosis during catastrophic genome instability (27, 29). It is possible that E2 helps regulate this activity of p53, and an inability to do so results in the accumulation of DSBs seen in Fig. 6D. In conclusion, this report indicates that p53 expression is retained in HPV16-positive cell lines and tumor samples under a variety of conditions. Knockdown of this residual p53 by full-length E6 results in a significant reduction in proliferation and enhanced senescence in cells immortalized with HPV16, which we attribute to loss of E2 interaction with p53.
Human foreskin cells immortalized by HPV16 in which E2 can no longer bind to p53 exhibit aberrant phenotypes, including dysregulated proliferation, enhanced levels of DSBs and overall failure of the viral life cycle during organotypic raft culturing. Due to the importance of p53 in the context of HPV-related cancers, as well as the profound phenotypes demonstrated in this report, further investigation of the interaction between E2 and p53 is warranted.

MATERIALS AND METHODS

Cell culture. N/Tert-1 cells and the head-and-neck cancer lines UMSCC47 and UMSCC104 were cultured as previously described (20, 62, 74-76). Immortalization and culturing of human foreskin keratinocytes with HPV16 are described below. All cell lines were grown as pools and incubated at 37°C and 5% CO2, with media changed every 3 days. For cisplatin treatment, cells were incubated with the indicated concentrations of drug dissolved in DMF, or with DMF vehicle control, for 24 h.

Immortalization of human foreskin keratinocytes (HFK). The HPV16 mutant genome (HPV16(-p53), which contained an E2 unable to bind p53) was generated and sequenced by Genscript (42, 44, 77). Residues aspartic acid 338, tryptophan 341 and aspartic acid 344 were all mutated to alanine, abolishing the ability of E2 to interact with p53, in a manner similar to the generation of the E2(-p53) plasmid described below. The HPV16 genome was removed from the parental plasmid using SphI, and the viral genomes were isolated, recircularized using T4 ligase (NEB) and transfected into early passage HFK from three donor backgrounds (Lifeline Technology), alongside a G418 resistance plasmid, pcDNA. Cells underwent selection in 200 μg/mL G418 (Sigma-Aldrich) for 14 days and were cultured on a layer of J2 3T3 fibroblast feeders (NIH), which had been pretreated with 8 μg/mL mitomycin C (Roche). Throughout the immortalization process, HFK were cultured in Dermalife-K complete media (Lifeline Technology). The experiments in Fig. 6B to F were performed using donor 2.

Western blotting. Protein from cell pellets was extracted with 2× pellet volume protein lysis buffer (0.5% Nonidet P-40, 50 mM Tris [pH 7.8], and 150 mM NaCl) supplemented with protease inhibitor (Roche Molecular Biochemicals) and phosphatase inhibitor cocktail (Sigma). Protein extraction from patient-derived xenografts was performed as previously described (45, 46). The cells were lysed on ice for 30 min, followed by centrifugation at 18,000 rcf (relative centrifugal force) for 20 min at 4°C. Protein concentration was estimated colorimetrically using a Bio-Rad protein assay, and 25 μg of protein with an equal volume of 2× Laemmli sample buffer (Bio-Rad) was denatured at 70°C for 10 min. The samples were run on a Novex WedgeWell 4% to 12% Tris-glycine gel (Invitrogen) and transferred onto a nitrocellulose membrane (Bio-Rad) using the wet-blot method at 30 V overnight. The membrane was blocked with Li-Cor Odyssey blocking buffer (PBS) diluted 1:1 vol/vol with PBS for 1 h at room temperature and then incubated with the specified primary antibody in Li-Cor Odyssey blocking buffer (PBS) diluted 1:1 with PBS. Afterwards, the membrane was washed with PBS supplemented with 0.1% Tween20 and further probed with the Odyssey secondary antibodies (IRDye 680RD Goat anti-Rabbit IgG (H+L), 0.1 mg, or IRDye 800CW Goat anti-Mouse IgG (H+L), 0.1 mg) in Li-Cor Odyssey blocking buffer (PBS) diluted 1:1 with PBS at 1:10,000 for 1 h at room temperature where applicable.
After washing with PBS-Tween, the membrane was imaged using the Odyssey CLx Imaging System, and ImageJ was used for quantification.

Plasmids. The following plasmids were used in the completion of these studies: pMSCV-N-FLAG-HA-GFP, pMSCV-N-FLAG-HA-HPV16E6, and pMSCV-IP-N-FLAG-HA-16E6 8S9A10T ("E6Δp53"), in which residues Arg 8, Pro 9 and Arg 10 are replaced with Ser, Ala and Thr, respectively. Wild-type 16E2 (E2-WT) or E2(-p53) (mutated residues W341A, D344A, D338A) were cloned into the pcDNA3.0 vector for confirmation of the p53 interaction in N/Tert-1 cells. pcDNA3.0 was used as the empty vector control.

Real-time qPCR. RNA was isolated using the SV Total RNA isolation system (Promega) according to the manufacturer's instructions. 2 μg of RNA was reverse transcribed into cDNA using the high-capacity reverse transcription kit (Applied Biosystems). The PowerUp SYBR green master mix (Applied Biosystems) was used along with cDNA and gene-specific primers, and real-time PCR was performed using a 7500 Fast real-time PCR system as previously described (20, 62, 74). Expression was quantified as relative quantity over GAPDH using the 2^(-ΔΔCT) method. Primers used are as follows: FLAG-HA Tag fwd 5'-GACTACAAGGATGACGATG-3', FLAG-HA Tag rev 5'-GCGTAATCTGGAACATCG-3'.

Immunoprecipitation. Primary polyclonal antibody against p53 (Invitrogen; PA5-27822) or an HA-tag antibody (used as a negative control) was incubated in 200 μg of cell lysate (prepared as described above), made up to a total volume of 500 μL with lysis buffer (0.5% Nonidet P-40, 50 mM Tris [pH 7.8], and 150 mM NaCl) supplemented with protease inhibitor (Roche Molecular Biochemicals) and phosphatase inhibitor cocktail (Sigma), and rotated at 4°C overnight. The following day, 50 μL of prewashed protein A-Sepharose beads per sample was added to the lysate/antibody solution and rotated for 4 h at 4°C. The samples were gently washed with 500 μL lysis buffer by centrifugation at 1,000 rcf for 2-3 min. This wash was repeated 4 times. The bead pellet was resuspended in 4× Laemmli sample buffer (Bio-Rad), heat denatured and centrifuged at 1,000 rcf for 2-3 min. Proteins were separated using an SDS-PAGE system and transferred onto a nitrocellulose membrane before probing for the presence of E2 or p53, as per the Western blotting protocol. For the E1-E2 immunoprecipitation in Fig. 1 and 3, HA-HPV16E1 was transfected into U2OS cells stably expressing E2, E2(-p53) or empty vector. 48 h later, the cells were harvested for protein as described above, and anti-HA was used to co-immunoprecipitate E2 with HA-E1. Anti-FLAG antibody was used as the negative antibody control in this experiment.

Chromatin immunoprecipitation. N/Tert-1 cells expressing either pcDNA3.0 (vector control), wild-type E2 (E2-WT) or E2(-p53) were transfected with 1 μg of pHPV16-LCR-luc or 1 μg of pTK6E2-luc, as previously described (20).

Southern blotting. Total cellular DNA was extracted by proteinase K-sodium dodecyl sulfate digestion followed by a phenol-chloroform extraction method. 5 μg of total cellular DNA was digested with either SphI (to linearize the HPV16 genome) or HindIII (which fails to cut the HPV16 genome). All digestions included DpnI to ensure that all input DNA was digested. All restriction enzymes were purchased from NEB and utilized as per the manufacturer's instructions. Digested DNA was separated by electrophoresis on a 0.8% agarose gel, transferred to a nitrocellulose membrane, and probed with radiolabeled (32P) HPV16 genome as previously described. This was then visualized by exposure to film for 1 to 24 h.
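As a companion to the qPCR description above, the 2^(-ΔΔCT) relative-quantification step can be written out explicitly. The Python sketch below is illustrative only (it is not from the paper); the Ct values are invented placeholders, and GAPDH is the reference gene, as stated in the methods.

    def relative_expression(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
        # dCt normalizes the gene of interest to the reference gene (GAPDH);
        # ddCt then normalizes the sample to the control condition.
        dct_sample = ct_gene - ct_ref
        dct_control = ct_gene_ctrl - ct_ref_ctrl
        return 2 ** -(dct_sample - dct_control)

    # Placeholder Ct values: gene 24.1 / GAPDH 18.0 in the sample,
    # gene 26.5 / GAPDH 18.2 in the control condition.
    print(relative_expression(24.1, 18.0, 26.5, 18.2))  # ~4.6-fold vs control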
Images were captured from an overnight-exposed phosphor screen by a GE Typhoon 9410 and quantified using ImageJ.

Exonuclease V assay. PCR-based analysis of viral genome status was performed using methods described by Myers et al. (59). Briefly, 20 ng genomic DNA was either treated with exonuclease V (RecBCD, NEB) in a total volume of 30 μL or left untreated for 1 h at 37°C, followed by heat inactivation at 95°C for 10 min. 2 ng of digested/undigested DNA was then quantified by real-time PCR using a 7500 FAST Applied Biosystems thermocycler with SYBR green PCR Master Mix (Applied Biosystems) and 100 nM primer in a 20 μL reaction. Nuclease-free water was used in place of the template as a negative control. The following cycling conditions were used: 50°C for 2 min, 95°C for 10 min, followed by 40 cycles.

Senescence staining. 7.5 × 10^4 cells were seeded in 6-well plates. The following day, cells were stained for senescence using the Cell Signaling Technology Senescence β-Galactosidase Staining Kit (#9860) according to the manufacturer's instructions. Randomly selected images were taken using the Keyence imaging system at 10×. Positively stained cells were counted by a blinded observer, and the average number of positively stained cells per field was calculated. The senescence staining in Fig. 2C corresponds with day 11 on the growth curve in Fig. 2A and B. In Fig. 6, HFK lines were stained at the last point in the growth curve in donor 2, which was day 40.

Single-cell gel electrophoresis (COMET) assay. 1 × 10^4 cells were plated in a 24-well plate with 1 mL media 1 day prior to harvest. The next day, cells were trypsinized and resuspended in a mixture of 0.5% wt/vol low molecular weight agarose (Lonza, cat. no. 50101) and PBS at a ratio of 10:1. The suspension was immediately pipetted onto Trevigen COMET Slides (4250-004-03) and allowed to dry for 30 min at 4°C. Slides underwent lysis for 90 min at 4°C in the dark (lysis buffer: 10 mM Tris, 100 mM EDTA, 2.5 M NaCl, 1% Triton X-100, 10% DMSO, titrated to pH 10.0). Afterwards, slides were placed in alkaline buffer for 25 min at 4°C in the dark (alkaline buffer: 1 mM EDTA, 200 mM NaOH, pH > 13.0). Slides were transferred to an agarose gel electrophoresis box filled with additional alkaline buffer. Electrophoresis was performed at 25 V for 20 min at room temperature in the dark. Slides were then washed 2× in dd (double distilled) H2O for 5 min at RT and then placed in neutralization buffer for 20 min at RT in the dark (neutralization buffer: 400 mM Tris-HCl titrated to pH 7.5). Neutralized slides were then left to dry at 37°C in the dark. Dried slides were stained with DAPI (1:10,000 in ddH2O) for 15 min at RT, then washed 2× with ddH2O for 5 min. Stained and rinsed slides were left to dry overnight. Slides were imaged using the Keyence imaging system at 20×, with >5 images taken per replicate. Quantification of olive tail moments (OTM) was achieved using the CASPLab COMET assay imaging software by Końca et al., 2003 (78).

Organotypic raft culture. Keratinocytes were differentiated via organotypic raft culture as described previously (62, 76, 79). Briefly, cells were seeded onto type 1 collagen matrices containing J2 3T3 fibroblast feeder cells. Cells were cultured to confluence atop the collagen plugs, lifted onto wire grids and cultured in cell culture dishes at the air-liquid interface. Media was replaced on alternating days. Following 14 days of culture, rafted samples were fixed with formaldehyde (4% vol/vol) and embedded in paraffin.
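For readers unfamiliar with the comet-assay metric quantified above, the olive tail moment is conventionally defined as the distance between the intensity centroids of the comet head and tail multiplied by the fraction of DNA in the tail. The Python sketch below illustrates that conventional definition; it is not code from CASPLab, and the input values are invented placeholders.

    def olive_tail_moment(head_centroid, tail_centroid, pct_dna_in_tail):
        # Conventional OTM: |tail centroid - head centroid| * fraction of DNA in tail.
        return abs(tail_centroid - head_centroid) * (pct_dna_in_tail / 100.0)

    # Placeholder comet: centroids 12 px apart with 35% of the DNA in the tail.
    print(olive_tail_moment(50.0, 62.0, 35.0))  # 4.2 (arbitrary units)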
Multiple 4 μm sections were cut from each sample. Sections were stained with hematoxylin and eosin (H&E), and others were prepared for immunofluorescent staining via heat-induced epitope retrieval (HIER). Fixing and embedding services in support of the research project were generated by the VCU Massey Cancer Center Cancer Mouse Model Shared Resource, supported, in part, with funding from NIH-NCI Cancer Center Support Grant P30 CA016059. Fixed sections were antigen retrieved in citrate buffer and probed with the following antibodies for immunofluorescent analysis: phospho-γH2AX 1/500 (Cell Signaling Technology; 9718), involucrin 1/1000 (Abcam; ab27495), and keratin 10 1/1000 (Sigma-Aldrich; SAB4501656). Cellular DNA was stained with 4',6-diamidino-2-phenylindole (DAPI, Santa Cruz sc-3598). Microscopy was performed using the Keyence imaging system,
2022-05-25T06:23:38.082Z
2022-05-24T00:00:00.000
{ "year": 2022, "sha1": "7b5f09b4d2991c9bb692ae05a1cbabccfbfd1337", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "ASMUSA", "pdf_hash": "41e358ef13577f2917ee300db2dbd43762852525", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
238959751
pes2o/s2orc
v3-fos-license
Administrative mechanisms for resolving individual labour disputes in foreign countries. According to international labor standards, the labor administration system covers all public administration bodies responsible for and/or involved in labor administration, whether they are ministerial departments or government agencies (including semipublic, regional, or local agencies, or any other form of decentralized administration), and any institutional framework for coordinating the activities of such bodies and for consultation with, and participation of, employers and employees and their organizations. In this regard, dispute resolution mechanisms operating through administrative departments and agencies, labor inspections, and voluntary compliance are the most prominent. The purpose of the study was to conduct a comprehensive analysis of administrative mechanisms for resolving individual labor disputes in foreign countries and to draw conclusions about the effectiveness, prospects, and legal clarity of the coordination of labor disputes. In conducting the research, the author relies on foreign doctrine, the practice of the subjects involved in labor relations, and acts of foreign legislation. Research methods: a dialectical approach to the study of administrative mechanisms, which allows them to be analyzed in their practical development and functioning in the context of the coordination of labor legal relations. The comparative legal method and dialectics determined the choice of specific research methods: comparative and formal-legal. The functions, jurisdiction, and procedures of individual labor dispute resolution mechanisms and labor inspectorates are the subject of comprehensive research because of their effectiveness in protecting workers' rights. The article provides a detailed comparative legal analysis of the specifics of dispute resolution through administrative departments and agencies, the role of labor inspections/law enforcement, and access to justice for workers in unclear or hidden employment relationships. On the basis of a large array of regulatory sources, the author draws conclusions about the importance of administrative mechanisms for the proper enforcement of labor laws abroad.

Introduction

In many countries, labor administration systems play an important role in the effective organization and operation of individual labor dispute prevention and resolution systems. They are not only responsible for the mechanisms in place to prevent and resolve disputes, but also for providing free dispute resolution services, such as conciliation/mediation, and for providing a range of preventive services through information, advice, and education that encourage voluntary dispute resolution and voluntary compliance with agreements. In some countries, this includes compliance with court decisions. In many countries, such services are provided by labor departments or state administrative agencies.

Methods

The functions, jurisdiction, and procedures of individual labor dispute resolution mechanisms and labor inspectorates in the countries under consideration differ considerably. In particular, the approach to non-payment or underpayment of wages differs radically: in some countries, the recovery of unpaid wages is essentially the subject of an individual labor dispute (a civil action), while in others it becomes the subject of enforcement through the inspectorate.
The diversity of national perspectives on this issue indicates the importance of establishing a balanced relationship between dispute resolution mechanisms and labor inspectorates to ensure respect for the rule of law and the proper functioning of individual labor dispute resolution systems.

Results and discussion

Most administrative services clearly filter a large volume of individual labor disputes out of litigation and offer a free, accessible, and much quicker settlement. However, the growth and development of these services raise questions about the degree of state intervention required to prevent and resolve labor disputes, especially when collective expression mechanisms ("collective voice" mechanisms) are absent or dysfunctional, or their role in addressing individual grievances is legally limited. For a correct comparison of the administrative mechanisms under study, the following components should be distinguished.

Resolution of disputes through administrative departments and agencies

In Japan, for example, the Labor Administration offers three basic services free of charge: (a) unified counseling offices located in each prefecture that offer counseling and information services (dispute resolution options, settlement procedures, applicable laws and regulations); (b) administrative guidance; and (c) conciliation by dispute adjustment commissions (DACs) created in each prefecture and composed of three neutral experts in labor and employment law. These services appeared relatively recently, in the early 2000s, in response to a dramatic increase in the number of individual labor dispute cases in the civil courts. Dispute adjustment commissions (DACs) provide voluntary conciliation if both parties agree [1]. Employers, for example, often refuse to participate in this process. This procedure is used more often by non-standard workers (part-time workers, freight forwarders, or fixed-term contract workers) than by standard workers [2]. In the United States, the Equal Employment Opportunity Commission (EEOC) is an administrative agency that provides pre-trial mediation and conciliation for discrimination claims. The EEOC was established on July 2, 1965, and operates under Title VII of the Civil Rights Act of 1964 [3]. Most discrimination claims must first be filed with the EEOC before they can be filed in federal court. Upon the filing of a charge, an investigator is assigned and the employer is notified. Some cases are deemed not worthy of attention and are terminated almost immediately. Otherwise, many EEOC offices will invite the parties to participate in a voluntary mediation process. If the case is not successfully resolved, an investigation is conducted in which both parties are required to present information relevant to the claim. The EEOC reports that the investigation process takes almost ten months on average [4]. Once it is completed, the EEOC will determine whether there is "reasonable cause" to believe that unlawful discrimination has occurred. If not, the charge is dismissed and the worker is notified that he or she has 90 days to file suit in federal court. Statistics show a low degree of effectiveness of this administrative agency in protecting the rights of U.S. workers: in 2015, the EEOC resolved nearly 64,000 Title VII complaints, with 67% of them found to be "frivolous" and another 16% withdrawn by complainants without receiving any relief, or closed by the agency for administrative reasons [5].
Thus, on the one hand, the EEOC's mandatory labor dispute process is positive for the employee, since the commission undertakes to mediate and investigate the circumstances of the dispute; on the other hand, this body can impede access to justice, acting as a barrier to workers' filing claims in the federal courts. In Germany, claims regarding discrimination and "harassment" in the workplace have been handled by the Federal Anti-Discrimination Agency since 2006 (on the basis of the federal General Equal Treatment Act of August 14, 2006) [6]. In Spain, pre-trial administrative conciliation is mandatory for individual labor disputes in the private sector, with some exceptions for certain jurisdictions. Unjustified absence on the part of either party will result in a fine. The conciliation procedure takes no more than 10-15 minutes. In Spain, the process is used for the bureaucratic administrative registration of settlement agreements, for access to unemployment benefits, or to apply to the courts. There has been a significant increase in the use of conciliation in Spain since the reform of labor law in 1994 and the changes in administrative conciliation in 2011. In 2013, of all the individual cases referred to administrative mediation, most were abandoned or withdrawn (39.5 percent) or closed without an agreement (37 percent), and only 23.5 percent of cases were resolved [7]. The limited functioning of administrative conciliation in Spain, however, has served as an incentive for the social partners to encourage bilateral voluntary settlements, which had long been limited by legal constraints [8].

The role of labor inspections / law enforcement agencies

Regardless of the various services that may exist to provide access to dispute resolution mechanisms, there are many workers who will not pursue claims themselves, even if they work under abusive and inhumane conditions [9]. Thus, labor inspections are usually given broad powers, including the right to enter premises day and night and to impose or initiate sanctions. However, the ultimate goal of labor inspections is usually not to punish conscientious employers who are unaware of their legal responsibilities but are willing to comply with labor protection laws. The general purpose of labor inspections is to promote compliance with labor law, and enforcement actions are used primarily where necessary to achieve this goal.

Approaches to encourage voluntary compliance with labor law

Various approaches, including preventive measures, inspection visits, and "remedial" recommendations or orders, are used to encourage compliance with labor law; they give the employer the opportunity to correct the violations and at the same time speed up the process. The latter approach is consistent with informal dispute resolution in some circumstances [10]. In some of the countries described in this paragraph, such approaches include the use of conciliation/mediation, further blurring the lines between law enforcement and dispute resolution. This is especially true when complex scenarios arise in which it is not easy to clearly identify violations or to distinguish between violations and disputes [11]. However, coercive measures are applied in their full force only in cases of serious violations, abusive and exploitative working conditions, and employers' refusal to comply with the recommendations or orders of the labor inspectorate. Sometimes employers are offered the opportunity to correct violations.
When certain violations are found, the Labor Inspectorate of Japan issues administrative instructions or recommendations that require employers to correct the violations and report them to the Labor Inspectorate. Although there is a clear separation of jurisdiction between the labor inspectorate and individual labor dispute resolution procedures, the inspectors' recommendations for remediation often result in dispute resolution [12]. In Australia, Canada, Spain, and the United States, dispute resolution options are included in labor inspectorate procedures as an important step before enforcement. In Australia, the Fair Work Ombudsman (FWO) investigation process involves three steps: (1) assessment of the complaint; (2) resolution of the dispute by the Labor Ombudsman's mediators, primarily through telephone services; and (3) review of enforcement options by labor inspectors [13]. "Naming and shaming" employers is another approach used to encourage labor law compliance, including in the United Kingdom. There, it is used mainly because enforcement mechanisms exist only in very limited areas and are usually weak. But even this approach is limited in its application: in 2015, only 37 employers were "named and shamed" in a UK Government press release [14]. According to the British legal scholars Jones and Prassl, despite various efforts to encourage labor law compliance, the vast majority of minimum wage violations remain undetected [15].

Conclusion

In conclusion, it should be pointed out that both dispute resolution agencies and labor inspections are increasingly focusing on information and consultation in their services. The goal is to encourage voluntary compliance with labor law and voluntary dispute resolution. The number of inquiries suggests a broad user demand for such services. Cooperation between individual labor dispute resolution mechanisms and the labor inspectorate is also well established through counseling offices. For example, inspections in Canada and the United States often provide information in several languages. Information is also disseminated through educational events and public awareness campaigns.
2021-08-27T16:35:26.807Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "670e793d0ccd25aadf7660af17c123eb333bc3cc", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2021/29/shsconf_rudnltmrp2021_03011.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d978475a2a7a1eff2486897ac0535acaedbb8583", "s2fieldsofstudy": [ "Law", "Political Science" ], "extfieldsofstudy": [ "Business" ] }
17784000
pes2o/s2orc
v3-fos-license
Serial measurement of lipid profile and inflammatory markers in patients with acute myocardial infarction

Serum concentrations of lipids and lipoproteins change during the course of acute coronary syndrome as a consequence of the inflammatory response. The objective of this study was to evaluate the effect of acute myocardial infarction (AMI) on the levels of lipid profile components and inflammatory markers. We investigated 400 patients with AMI who were admitted within 24 h of the onset of symptoms. Serum levels of total cholesterol (TC), triglyceride (TG), low density lipoprotein (LDL) and high density lipoprotein (HDL) were determined by standard enzymatic methods, along with high sensitive C-reactive protein (hs-CRP) (latex enhanced immunoturbidimetric assay) and the cytokines interleukin (IL)-6 and IL-10 (quantitative "sandwich" enzyme-linked immunosorbent assay). The results indicate a trend of reduced TC, LDL, and HDL, and elevated TG levels, along with pro- and anti-inflammatory markers (p < 0.001), between the day 1 and day 2 serum samples of AMI patients. However, corrections in the serum levels were observed at day 7. Our results demonstrate significant variations in the mean lipid levels and inflammatory markers between days 1, 2 and 7 after AMI. Therefore, it is recommended that serum lipids be assessed within 24 hours after infarction. Early treatment of hyperlipidemia provides potential benefits. Exact knowledge regarding baseline serum lipid and lipoprotein levels, as well as their varying characteristics, can provide a rational basis for clinical decisions about lipid lowering therapy.

INTRODUCTION

For many years it was proclaimed that coronary heart disease (CHD) was not attributable to traditional risk factors in up to 50% of cases, a claim finally put to rest after careful and detailed re-analysis (Miller, 2008). Dyslipidemia is still a major risk factor for CHD. Epidemiological studies have conclusively linked high levels of total cholesterol (TC) and low-density lipoprotein cholesterol (LDL-C) and low levels of high-density lipoprotein cholesterol (HDL-C) with CHD incidence and mortality (Yokokawa et al., 2011). It is well known that early treatment of hyperlipidemia following acute myocardial infarction (AMI) provides potential benefits and reduces the morbidity and mortality of CHD. However, the levels of lipids and lipoproteins change during acute illness, which causes delays in treatment choice (Balci, 2011). During tissue necrosis, acute phase changes occur that alter lipid profile levels after acute coronary events. Modifications of serum lipids after AMI include reductions in TC, LDL and HDL in the range of 10-20%, with reciprocal increases in triglyceride (TG) approximating 20-30% (Miller, 2008). Several mechanisms account for these changes, including the acute phase response associated with up-regulation of LDL-receptor (LDL-R) activity and reduction in several pivotal HDL regulatory proteins. Many clinical studies make clear that phasic changes do occur in patients following AMI, and for the detection of hyperlipidemia in patients with AMI it is therefore recommended that serum lipids be assessed either within 24 hours after infarction or 2-3 months after AMI (Nigam, 2007).
Accurate knowledge of baseline lipid levels may affect the initiation of lipid-lowering therapy, the selection of a specific statin and its dosage, and recognition of the potential need for adjunctive lipid therapy, and may influence the patient's willingness to adhere to a recommendation for long-term lipid-lowering therapy (Pitt et al., 2008). Besides alterations in the lipoproteins, the acute-phase response is also associated with changes in the serum concentrations of inflammatory markers. There is an intra-cardiac inflammatory response in AMI that appears to be the result of the evolution of myocardial necrosis, as shown by higher C-reactive protein (CRP) and interleukin (IL)-6 levels in patients with major adverse cardiac events (Raposeiras Roubin et al., 2013). Several population-based prospective studies of CHD have reported a close association of subtle, prolonged increases in baseline high sensitive (hs)-CRP levels with cardiovascular risk (Casas et al., 2008). The majority of authors concur that the admission hs-CRP concentration reflects the baseline inflammatory status of the patient; thus, patients with AMI and high hs-CRP levels at admission usually experience more cardiovascular complications during follow-up (Bursi et al., 2007). IL-6 is a multifunctional cytokine regulating humoral and cellular responses and playing a central role in inflammation and tissue injury. Similarly to CRP, whose synthesis is stimulated by IL-6, high circulating concentrations of IL-6 are associated with increased risk of cardiovascular events (Swerdlow et al., 2012). On the other hand, IL-10 is a centrally operating anti-inflammatory cytokine that plays a crucial role in the regulation of the innate immune system and can suppress the production of a variety of proinflammatory molecules (Biswas et al., 2010). The expression of IL-10 has been demonstrated in both coronary arteries and atherosclerotic plaque. Furthermore, serum levels of IL-10 have been shown to be greater in individuals with atherosclerosis compared to controls, suggesting that IL-10, as an anti-inflammatory molecule, may be elevated in response to the pro-inflammatory environment of atherosclerosis (Lakoski et al., 2008). The main aim of the present study was to examine the changes in serum lipid profile in AMI patients at different time intervals. To analyze the association of the lipid profile with inflammation following acute coronary events, we also estimated the levels of the inflammatory markers hs-CRP and IL-6, along with the anti-inflammatory marker IL-10.

MATERIAL AND METHODS

A total of 520 patients with suspected AMI were consecutively admitted to the emergency department, of whom 400 patients with confirmed AMI were recruited from the coronary care unit and cardiology department of a tertiary care hospital in Gurgaon, India. We included patients if they met all the following criteria: (a) all patients had AMI at baseline; (b) a blood sample for lipid profile and inflammatory marker estimation was obtained within 24 h of the onset of symptoms. Individuals with rheumatic disease, chronic liver diseases, renal disorders, cancer, or sepsis, critically ill patients, those with infectious diseases within the previous month or a surgical procedure within the previous 3 months, those with AMI or stroke within the past six months, severe congestive heart failure or cardiogenic shock, and those with regular or chronic use of anti-inflammatory drugs in the previous two months were excluded from the study.
Clinical history and physical examination data, focusing on the characteristics of chest pain and the presence of cardiovascular risk factors, were recorded for every patient. Serial electrocardiograms and cardiac enzymes were also obtained in all patients. MI was defined by detection of a rise in cardiac biomarkers of necrosis (cTroponin I), with at least 1 value above the 99th percentile upper reference limit, together with evidence of myocardial ischemia with at least 1 of the following: electrocardiographic changes indicative of new ischemia (new ST-T changes or new left bundle branch block), new pathological Q waves in at least 2 contiguous leads, imaging evidence of new loss of viable myocardium, or a new wall motion abnormality (Thygesen et al., 2007). Diabetes was defined as a previous diagnosis, use of anti-diabetic medicines, or a fasting venous blood glucose level ≥ 126 mg/dL. Hypertension was defined as systolic blood pressure > 140 mmHg and/or diastolic blood pressure > 90 mmHg at rest, over a series of repeated measurements, or treatment with antihypertensive medications. Body mass index (BMI) was calculated as weight (kg) divided by the square of height (m²), and a patient with a value above 30 was categorized as obese. A TC level of more than 200 mg/dL was used for the identification of hypercholesterolemia. 150 controls were also selected from blood donors, hospital staff and individuals attending health check-ups who met the matching criteria of age, sex, and smoking status. These healthy controls were screened for diabetes, hypertension and dyslipidemia. Subjects were informed about the study in detail, and written consent was obtained from patients before starting the study. All ethical measures were taken prior to the start of the study. The serum lipid profile along with inflammatory markers was measured only on fasting blood samples within the first 24 h of the onset of symptoms of MI, and again at day 2 and day 7 post-MI. Serum TC, TG, HDL-C and LDL-C levels were measured by an enzymatic colorimetric method using VITROS chemistry reagents on automated clinical chemistry analyzers. Serum levels of hs-CRP were determined by latex enhanced immunoturbidimetric assay with the use of reagents and calibrators from Roche Diagnostics. The levels of IL-6 and IL-10 were estimated by means of commercially available quantitative "sandwich" enzyme-linked immunosorbent assay (ELISA) kits obtained from R&D Systems, according to the instructions of the manufacturer. Statistical Package for the Social Sciences 21 (SPSS 21) was used for all statistical analyses. All descriptive variables were expressed as the mean ± standard deviation (SD). Independent sample t-tests were used to compare the mean values of variables between the AMI and control groups, whereas the chi-square test was used to assess associations between two categorical variables. To evaluate the association between the serum samples of day 1, day 2 and day 7, the mean values were compared by one way analysis of variance (ANOVA) followed by Tukey's post hoc test. A probability value p < 0.05 was considered statistically significant.

RESULTS

Table 1 shows the baseline clinical and laboratory characteristics of the study participants. The mean age of the 400 patients with AMI was 59.07 ± 7.34 years, and the patient group comprised 155 females and 245 males, whereas the mean age of the 150 healthy controls was 59.48 ± 7.73 years, comprising 57 females and 93 males (both groups had the same sex distribution ratio).
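The ANOVA-plus-Tukey comparison of the day 1, day 2 and day 7 samples described in the statistical analysis section above can be sketched in a few lines of Python. This is an illustration only, not the authors' SPSS workflow: the three arrays are simulated placeholders loosely based on the reported TC means and SDs, and scipy's one-way ANOVA treats the groups as independent, whereas the study's samples are repeated measures from the same patients.

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    rng = np.random.default_rng(0)
    # Simulated total-cholesterol values (mg/dL) for the three sampling days.
    day1 = rng.normal(175, 24, 400)
    day2 = rng.normal(162, 31, 400)
    day7 = rng.normal(168, 27, 400)

    # One-way ANOVA across the three sampling days.
    f_stat, p_value = stats.f_oneway(day1, day2, day7)
    print(f_stat, p_value)

    # Tukey's post hoc test identifies which pairs of days differ.
    values = np.concatenate([day1, day2, day7])
    groups = ["day1"] * 400 + ["day2"] * 400 + ["day7"] * 400
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))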
The prevalence of hypertension was 42% and that of diabetes mellitus 25% among AMI patients. 140 and 85 AMI patients had previously diagnosed hypertension and diabetes, respectively, whereas 32 (hypertension) and 13 (diabetes) patients were diagnosed during their investigations in the hospital. There were no significant differences between AMI patients and controls in terms of age, gender, diabetes, hypertension, smoking and drinking habits. Table 2 shows the mean values and SD of all parameters studied in both groups, with tests of significance using SPSS 21 statistical software. Baseline serum levels of TC (174.81 ± 24.14 vs. 169.35 ± 16.34 mg/dL, p = 0.003), TG (157.51 ± 45.67 vs. 149.65 ± 27.18 mg/dL, p = 0.014) and LDL-C (111.35 ± 25.81 vs. 105.55 ± 30.39 mg/dL, p = 0.039) were significantly higher in AMI patients as compared to the controls, whereas HDL-C was significantly higher (43.14 ± 11.34 vs. 49.21 ± 9.14 mg/dL, p < 0.001) in the latter group. The inflammatory markers hs-CRP (9.82 ± 5.63 vs. 1.03 ± 0.67 mg/L) and IL-6 (50.80 ± 25.12 vs. 14.81 ± 6.65 pg/mL) were found to be significantly higher (p < 0.001), and IL-10 (11.65 ± 8.95 vs. 13.92 ± 6.06 pg/mL, p = 0.002) significantly lower, in AMI patients than in controls. In AMI patients, all serum lipid levels changed significantly between day 1 post-MI (i.e., within 24 h) and day 7 post-MI (Table 3, Figure 1). From day 1 to day 2 post-MI, serum TC levels (174.81 ± 24.14 vs. 161.68 ± 30.77 mg/dL), LDL-C levels (111.35 ± 25.81 vs. 102.28 ± 23.23 mg/dL), and HDL-C levels (43.14 ± 11.34 vs. 36.78 ± 10.31 mg/dL) decreased significantly (p < 0.001). In contrast, serum TG levels increased significantly (p < 0.001) from 157.51 ± 45.67 mg/dL on day 1 to 173.30 ± 48.79 mg/dL on day 2. Although there were some improvements in lipid profile levels on day 7, they failed to reach the baseline levels. Laboratory findings demonstrated significant fluctuations in the levels of inflammatory markers in AMI patients from day 1 to day 7 (Figure 2). Serum levels of hs-CRP and IL-6 measured at day 2 (17.70 ± 8.49 and 66.99 ± 24.35) after AMI were significantly higher (p < 0.001) than at day 1 (9.82 ± 5.63 and 50.80 ± 25.12) and day 7 (13.78 ± 7.54 and 53.56 ± 25.77). On the other hand, levels of the anti-inflammatory cytokine IL-10 increased from day 1 to day 2 (11.65 ± 8.95 vs. 17.29 ± 13.54 pg/mL, p < 0.001) and decreased again by day 7 (13.40 ± 10.97 pg/mL) (Table 3).

DISCUSSION

A series of changes in lipid metabolism occur during the acute phase response. As a result, the plasma TG level increases, while HDL, LDL and TC levels decrease, as demonstrated by many studies (Wattanasuwan et al., 2001). There is no consensus with respect to the timing of lipid and lipoprotein measurements in terms of proximity to the baseline values, the magnitude of the changes, and when these changes reach maximum and basal values. A reduction in the magnitude of these changes is seen over time. Biorck et al. (1957) were the first to report that serum cholesterol levels decreased during MI. Since then, a wide range of changes in serum lipids and lipoproteins following acute coronary events has been reported. In the present study, we found significant changes in TC levels throughout the study period in AMI patients. There are several reports indicating that cholesterol reduction takes place in the initial phase of an acute coronary event; thus, plasma levels determined at this point should be interpreted with caution.
This reduction may be just a consequence of the inflammatory response, or it may be related to an increase in cellular uptake of cholesterol for tissue repair and hormonal synthesis (Correia, 2004). A previous report by Khan et al. (2013) showed significantly decreased levels of TC in AMI patients. The results of this analysis also suggest that directly measured serum LDL-C after admission for an AMI changes in a statistically significant way from day 1 to day 7. During the acute phase reaction, LDL synthesis is increased. Despite that, the LDL level decreases due to up-regulation of LDL-R activity (Balci, 2011). Moreover, LDL particle size is smaller in patients with AMI as compared to non-AMI patients. In addition, the decrease in LDL-C concentration on day 2 of hospitalization may reflect causes related to hospitalization, such as altered oral intake or intravenous hydration (Pitt et al., 2008). Ko et al. (2005), in a large-scale review of patient records of admissions for MI, also found decreased LDL-C between samples taken < 24 h (120 mg/dL) and > 24 h (116 mg/dL) after admission. These results agree with the LATIN (Lipid Assessment Trial-Italian Network) study, in which patients admitted within 12 h of symptom onset for MI or unstable angina showed a mean 7% (unstable angina) to 10% (MI) decrease from admission to the next day in directly measured LDL-C that persisted until discharge (Fresco et al., 2002). HDL-C levels in our study started falling from day 2 onwards. Similar results were reported by Nigam (2007) and Kumar et al. (2009) in previous Indian studies. Rosoklija et al. (2004) concluded in their study that the optimal time for determining the HDL level was the first 24 hours after the actual event. In AMI, the acute phase response has quantitative and qualitative effects on HDL and its contents. Inflammation decreases the level of HDL by increasing the activity of endothelial lipase and soluble phospholipase A2 and by replacing the apolipoprotein-A1 in HDL with serum amyloid A. Moreover, inflammation leads to changes in the size and function of HDL (Ansell et al., 2005). There is a decrease in the levels of several plasma proteins involved in HDL-mediated reverse transport of cholesterol and inhibition of lipid oxidation during inflammation. Therefore, this remodeling creates functional alterations, including a decrease in cholesterol efflux capacity (Tsompanidi et al., 2010). TG levels also changed significantly during the study period in the patient group. In previous studies, Nigam (2007) and Pitt et al. (2008) also reported increased levels of TG after AMI. Hypertriglyceridemia is caused by increased lipoprotein production and decreased lipoprotein clearance. The increase in TG-rich lipoproteins is secondary to the re-esterification of plasma fatty acids. Clearance decreases mainly secondary to the inhibition of lipoprotein lipase activity (Navab et al., 2009). Myocardial damage-induced stress increases the adrenergic-mediated lipolysis of adipocytes, which leads to an increase in free fatty acids, TGs and lipoproteins. The mobilization of free fatty acids and hepatic secretion of very low density lipoprotein also increase TG levels (Balci, 2011). Based on these acute changes, the American College of Cardiology/American Heart Association (ACC/AHA) have supported a Class I recommendation for a fasting lipid profile analysis to be obtained within 24 h of admission for ACS (acute coronary syndrome) (Anderson et al., 2007).
As expected, levels of hs-CRP and IL-6 were significantly higher in AMI patients as compared to the controls, and in the patient group the levels increased from day 1 to day 2 and then decreased from day 2 to day 7. In our patients, inflammatory marker levels on day 2 were close to the peak of the response, indicating that the inflammatory process associated with myocardial necrosis was still ongoing and at its height at our second measurement. Similar results were shown by Yip et al. (2004), Sheikh et al. (2012) and Fan et al. (2011). AMI is a multifactorial disease in which inflammatory processes play a central role (Biswas et al., 2010). In this regard, CRP and IL-6 are considered to be the most important markers and have been extensively studied in recent years (He et al., 2004; Tan et al., 2008). CRP might not only mirror an inflammatory stimulus, but also have direct effects promoting atherosclerotic propagation and destabilizing plaque (Yip et al., 2004). Our findings suggest that these substantially increased serum hs-CRP and IL-6 levels in the clinical setting of MI are the result of myocardial damage. Tissue necrosis is a potent acute-phase stimulus, and following MI there is a major CRP response, the magnitude of which reflects the extent of myocardial necrosis (Pepys and Hirschfield, 2003). Plasma CRP concentration increases following cytokine activation in the initial hours of MI. CRP binds to the phosphocholine groups of necrotic myocardial cell membranes, facilitating complement activation and thus promoting further inflammatory response, injury of myocardial cells, and expansion of necrosis (Swiatkiewicz et al., 2012). In recent studies, Khan et al. (2013) and Raposeiras Roubin et al. (2013) reported significantly increased levels of hs-CRP in AMI patients. IL-6 is a pleiotropic cytokine with a broad range of humoral and cellular immune effects related to inflammation. Elevated IL-6 levels may contribute to the development and instability of atherosclerotic plaques by activation of leukocytes and endothelial cells or by the induction of various cytokines (Shinohara et al., 2012). Furthermore, IL-6 decreases lipoprotein lipase (LPL) activity and monomeric LPL levels in plasma, which increases macrophage uptake of lipids (Fan et al., 2011). In post-AMI patients, the activation of proinflammatory cytokines leads to high concentrations of inducible nitric oxide synthase, nitric oxide, and peroxynitrite, which have multiple harmful effects. In a recent study, Lopez-Cuenca et al. (2013) reported high IL-6 on day 1 to be associated with poor long-term outcomes in MI patients, which reaffirms the prognostic significance of proinflammatory status during the initial phase of an ACS. Consistent with some previous studies (Biswas et al., 2010), we found significantly lower levels of IL-10 in patients with AMI as compared to controls in the day 1 serum samples. Less clinical data are available regarding the role of anti-inflammatory cytokines in AMI. It has recently been demonstrated that IL-10 may act as a protective factor in atherosclerosis and suppresses the synthesis of proinflammatory cytokines (Krishnamurthy et al., 2009). Moreover, IL-10 is expressed in both early and advanced human atherosclerotic plaques and inhibits many cellular processes, including metalloproteinase production and tissue factor expression, which may play a role in the clinical expression of atherosclerotic plaque rupture or erosion (Lakoski et al., 2008).
Additionally, it may influence antigen presentation (including of oxidized lipids) by macrophages and dendritic cells, and even stabilize rupture-prone plaques by suppressing apoptotic pathways in foam cells (Welsh et al., 2011). The mechanisms leading to the increased release of IL-10 at day 2 in AMI patients remain unclear; however, it may be assumed that in the severe inflammatory processes occurring in cardiovascular events, more IL-10 is produced as a compensatory phenomenon to inhibit continued pro-inflammatory cytokine production and inflammatory propagation. Consistent with this, circulating IL-10 is positively associated with IL-6 and hs-CRP (Welsh et al., 2011). In conclusion, the results indicate a trend of reduced TC, LDL-C, and HDL-C and elevated TG levels, along with inflammatory markers, between the day 1 and day 2 samples. However, corrections in the serum levels were observed at day 7. Although the measurement of serum lipids is recommended after the admission of patients with ACS, serum lipid levels are measured in less than half of the patients. However, considering that phasic changes in serum lipid and lipoprotein levels occur after the first 24 hours of ACS, the findings of this study emphasize the need for the lipid profile of these patients to be assessed at admission, so as to identify patients at a higher potential risk. Exact knowledge regarding baseline serum lipid and lipoprotein levels, as well as their varying characteristics, can be used to guide the selection of lipid lowering medication.
2017-08-15T00:31:09.495Z
2015-04-10T00:00:00.000
{ "year": 2015, "sha1": "eb14591b9e6339120bc3510f27e973d89db2e123", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "eb14591b9e6339120bc3510f27e973d89db2e123", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
262541301
pes2o/s2orc
v3-fos-license
Etiological analysis of infection after CRS + HIPEC in patients with PMP

Background Cytoreductive surgery (CRS) plus hyperthermic intraperitoneal chemotherapy (HIPEC) is the standard treatment for pseudomyxoma peritonei (PMP). It can significantly prolong the survival of patients, but at the same time it may increase the risk of postoperative infection. Method Patients with PMP who underwent CRS + HIPEC at our center were retrospectively analyzed. Based on the patients' basic clinical data and postoperative infection records, we analyzed the common sites of postoperative infection, the results of microbial culture, and antibiotic sensitivity. Univariate and multivariate analyses were performed to explore infection-related risk factors. Result Among the 482 patients with PMP, 82 (17.0%) were infected after CRS + HIPEC. The most common postoperative infection was central venous catheter (CVC) infection (8.1%), followed by abdominal-pelvic infection (5.2%). There were 29 kinds of microbes isolated from culture (the most common was Staphylococcus epidermidis), including 13 kinds of Gram-positive bacteria, 12 kinds of Gram-negative bacteria, and 4 kinds of fungi. The antibiotic sensitivity results showed that the most effective agents were vancomycin against Gram-positive bacteria (98.4%), levofloxacin against Gram-negative bacteria (68.5%), and fluconazole against fungi (83.3%). Univariate and multivariate analysis revealed the following independent risk factors for infection: intraoperative blood loss ≥ 350 mL (P = 0.019) and ascites volume ≥ 300 mL (P = 0.008). Conclusion PMP patients may have an increased infection risk after CRS + HIPEC, especially CVC, abdominal-pelvic and pulmonary infections. The microbial spectrum and antibiotic sensitivity results could help clinicians take prompt prophylactic and therapeutic approaches against postoperative infection in PMP patients.

Introduction

Pseudomyxoma peritonei (PMP) is a malignant clinical syndrome characterized by the accumulation and redistribution of mucus produced by mucinous tumor cells in the peritoneal cavity [1]. Most PMP cases originate from mucinous tumors of the appendix, and a few originate from primary mucinous tumors of the ovaries, colon and other organs [2]. The incidence of PMP is approximately 2-4 cases per 1 million per year [3][4][5], and the prevalence is approximately 25.1 cases per 1 million; the male/female ratio is 1:(1.2 to 3.4) [3, 4, 6], and the median age of onset is 62-63 years [7][8][9][10]. Nowadays, integrated treatment centered on CRS + HIPEC is the main strategy for the treatment of PMP [11, 12]. For selected PMP patients, standardized CRS + HIPEC can significantly improve overall survival, up to 103.4-196 months, and the 5-year and 10-year survival rates can reach 92.1% and 80.8% [13]. CRS is a complex and lengthy surgical procedure that generally involves extensive excision, which may have a great impact on patients. The drugs used in HIPEC may expose patients to the risk of immunosuppression, and splenectomy performed in some patients due to local invasion also increases this risk [14]. Thus, PMP patients are potentially at high risk of postoperative infection. According to literature reports, the incidence of postoperative infection adverse events after CRS + HIPEC is about 21.0%-43.0% [15][16][17][18][19].
This study aims to analyze the common infection sites, microbes, and corresponding antibiotic sensitivity results of PMP patients after CRS + HIPEC, so as to provide a reference for the treatment of patients with such postoperative infections. Clinical data This study was approved by the institutional review board of Beijing Shijitan Hospital, Capital Medical University (2015- [20]). All patients signed an informed consent to receive CRS + HIPEC and for the use of their clinicopathological data for further research and academic publications. This retrospective study included 482 patients with PMP treated with CRS + HIPEC at Beijing Shijitan Hospital from May 2015 to April 2022. Data regarding the basic clinicopathological characteristics, CRS + HIPEC-related information, and postoperative infection-related information (results of microbial culture and antibiotic sensitivity tests, etc.) were collected. Patient selection All patients met the criteria for CRS + HIPEC surgery [21], and the inclusion criteria were as follows: (1) Karnofsky performance status score > 60; (2) normal peripheral blood white blood cell count ≥ 3,500/mm³ and platelet count ≥ 80,000/mm³; (3) acceptable liver function, with total bilirubin ≤ 2 × the upper limit of normal (ULN) and aspartate aminotransferase and alanine aminotransferase ≤ 2 × ULN; (4) acceptable renal function, with serum creatinine ≤ 1.5 mg/dL; and (5) other major organ functions able to tolerate a major operation. Major exclusion criteria included: (1) preoperative examination revealing distant metastases; (2) imaging examination indicating mesenteric contracture; and (3) performance status and vital organ function unable to tolerate major surgery. CRS + HIPEC All CRS + HIPEC procedures were performed by the peritoneal metastasis specialist team of our center. After successful general anesthesia, a midline incision was made from the xiphoid process to the pubic symphysis to expose the abdominal cavity fully. The peritoneal cancer index (PCI) score was then comprehensively evaluated. After CRS, the completeness of cytoreduction (CC) score was evaluated based on the residual tumor size. Open HIPEC was administered after completion of CRS, with 120 mg cisplatin + 120 mg docetaxel, or 120 mg cisplatin + 30 mg mitomycin, at 43°C for 60 min. Subsequently, functional reconstruction of the digestive tract and abdominal closure were performed. Postoperative infection Clinical infection was suspected in patients with postoperative symptoms such as dyspnea, painful urination, suppurative discharge in the wound or drainage tube, or fever > 38°C. Persistent increases in neutrophil counts, procalcitonin, and/or C-reactive protein levels 48 h after CRS + HIPEC were also considered suspicious for infection. For patients with suspected infection, microbial culture and antibiotic sensitivity tests were carried out on any samples obtained clinically, in order to detect infection as early as possible and administer targeted treatment. CVC infection CVC infection was defined as a positive CVC tip microbial culture accompanied by chills and fever (> 38°C), or a positive venous blood microbial culture (consistent with the results of the CVC tip culture), within 30 days after CRS + HIPEC. Abdominal-pelvic infection Postoperative abdominal-pelvic infection was defined as a positive microbial culture of the patient's abdominal-pelvic drainage, accompanied by signs of peritonitis, fever, or other infection-related symptoms, within 30 days after CRS + HIPEC.
Pulmonary infection Postoperative pulmonary infection was defined as a positive microbial culture of the patient's sputum within 30 days after CRS + HIPEC, accompanied by imaging signs of infection or symptoms such as fever, cough, and sputum production. Other types of infection Postoperative surgical wound infection was defined as a positive microbial culture of the patient's incision exudate within 30 days after CRS + HIPEC, accompanied by infection symptoms such as swelling, heat, and pain of the skin around the incision. Postoperative urinary system infection was defined as a positive microbial culture of midstream urine accompanied by urinary tract irritation or systemic infection symptoms such as fever and chills within 30 days after CRS + HIPEC. Postoperative positive blood culture of bacteria/fungi (infection site unknown) was defined as systemic infection symptoms such as fever and chills within 30 days after CRS + HIPEC together with a positive blood microbial culture, but with the infection site remaining unclear after various examinations and physical examination. Statistical analysis Microsoft Excel 2016 and IBM SPSS Statistics for Windows, version 26.0, were used for data analysis. Measurement data were presented as median (range) or mean ± SD and analyzed by t-test or rank-sum test. Enumeration data were presented as frequencies and analyzed using the χ² and Fisher's exact tests. Univariate and logistic regression analyses were used to identify the independent factors influencing postoperative infection. The Kaplan-Meier method and log-rank test were used for survival analysis. Statistical significance was set at P < 0.05. Overall survival Overall survival (OS) was defined as the time interval from the date of clinical diagnosis to the date of death or last follow-up. Types of infected microbe and drug susceptibility Types of infected microbe The 82 patients with postoperative infection were infected with a total of 29 types of bacteria and fungi, comprising 13 kinds of Gram-positive bacteria, 12 kinds of Gram-negative bacteria, and 4 kinds of fungi (Tables 3 and 4). Specific infectious bacteria Among the 82 PMP patients with postoperative infection, 6 were infected with multidrug-resistant bacteria (all in pulmonary infections), of which 2 were Klebsiella pneumoniae and 4 were Acinetobacter baumannii. The antibiotic sensitivity tests showed that the 2 cases of Klebsiella pneumoniae were sensitive only to chloramphenicol and amikacin, respectively; of the Acinetobacter baumannii cases, 2 were resistant to all tested antibiotics and the other 2 were sensitive only to minocycline. The factors with P < 0.05 in the univariate analysis were incorporated into a binary logistic regression model, and the results of the multivariate analysis showed that intraoperative blood loss ≥ 350 mL (P = 0.019) and ascites volume ≥ 300 mL (P = 0.008) were independent risk factors for postoperative infection. PMP patients with intraoperative blood loss ≥ 350 mL had a 2.454-fold risk of postoperative infection compared with those with intraoperative blood loss < 350 mL (P = 0.019, OR = 2.454, 95% CI: 1.157-5.203); for PMP patients with ascites volume ≥ 300 mL, the risk of postoperative infection was 2.192-fold that of those with ascites volume < 300 mL (P = 0.008, OR = 2.192, 95% CI: 1.233-3.897) (Table 7).
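The two-step modelling strategy above (univariate screening at P < 0.05, then binary logistic regression) is straightforward to reproduce outside SPSS. The sketch below illustrates it with Python and statsmodels; the input file and column names are hypothetical placeholders, not the study's actual data.

```python
# Sketch: univariate screening followed by multivariate logistic regression,
# mirroring the analysis described above (hypothetical data file and columns).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pmp_cohort.csv")  # hypothetical file: one row per patient

outcome = "postop_infection"  # 1 = infected, 0 = not infected
candidates = ["blood_loss_ge_350ml", "ascites_ge_300ml",
              "pci_score", "operation_hours"]  # assumed predictor columns

# Step 1: univariate screening; keep predictors with P < 0.05.
selected = []
for var in candidates:
    uni = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
    if uni.pvalues[var] < 0.05:
        selected.append(var)

# Step 2: multivariate binary logistic regression on the screened predictors
# (assumes at least one predictor survives the screen).
multi = sm.Logit(df[outcome], sm.add_constant(df[selected])).fit(disp=0)

# Odds ratios and 95% confidence intervals, analogous to Table 7.
ci = multi.conf_int()
report = pd.DataFrame({
    "OR": np.exp(multi.params),
    "CI_low": np.exp(ci[0]),
    "CI_high": np.exp(ci[1]),
    "P": multi.pvalues,
}).drop(index="const")
print(report)
```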
Discussion In this study, the infection rate of PMP patients after CRS + HIPEC was about 17.0%. The most common infection was CVC infection (8.1%), followed by abdominal-pelvic infection (5.2%) and pulmonary infection (4.8%). The antibiotic sensitivity tests revealed vancomycin as the most sensitive antibiotic for Gram-positive bacteria (98.4%), levofloxacin as the most sensitive antibiotic for Gram-negative bacteria (68.5%), and fluconazole as the most sensitive antibiotic for fungi (83.3%). Univariate and multivariate analyses revealed that ascites volume ≥ 300 mL and intraoperative blood loss ≥ 350 mL were independent risk factors for postoperative infection. CRS + HIPEC is the standard treatment for PMP and can significantly improve the survival of patients with acceptable safety [11,12]. However, PMP patients treated with CRS + HIPEC have usually already undergone several operations and multiple cycles of chemotherapy. Most of these patients are in poor physical condition and are at high risk of adverse events after invasive multi-organ resection such as CRS [22]. In addition, HIPEC drugs not only kill residual tumor cells in the abdominal cavity but also carry drug toxicity and immunosuppressive effects, making PMP patients potentially high-risk for postoperative infection [23,24]. Previous studies have shown that the infection rate of PM patients after CRS + HIPEC is about 21.0%-43.0% [15][16][17][18][19][25]. The study of Arslan et al. [22] on 169 PM patients showed that the postoperative infection rate after CRS + HIPEC was 27.8%, with surgical site infection being the most common (21.3%). Smibert et al. [19] analyzed 100 patients treated with CRS + HIPEC and found a postoperative infection rate of 43.0%, with surgical site infection being the most common (27.0%). At our center, postoperative infection was most frequently observed in colorectal cancer peritoneal metastases (24.3%) and was least common in PMP (17.0%) (Table 8). The overall postoperative infection rate among PM patients was 20.1%, lower than the rates reported in previous studies (Table 9). This could be attributed to the mature CRS + HIPEC treatment system of our center (the center has successfully completed more than 2,000 CRS + HIPEC operations to date): each PM patient received adequate preoperative preparation, lung function exercise, and enteral or parenteral nutritional support according to their nutritional status. Previous studies have also identified the major microbial pathogens of postoperative infection. Arslan et al. [22] analyzed 47 infected patients after CRS + HIPEC, and the microbial culture results showed that Escherichia coli (47.1%) was the most common bacterium.
Valle et al. [18] studied 78 patients infected after CRS + HIPEC and found that the most common infecting bacterium was Staphylococcus epidermidis (16.7%). In this study, the most common microbe in postoperative infections of PMP patients was Staphylococcus epidermidis (25.6%). According to the antibiotic sensitivity tests, the most common microbe in CVC infections was Staphylococcus epidermidis (16.7%), which was highly sensitive to vancomycin, linezolid, tigecycline, and others. Enterococcus faecalis (9.8%) was the most common microbe isolated from abdominal-pelvic infections, with high sensitivity to vancomycin, penicillin G, and ampicillin. The most common microbe isolated from pulmonary infections was Acinetobacter baumannii (15.9%), for which minocycline and sulfamethoxazole were the only relatively sensitive antibiotics. In this study, 6 PMP patients were infected with multidrug-resistant bacteria, all in pulmonary infections, suggesting that after CRS + HIPEC it is especially necessary to pay close attention to patients' pulmonary function, promote sputum discharge, and reduce the risk of pulmonary infection. Some studies [22,26] have shown that the main cause of death due to infection in patients after CRS + HIPEC is Candida albicans infection. In this study, no PMP patient died or became critically ill due to Candida albicans infection, and the antibiotic tests showed that Candida albicans was very sensitive to fluconazole and voriconazole. Several studies on postoperative infection after CRS + HIPEC have revealed the following five major risk factors: colorectal resection, small intestine resection, intraoperative blood loss, operation duration > 10 h, and preoperative nutritional status [18,19,25]. In comparison, our study found only two independent risk factors for postoperative infection: intraoperative blood loss ≥ 350 mL and ascites volume ≥ 300 mL. Patients with ascites tended to have abdominal distension, poor appetite, and other gastrointestinal symptoms, and the nutritional status of such patients was usually poor. Nutritional status has been shown to have a significant impact on the immune system, and patients with an impaired immune response are more likely to develop postoperative complications after gastrointestinal surgery [20,27]. There is a bacterial hypothesis for mucin formation and tumor progression in PMP: Semino-Mora et al. [28] found that the overall bacterial density of appendixes in PMP patients was much higher than in healthy people. This may partly explain why ascites is associated with infection risk, because ascites could be produced partially by bacteria. CRS + HIPEC often involves partial resection of the invaded gastrointestinal tract, for which ERAS guidelines recommend preoperative mechanical gastrointestinal preparation with or without oral antibiotics to reduce postoperative infection rates [29]. Postoperative infection is a main cause of increased length of stay and perioperative mortality in patients treated with CRS + HIPEC, as well as increased treatment costs [8,30]. However, no studies have shown that postoperative infection after CRS + HIPEC is associated with the long-term outcome of patients. The results of the survival analysis in this study likewise showed no difference in median overall survival between PMP patients with and without postoperative infection.
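As a companion to the survival comparison above, the sketch below shows how the Kaplan-Meier curves and log-rank test for the infected and non-infected groups (Fig. 1) could be reproduced with the lifelines package; the data file and column names are hypothetical.

```python
# Sketch: Kaplan-Meier estimates and log-rank test for infected vs.
# non-infected patients; input file and columns are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("pmp_survival.csv")
# os_months: time from clinical diagnosis to death or last follow-up
# event: 1 = death observed, 0 = censored at last follow-up
# infected: 1 = postoperative infection, 0 = no infection

infected = df[df["infected"] == 1]
not_infected = df[df["infected"] == 0]

kmf = KaplanMeierFitter()
for label, group in [("infected", infected), ("non-infected", not_infected)]:
    kmf.fit(group["os_months"], event_observed=group["event"], label=label)
    print(label, "median OS (months):", kmf.median_survival_time_)

result = logrank_test(infected["os_months"], not_infected["os_months"],
                      event_observed_A=infected["event"],
                      event_observed_B=not_infected["event"])
print("log-rank P =", result.p_value)
```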
This study has the following limitations. First, the types and characteristics of microbes isolated from infected patients may differ between CRS + HIPEC treatment centers, so infection-related data from multiple centers should be combined in the future to draw broader conclusions. Second, this was a single-center retrospective case-control study with a moderate sample size, and higher-level studies must verify the conclusions. In conclusion, PMP patients may have an increased infection risk after CRS + HIPEC, especially CVC, abdominal-pelvic, and pulmonary infections. This study analyzed the common infection sites and microbes in PMP patients after CRS + HIPEC, as well as the corresponding antibiotic sensitivity test results, which may provide a reference for early empirical antibiotic use in patients with such postoperative infections. Table 3 The types and proportion of the microbes isolated from PMP patients with postoperative infection. Table 4 The common microbes isolated from the postoperative infection sites of PMP patients. Table 5 The results of antibiotic sensitivity to microbes isolated from PMP patients with postoperative infection. Table 6 The univariate analysis of PMP patients with postoperative infection (PCI, peritoneal cancer index; CC, completeness of cytoreduction; RBC, red blood cells). Table 7 The multivariate analysis of PMP patients with postoperative infection. Fig. 1 Survival analysis. (A) Overall survival analysis of all PMP patients; (B) survival curves of the infected and non-infected groups. Table 8 The comparison of postoperative infection rate after CRS + HIPEC for different PM patients at our center.
2023-09-26T14:18:18.987Z
2023-09-26T00:00:00.000
{ "year": 2023, "sha1": "13b1b32274819f4e8430df9bcdff7d20aa36adcf", "oa_license": "CCBY", "oa_url": "https://bmccancer.biomedcentral.com/counter/pdf/10.1186/s12885-023-11404-1", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2478c3703f73501ae6533903c690abef72750904", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
254871448
pes2o/s2orc
v3-fos-license
Effect of additional carrageenan concentration on the characteristics of wet noodles based on mangrove fruit flour variation In general, wet noodles are made from tapioca flour, but the weaknesses of noodles made from tapioca flour are their lack of nutrients: high carbohydrate content, low protein content, low vitamin content, and high gluten content. Making wet noodles fortified with mangrove fruit flour and carrageenan is expected to produce wet noodles with improved physical, chemical, and nutritional characteristics. The method used in this study was an experiment with a factorial randomized block design (RBD). The test was carried out in triplicate with two test parameters: the concentration of carrageenan flour (8%, 12%, 15%) and three mangrove fruit species, namely Avicennia marina, Bruguiera gymnorrhiza, and Sonneratia caseolaris. Based on the results, the addition of various concentrations of carrageenan flour to wet noodles made with different types of mangrove fruit flour had significantly different effects (P<0.05) on several test parameters, namely the proximate values, crude fibre content, antioxidant activity, water absorption, cooking time, breaking strength of the noodles, cooking loss, and sensory scores. The addition of carrageenan flour affected the physical and sensory properties of the mangrove flour noodles, with the best treatment being wet noodles made with S. caseolaris flour and 12% carrageenan. Introduction Noodles are a source of carbohydrate that is widely liked and consumed by the public. In general, wet noodles are made from tapioca flour, but the weaknesses of noodles made from tapioca flour are their lack of nutrients: high carbohydrate content, low protein content, low vitamin content, and high gluten content. Making wet noodles fortified with mangrove fruit flour and carrageenan is expected to produce wet noodles with improved physical, chemical, and nutritional characteristics. Therefore, several studies have investigated producing wet noodles from non-gluten raw materials and adding nutrient-rich fortifications to the ingredient composition. The fortification of sweet potatoes in making wet noodles can improve the nutritional value, particularly antioxidants obtained from anthocyanin pigments (Mahmudatussa'adah et al., 2021), and red seaweed has been used as a raw material for gluten-free pasta (Sholichah et al., 2021). Mangrove fruits have fairly complete nutrition: Sonneratia caseolaris contains vitamin A 221.97 IU, vitamin B 5.04 mg, vitamin B2 7.65 mg, and vitamin C 56.74 mg (Putri et al., 2015). In addition to its high vitamin content, it contains other nutrients such as carbohydrates (76.56 g), fat/glycerol (0.9 g/fruit), protein (4.83 g), and minerals (Rahman et al., 2016). Sonneratia caseolaris has been used as a food ingredient in a variety of products, such as brownies (Sumartini et al., 2020; Harahap et al., 2020), fruit leather (Rahman et al., 2016), jam, food bars (Basuki et al., 2017), chocolate (Wintah et al., 2018), syrup (Rajis et al., 2017), biscuits (Jariyah et al., 2020), and a chocolate bar (Ratrinia and Sumartini, 2021). Several studies using edible mangrove fruit as an ingredient aim to maximize its potential nutritional content, such as fibre, carbohydrates, vitamins, minerals, and antioxidants. In addition, A. marina, B. gymnorrhiza, and S. caseolaris are edible mangrove fruits used as raw materials for mangrove fruit flour (Rout et al., 2015). They are usually combined with starch-containing ingredients for various food preparations.
Therefore, it is possible to mix starch-containing ingredients with mangrove fruit flour in a wide variety of products, with the added functional benefit of the mangrove fruit's properties (Jariyah et al., 2014). Compared with dry noodles, wet noodles are a food that is lower in nutrients but stronger in gluten. In terms of sensory quality, the preferred wet noodles are those with a chewy texture. One of the factors that affects the chewiness of noodles is the gluten content found in wheat flour protein; on the other hand, gluten has adverse health effects when consumed in excess. The production of noodles made from mangrove fruit flour with carrageenan flour fortification is expected to produce wet noodles with good physical, chemical, and nutritional characteristics, as well as a better chewiness agent compared to commercial instant noodles on the market. Research method The method used in this study was an experiment with a factorial randomized block design (RBD). The research was carried out in triplicate with two test parameters. The parameters were the concentration of carrageenan flour (8%, 12%, 15%) and the addition of three different mangrove fruit species, namely A. marina, B. gymnorrhiza, and S. caseolaris. According to the regulation of the National Food and Drug Agency of Indonesia (2019), there is no maximum limit for adding carrageenan to pasta and noodle products and similar products. Sample preparation The fruit of Avicennia marina is peeled, then sorted and the pistil removed. It is then boiled in 800 mL of distilled water at 90°C for 60 mins. Subsequently, the A. marina fruit is soaked in an ash-water suspension for 24 hrs. According to Perdana et al. (2012), boiling treatment with several levels of rubbed ash and long water immersion significantly decreases fruit tannin levels. The next step is drying in a drying oven for 10 hrs at 70°C. The A. marina fruit flour is made by blending for 3 mins; the resulting flour is soft and light green. Sieving is done to obtain a soft texture using a sieve (60-80 mesh). The proximate composition of the A. marina fruit flour was then determined. Bruguiera gymnorrhiza fruit flour is produced according to the method of Sulistyawati et al. (2012) with slight modifications. The fruit is boiled at 80-90°C for 60 mins; it is then peeled and sliced using an aluminium knife and soaked in a 10% (w/v) ash-water suspension until completely immersed for 24 hrs. The fruit is dried in the sun until completely dry. The dried fruit is then milled and sieved with a 100-mesh filter (Amin et al., 2018). The proximate composition of the B. gymnorrhiza fruit flour was then determined. Sonneratia caseolaris fruits were collected and randomly selected from various parts of the mangrove tree. The fruits were transferred to the laboratory, then peeled and blended with distilled water (1:3). The resulting dispersion was sieved with a 50-mesh sieve to remove the seeds, dried in an oven/drying cabinet for 15-18 hrs at 50-60°C, and sieved with an 80-mesh sieve (Jariyah et al., 2014). The proximate composition of the S. caseolaris fruit flour was then determined. Wet noodle production The research was conducted by making control wet noodles with a 100% wheat flour recipe (Table 1). The process of making wet noodles consists of mixing, resting, milling/sheeting the noodles, and cutting. The mixing process takes about 15-25 mins.
This time is needed to form the dough matrix and homogenize the ingredients. The purpose of resting is water dispersion and gluten formation in the dough; resting the dough for a long time results in softer noodles and a dough that can be stretched. The resting time is usually 30-60 mins. The moulding of the noodles is carried out mechanically, with the noodle sheet rolled to a thickness of 1.2-2 mm. In this process, the soft gluten fibre can expand while being formed. The proper temperature for this process is 25°C or higher so that the dough does not turn coarse, harden, or spoil. The noodles are then cut to 0.5-1 m long. The control wet noodles were compared with several noodle formulations. Using the formulas in Table 1, the study comprised three main variations in the type of mangrove flour and three variations in the concentration of carrageenan. The ingredient variations consist of the base flour (mangrove fruit flour, wheat flour), the amount of carrageenan flour (8%, 12%, and 15%), 1 egg, and salt. Statistical analysis The research used a factorial randomized block design (RBD) experimental method with a 5% significance level. The variables used in this study were the concentration of carrageenan flour (8%, 12%, and 15%) and the mangrove fruit species (B. gymnorrhiza, S. caseolaris, and A. marina). The parameters tested were the proximate values (AOAC, 2005), crude fibre content (AOAC, 2005), antioxidant activity (Tristantini et al., 2016), breaking strength of the noodles and water absorption capacity (Kohn et al., 2015), cooking time (Wandee et al., 2015), cooking loss (Tan et al., 2009), and sensory evaluation (Litaay et al., 2022); a minimal analysis sketch is shown below.
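The two-factor factorial RBD analysis described above can be reproduced with standard statistical software; the sketch below uses Python's statsmodels. The data file and column names are hypothetical, and in the actual study this analysis would be run separately for each measured parameter.

```python
# Sketch: two-factor factorial ANOVA (carrageenan concentration x mangrove
# species, with replicate block), tested at the 5% level; data hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("noodle_trials.csv")
# columns: carrageenan (8/12/15), species (A. marina / B. gymnorrhiza /
# S. caseolaris), block (replicate 1-3), response (e.g., cooking_loss)

model = ols("response ~ C(carrageenan) * C(species) + C(block)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)  # effects with P < 0.05 are reported as significant
```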
Moisture analysis The water content measurement was carried out using the thermogravimetric method (AOAC, 2005) with slight modifications. First, the cup used in the measurement was dried in an oven (Memmert UN30) at 100-105°C until a constant weight was obtained, then cooled in a desiccator (Duran DN200) and weighed. Next, 5 g of sample was weighed into the cup and dried in the oven at 100-105°C until a constant weight was obtained. Finally, the sample was cooled in a desiccator and weighed. The principle of this method is the evaporation of the water contained in the sample: the weight reduction corresponds to the evaporated water. Fat content Fat content analysis was carried out using the Soxhlet method (AOAC, 2005) with slight modifications. The principle of this analysis is to extract fat using hexane solvent; when heated, the hexane evaporates, allowing the fat content to be calculated. Measurement of fat content begins with drying the fat flask (Pyrex) in an oven (Memmert UN30) at 105°C for 30 mins, cooling it in a desiccator (Duran DN200) for 15 mins, and weighing it. A total of 5 g of sample was wrapped in filter paper, placed into a fat sleeve, covered with fat-free cotton, and doused with hexane solvent. This was followed by distillation until the hexane evaporated. The extraction flask was then heated in the oven at 105°C until constant weight. Finally, the dried sample was cooled in a desiccator and weighed. Protein content Protein content was measured using the Kjeldahl method (AOAC, 2005) with slight modifications. The principle of this method includes digestion, distillation, and titration: the Kjeldahl method determines protein from nitrogen-containing material by converting nitrogen into ammonia. The ammonia reacts with acid to form ammonium sulfate and is then absorbed in a boric acid solution (Merck). The HCl titration step determines the amount of nitrogen in the sample using a burette (Duran). Ash content Ash content was measured according to AOAC (2005) with slight modification, using a furnace (Nabertherm LT 15/14/B410) at about 550°C (dry ashing method). The ash content was determined by heating at 550°C to oxidize the organic matter and then weighing the remaining residue. Carbohydrate content The carbohydrate content was calculated according to AOAC (2005), slightly modified, using the by-difference method: carbohydrate (%) = 100% − (water content + ash content + fat content + protein content). Water absorption analysis The water absorption capacity (WAC) analysis followed Kohn et al. (2015). First, a 5 g sample was put into a centrifuge tube (Falcon-type plastic tube, 50 mL capacity), 32 mL of distilled water was added, and the tube was agitated manually for 1 min. The tube was then left to stand for 10 mins and centrifuged for 25 mins at 2,900×g. Next, the supernatant was discarded, and the tube was dried in an air-circulating oven (50°C for 20 mins in an inclined position). Finally, the tube was weighed, and the WAC was calculated for each sample as a percentage. Sensory evaluation Sensory evaluation procedures were based on Litaay et al. (2022). First, all noodle samples were boiled for the optimal cooking time. The samples were then evaluated for colour, texture, flavour, taste, and overall acceptability by 30 untrained panellists using a 9-point scale (9 = liked enormously; 1 = disliked intensely). Dietary fibre Dietary fibre was analyzed using the enzymatic method (AOAC, 2005). Cooking time The cooking time testing procedure followed Wandee et al. (2015). A 5 g sample was cut into 4-5 cm lengths and cooked in 200 mL of boiling distilled water in a covered glass. The optimal cooking time was determined by observing, every 30 s, when the white core disappeared from the noodle strands, pressing the cooked noodles between two transparent glass slides. Cooking loss The cooking loss test followed Tan et al. (2009). The noodles were weighed at 5 g (W0) and cut into 5 cm lengths. The noodles were cooked in 200 mL of boiling distilled water in a covered beaker for 1 min or the optimal cooking time. The cooked noodles were then rinsed with cold water and dried using filter paper. Cooking loss (CL) was determined by evaporating the water used for cooking and washing at 110°C; the residue (W1) was weighed, and the cooking loss was calculated as a percentage: CL (%) = W1/W0 × 100%
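The by-difference carbohydrate calculation and the cooking loss formula above are simple enough to script for batch processing of replicate measurements. A minimal Python illustration follows; the numeric values are hypothetical examples, not measurements from this study.

```python
# Sketch: by-difference carbohydrate content and cooking loss (CL),
# following the formulas above; all numbers are hypothetical examples.

def carbohydrate_by_difference(water, ash, fat, protein):
    """All inputs and the result are percentages of sample weight."""
    return 100.0 - (water + ash + fat + protein)

def cooking_loss(w0_g, w1_g):
    """CL (%) = W1/W0 x 100, where W0 is the raw noodle weight and
    W1 is the residue left after evaporating the cooking/washing water."""
    return w1_g / w0_g * 100.0

# Hypothetical replicate: 5 g of raw noodles leaving 0.35 g of residue.
print(f"CL = {cooking_loss(5.0, 0.35):.1f}%")             # -> CL = 7.0%
print(f"CHO = {carbohydrate_by_difference(62.0, 1.2, 0.8, 5.5):.1f}%")
```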
Antioxidant activity Wet noodle samples were prepared, and a stock solution of each sample was made at 100 ppm by dissolving 10 mg of extract in 100 mL of methanol p.a.; this was then diluted with methanol p.a. to concentrations of 5, 6, 7, 8, and 9 ppm for each sample. A 50 ppm DPPH stock solution was prepared by dissolving 5 mg of DPPH solids in 100 mL of methanol p.a. A comparison solution was then prepared, namely a control containing 2 mL of methanol p.a. and 1 mL of 50 ppm DPPH solution. For each test sample, 2 mL of sample solution and 2 mL of DPPH solution were combined and incubated for 30 mins at 27°C until the colour change from DPPH activity occurred. All samples were prepared in triplicate. The absorbance of the incubated samples was measured using a UV-Vis spectrophotometer at a wavelength of 517 nm (Tristantini et al., 2016). Characteristics of mangrove noodles during cooking Based on the factorial randomized block experiments at the 5% significance level, the cooking parameters of the mangrove noodles — cooking time, cooking loss, and breaking strength — showed significant differences (P<0.05) for the carrageenan addition treatment, whereas the effect of the different types of mangrove fruit flour did not show significant differences (Table 2). The addition of carrageenan flour affects the cooking time: the higher the carrageenan flour concentration, the longer the cooking time. This is likely because of the higher water and fibre content of the noodles with added carrageenan flour compared to the control, owing to carrageenan's ability to bind water. According to Rahmi et al. (2018), cooking time is influenced by differences in the concentration of seaweed pulp. According to Husna et al. (2017), the greater the addition of seaweed pulp, the longer the cooking time of the noodles, since the seaweed pulp contains fibre that hinders the cooking process. Cooking loss is one of the parameters used to determine the quality of noodles after cooking; it measures how much of the raw noodle is lost during the cooking process. Carrageenan has a high water-binding ability and acts as a gelling agent because of its hydrophilic nature. Based on the results (Table 2), a higher concentration of carrageenan reduces the cooking loss value, meaning that the addition of carrageenan flour can improve the quality of wet noodles by reducing the material lost. According to Salma et al. (2018), the higher the addition of carrageenan, the lower the cooking loss of wet noodles. The low cooking loss is probably due to the high viscosity and gel strength of carrageenan, which means a longer time is needed to break down the starch molecules in the resulting wet noodles. The lower the cooking loss value, the better the quality of the wet noodles (Ratnawati and Afifah, 2018). According to Setyani et al. (2017), differences in cooking loss are due to the amylose content of the mangrove fruit flour used as raw material: the higher the amylose level, the stronger the gel structure formed. Table 2 shows that the higher the carrageenan concentration added, the higher the breaking strength of the noodles. The results showed that commercial noodles made from wheat flour had the lowest breaking strength, owing to the gluten in wheat protein, which wet noodles made from mangrove fruit flour and carrageenan do not possess.
Irsalina et al. (2016) state that the protein content of the flour correlates with the texture of the noodles, because heat during the processing of dry noodles denatures the protein and makes it rigid; this rigid protein makes the texture of the noodles hard, so the force required to break the noodles is high. In addition, Rahmi et al. (2019) reported that the addition of 1% carrageenan to the noodle formula can improve the breaking strength of Moringa noodles. The results also showed that the variation in carrageenan concentration produced a significant difference in the water absorption of the resulting mangrove noodles. The absorption of the carrageenan noodles was higher than that of the control noodles: the higher the carrageenan flour concentration, the higher the absorption capacity of the noodles. Sensory analysis The results (Table 3) show that the texture score differed significantly (P<0.05) with the addition of carrageenan flour, whereas the different types of mangrove fruit flour had no significant effect (P>0.05) on the texture score. The relationship between carrageenan addition and texture is that the higher the carrageenan addition, the chewier the texture and the more the panellists favoured it; beyond a certain level, however, the texture tended to be disliked by the panellists (3.56±0.10). According to Ratnawati and Afifah (2018), gums/hydrocolloids are widely used in starch-based products to increase stability, modify texture, and facilitate processing. The hydrocolloids used in gluten-free food formulations come from various sources, such as seeds, fruit, plant extracts, seaweed, and microorganisms; the hydrocolloid protects the starch granules against shear during cooking and improves the product's texture. According to Ba'ari et al. (2020), one of the factors and mechanisms of texture formation is the heating process and the duration of cooking: during boiling, starch gelatinization and protein coagulation create the chewy texture of the wet noodles. In terms of colour, the mangrove fruit flour noodles were less preferred by the panellists. This may be because the colour of the mangrove noodles tends to be darker than the control; the control noodles made from wheat flour tend to have a brighter colour. According to Jaziri et al. (2018), the low degree of whiteness of the seaweed flour used causes the colour of dry Eucheuma cottonii noodles to be darker. Similarly, Santoso et al. (2006) attribute the decrease in colour score to the low degree of whiteness of seaweed flour, which makes the colour of dry noodles a darker yellow. Based on the results (Table 3), taste was significantly affected by the addition of mangrove fruit flour; panellists preferred the mangrove fruit flour noodles with a carrageenan concentration of 12%. The addition of carrageenan flour itself does not affect the resulting taste because carrageenan has no specific taste. According to Nurhuda et al. (2017), the addition of carrageenan flour did not affect the taste of sea catfish meatballs, presumably because carrageenan flour has a neutral or bland taste.
Table 3 also shows that the higher the carrageenan concentration, the lower the taste score; this result is in line with the research of Atiqoh et al. (2021), in which the more seaweed was added, the blander the taste became. The aroma of the mangrove fruit noodles showed a significant difference (P<0.05): with more carrageenan flour the noodles tended to be more preferred, but with too much carrageenan flour the aroma tended to be disliked by the panellists. This is probably because the texture becomes too rigid/stiff, which affects the perception of the taste of the wet noodles. According to Kaudin et al. (2019), the quality of taste perception is influenced by texture, namely smoothness, thickness, elasticity, and hardness. With a higher concentration of carrageenan, the panellists liked the wet noodle taste, because carrageenan can form a gel in the making of wet noodles. Moisture content There is a significant difference (P<0.05) in water content between the different carrageenan concentrations (Table 4), whereas the different types of mangrove fruit flour did not show a significant difference (P>0.05). The results showed that the higher the carrageenan concentration, the higher the water content. Carrageenan flour is a gelling agent with hydrocolloid properties produced from red seaweed. According to Waqiah et al. (2019), the higher the concentration of added seaweed, the higher the water content, because seaweed can trap (adsorb) water in the wet noodle mixture. Carrageenan is a hydrocolloid compound that can bind water. According to Gomez-Guillen et al. (2006), a high water-binding capacity is due to carrageenan swelling, which increases elasticity by reducing free water and increasing density around the protein matrix. If the water-holding capacity of carrageenan is high, it will hold water in the formed matrix space. Ash content Table 4 shows a significant difference (P<0.05) in ash content between the different carrageenan concentrations, whereas the different types of mangrove fruit flour did not show a significant difference (P>0.05). The results showed that the higher the carrageenan concentration, the higher the ash content of the noodles. According to Kaudin et al. (2019), more added carrageenan can increase the mineral content of wet noodles. Santoso et al. (2003) reported that the high mineral content of seaweed is due to its adaptation to marine environmental conditions containing various minerals at high concentrations. Protein content The results showed that the variation in carrageenan concentration gave different protein values for the mangrove noodles. Based on the data (Table 4), the protein content differed significantly (P<0.05) between the carrageenan concentration treatments. The protein content of the mangrove noodles tends to be lower than that of the control noodles; the high protein content of the control is due to the gluten in its wheat flour base.
The noodles based on mangrove fruit flour with added carrageenan flour have a higher carbohydrate content than protein content, which is plausible given that mangrove fruit flour is richer in fibre, carbohydrates, and antioxidants. According to Abubakar (2011), protein content is influenced by the amount and type of flour used as raw material as well as by the protein content of the additives used. In this case, carrageenan does not affect the protein level of the wet noodles because it is a polysaccharide. Fat content The fat content differed significantly (P<0.05) between the carrageenan concentration treatments. The fat content of the mangrove noodles is lower than that of the control noodles. This agrees with the results of Kaudin et al. (2019), who found that carrageenan addition reduces the fat content of wet noodles. Similarly, Nugroho et al. (2014) reported that adding carrageenan at a concentration of 8% gave shrimp meatballs a fat content of 0.22%, showing that the addition of carrageenan decreased the fat content of the meatballs. Carbohydrate content The results showed that the different carrageenan concentrations gave different carbohydrate values for the mangrove noodles (Table 4). The carbohydrate content of the mangrove noodles tends to be higher than that of the control noodles, while within the treatments, the higher the carrageenan flour concentration, the lower the carbohydrate value. This is in line with the research of Nafiah et al. (2012), in which fat and carbohydrate levels decreased as the added carrageenan concentration increased. Fibre content The fibre content differed significantly (P<0.05) between the carrageenan concentration treatments (Table 4): the higher the carrageenan flour concentration, the higher the fibre content. Carrageenan is a commercial hydrocolloid from red seaweed (Rhodophyceae) that is widely used in food and industrial products, such as chocolate, milk, pudding, instant milk, canned food, and bread. Carrageenan can impart desired functional properties to a product; its roles in food products include emulsifying, stabilizing, gelling, and coagulating. E. cottonii is a carrageenan producer with a high fibre content. Conventional wet noodles are produced from wheat flour and therefore lack fibre (Billina et al., 2014); the addition of seaweed in making wet noodles can increase the total dietary fibre content (Murniyati et al., 2010). Antioxidant activity The antioxidant activity of the mangrove noodles tends to be higher than that of the control noodles (Table 4): the higher the carrageenan flour concentration, the higher the antioxidant value. According to Harsyam et al. (2020), carrageenan extracted from the red seaweed E. cottonii has a high antioxidant content. Carrageenan has many hydroxyl groups, which can form a double-helix structure and protect antioxidant compounds within its three-dimensional matrix from heat during cooking and from oxygen. In addition, the higher antioxidant activity was also influenced by the raw material used, namely mangrove fruit flour, which is rich in antioxidants.
According to Nawaly et al. (2013), antioxidant compounds derived from seaweed extract are important in protecting cells against free radicals; the application of seaweed extract in human and fish food can increase the antioxidant value of these foods, helping to maintain their nutrition and providing health benefits to consumers. Finally, the proximate analysis of the mangrove fruit flours (Table 5) shows that the composition of the flour produced from the three mangrove fruit species is similar to that of wheat flour. The mangrove fruit flour is richer in fibre and antioxidants than wheat flour but has a lower protein content, because mangrove fruit flour does not contain gluten as wheat flour does. Based on the characteristics presented in Table 5, mangrove fruit flour has the potential to be used as a substitute for wheat flour. Conclusion Based on the results of the study, the addition of different concentrations of carrageenan flour to wet noodle products made with different types of mangrove fruit flour gave significantly different results (P<0.05) for several test parameters, namely the proximate values, crude fibre content, antioxidant activity, water absorption, cooking time, breaking strength of the noodles, cooking loss, and sensory evaluation. The best treatment in terms of sensory and nutritional characteristics was the wet noodles with added S. caseolaris mangrove fruit flour and 12% carrageenan. The addition of carrageenan flour affected the physical and sensory properties of the mangrove flour noodles. In addition, the use of mangrove fruit flour increased the nutritional characteristics and antioxidant activity of the noodle products.
2022-12-20T16:03:00.229Z
2022-12-18T00:00:00.000
{ "year": 2022, "sha1": "b1d4e8833ebd23c8cd84092b291a1d8b7b0db899", "oa_license": "CCBY", "oa_url": "https://doi.org/10.26656/fr.2017.6(6).709", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7980a056634fba54f604816886fcef6bd779472a", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
235369167
pes2o/s2orc
v3-fos-license
Whole-Genome Sequencing and Machine Learning Analysis of Staphylococcus aureus from Multiple Heterogeneous Sources in China Reveals Common Genetic Traits of Antimicrobial Resistance ABSTRACT Staphylococcus aureus is a worldwide leading cause of numerous diseases ranging from food poisoning to lethal infections. Methicillin-resistant S. aureus (MRSA) has been found capable of acquiring resistance to most antimicrobials. MRSA is ubiquitous and diverse, even in terms of antimicrobial resistance (AMR) profiles, posing a challenge for treatment. Here, we present a comprehensive study of S. aureus in China, addressing epidemiology, phylogenetic reconstruction, genomic characterization, and identification of AMR profiles. The study analyzes 673 S. aureus isolates from food as well as from hospitalized and healthy individuals. The isolates were collected over a 9-year period, between 2010 and 2018, from 27 provinces across China. By whole-genome sequencing, Bayesian divergence analysis, and supervised machine learning, we reconstructed the phylogeny of the isolates and compared them to references from other countries. We identified 72 sequence types (STs), of which 29 were novel. We found 81 MRSA lineages by multilocus sequence typing (MLST), spa, staphylococcal cassette chromosome mec element (SCCmec), and Panton-Valentine leukocidin (PVL) typing. In addition, novel variants of SCCmec type IV hosting extra metal and antimicrobial resistance genes, as well as a new SCCmec type, were found. New Bayesian dating of the split times of major clades showed that ST9, ST59, and ST239 in China and European countries fell in different branches, whereas this pattern was not observed for the ST398 clone; rather, the clonal transmission of ST398 was more intermixed with regard to geographic origin. Finally, we identified genetic determinants of resistance to 10 antimicrobials, discriminating drug-resistant bacteria from susceptible strains in the cohort. Our results reveal the emergence of Chinese MRSA lineages enriched in AMR determinants that share similar genetic traits of antimicrobial resistance across human and food sources, hinting at a complex scenario of evolving transmission routes. IMPORTANCE Little information is available on the epidemiology and characterization of Staphylococcus aureus in China. The role of food is a cause of major concern: staphylococcal foodborne diseases affect thousands every year, and the presence of resistant Staphylococcus strains on raw retail meat products is well documented. We studied a large, heterogeneous data set of S. aureus isolates from many provinces of China, isolated from food as well as from individuals. Our large whole-genome collection represents a unique catalogue that can be easily meta-analyzed and integrated with further studies and adds to the library of S. aureus sequences in the public domain in a currently underrepresented geographical region. The new Bayesian dating of the split times of major drug-resistance-enriched clones is relevant in showing that Chinese and European methicillin-resistant S. aureus (MRSA) have evolved differently. Our machine learning approach, across a large number of antibiotics, shows novel determinants underlying resistance and reveals frequent resistance traits in specific clonal complexes, highlighting the importance of particular clonal complexes in China. Our findings substantially expand what is known of the evolution and genetic determinants of resistance in food-associated S. aureus in China and add crucial information for whole-genome sequencing (WGS)-based surveillance of S. aureus.
We collected food-associated methicillin-resistant and methicillin-susceptible S. aureus (MRSA and MSSA) isolates and sequenced them together with 142 isolates (18 MRSA and 124 MSSA) obtained from 53 healthy and 89 infected people in Shanghai between 2015 and 2017 (see Table S1 in the supplemental material). Of these, 673 resulted in high-quality genomes, 343 being MRSA and 330 being MSSA (see Fig. S1 and Table S1). The 673 isolates were tested for susceptibility against a panel of 13 antimicrobials: 97% showed resistance to at least one, and 72% showed resistance to at least three (Fig. S1d and Table S1). As in previous studies in China (25,26), high prevalence of resistance to penicillin (93%), erythromycin (67%), cefoxitin (58%), oxacillin (57%), and clindamycin (55%) was found. More than 99% of isolates were found to be susceptible to the drugs of last resort, vancomycin and linezolid (Fig. S1d). The phylogenetic tree was reconstructed from a set of 1,585 concatenated core genes of the 673 S. aureus isolates and showed strong clustering by clonal complex (CC) (27), sequence type (ST) (28), and agr subgroup (29) (Fig. 1). A total of 72 STs were identified; of these, 29 STs (from 47 isolates) were found to be novel (Tables S1 and S2), showing a single-base mutation in one or more alleles. Two of these 29 STs (ST3656 and ST5713) were identified from human-associated isolates. The other 27 STs were from food-associated isolates, most of which were animal-food related (24 STs from 36 isolates) (Table S2). The resistance and virulence profiles of these novel STs were similar to those of their clonal complexes. FIG 1 legend: Sample information and phylogenetic tree of the whole cohort. Maximum likelihood phylogenetic tree based on 1,585 core genes of the 673 S. aureus isolates. Clonal complex clusters are distinguished by means of numbers and background colors. Methicillin-resistant and -susceptible phenotypes are indicated by the inner colored ring; agr subgroup, SCCmec type, PVL type, and the sample sources are color coded in the following rings. One branch consisting of 5 isolates, all found to be S. aureus (subgroup S. argenteus), was removed because its length made visualization of the tree difficult; the full tree is shown in Fig. S2 in the supplemental material. Of the 72 identified STs, 57 (representing 92% of the isolates) were related to 10 CCs, CC59 being the most prevalent (187 isolates), followed by CC5 (111 isolates). The spa typing (30) yielded 114 different spa types, t437 being the most prevalent (157 isolates). All 4 known agr subgroups are present in our cohort, agr group I being the most prevalent (n = 418) (Fig. 1 and Table S1). Variants of type IV SCCmec carrying extra antibiotic resistance genes and a novel SCCmec type. Among the 343 identified MRSA isolates, nearly half (169/343) carried a type IV SCCmec. In this study, by aligning reconstructed SCCmec elements with the reference cassettes (Ref IVa, AB063172.2; Ref IVc, B096217.1), we observed considerable variability inside the type IV cassette. Two subtype IVc isolates were found to have insertions of kanamycin and bleomycin resistance genes, as previously found in Italy by Manara et al. (35). Our isolates (11A1151 and 18A25) were almost identical to the cassette (MF062) found by Manara et al. (35); see Fig. 2. Isolate 11A1151 carried a cassette identical to MF062, while isolate 18A25 had a deletion of the gene ugpQ.
We confirmed our variant to be an integrated plasmid, pUB110 (36), via a BLAST search against N315, a confirmed SCCmec II isolate with integrated pUB110 (accession number D81934.2; identity, 99.93%). The insertion of plasmid pUB110 carrying these resistance genes is frequently associated with type II SCCmec elements (37)(38)(39)(40). An SCCmec IVa variant carrying genes conferring resistance to beta-lactams was found in two isolates from our cohort; blaI, blaR1, and blaZ were located between IS431 and the mec gene complex. When blasting this SCCmec variant against the NCBI nonredundant database, we found that it (99% identity and query cover) was also recently identified in an SCCmec IVa clinical isolate from Wuhan, China (GenBank CP033086.1). The genome sequence located in the J3 region (40) of the cassette indicated an integrated plasmid structure, in which Tn552-CadX-CadD is associated with plasmid pBORa53 (41). The plasmid-like region also shows high similarity (100% identity and 94% query cover) to S. aureus plasmid pPM1 (GenBank AB699881.1). According to the International Working Group on the Classification of Staphylococcal Cassette Chromosome Elements and recent reports, there are 14 types of SCCmec (31)(42)(43)(44). Three of our MRSA isolates could not be attributed to any of the 14 known cassette types; these novel SCCmec elements carried the ccr gene complex 7 (A1B6) and the mec gene complex class A. Our study therefore highlights the presence of additional resistances and diversity within the same cassette type and the ongoing rearrangements in MRSA. Monophyletic clustering separated STs from reference genomes of other countries. Sequence types ST59, ST9, ST398, and ST239 are widespread in China, are significantly represented in our cohort, and include the clones most enriched in methicillin resistance. Hence, we investigated the molecular evolution and global distribution of these isolates in our cohort. The STs were analyzed to reconstruct their geographical and temporal evolution and were investigated alongside publicly available isolates (PATRIC database [45]) for contextualization against the global backdrop of S. aureus evolution. We first reconstructed the phylogenies of the selected STs using all available good-quality reference genomes for ST59 (n = 92), ST9 (n = 79), ST398 (n = 158), and ST239 (n = 194) and a core-genome maximum likelihood approach (Fig. 3 and Table S4). It is noteworthy that the Chinese ST59-SCCmec IV/V isolates showed monophyletic subtree profiles compared to the reference genomes of the same ST (Fig. 3a), suggesting a single introduction event into China followed by spread through Chinese territory. Exceptions were a few ST59 isolates from Japan (n = 3), Denmark (n = 5), and Italy (n = 2) that clustered with the Chinese ones, indicating potential effects of travel and commerce. ST9 isolates from our cohort were found mainly in pork samples. They clustered with reference isolates recovered from China and a single isolate from the Netherlands (Fig. 3b and Table S4). The only exception was an isolate from pork in Beijing, which clustered with pig isolates from the United States. One possible explanation for this U.S./China clustering is that China is one of the top three markets for U.S. pork production (46), and the MRSA isolates might have been transported during pork trade.
As previously observed, MRSA clones can transmit cross-regionally via food trading (47)(48)(49), although evidence of a risk to humans from this is lacking. ST398 (Fig. 3c) is commonly found in livestock-associated isolates from pig farming in Europe (50,51) and also in China (52), though at very low prevalence. It is possible that this isolate derives from ST398 sourced from swine in China. However, here, ST398 was found in higher numbers (5%, n = 34) from human and more diverse food sources, including pork and nonmeat products. Several studies have shown that ST398 is becoming commonly present in humans (35,53,54), suggesting that humans could be the most obvious source of contamination for food preparations. A total of 11 isolates (1.6% of all isolates, 78.6% of human MRSA isolates) were found to be ST239 within our cohort (Fig. 3d). Nine (with spa type t037) were human health care associated and clustered together, but the other two (t030) were food associated and clustered with reference Chinese human isolates (t030). The nine human isolates all carried the virulence gene sasX, widely associated with human-associated ST239 in China (53,55). None of the other isolates in this study carried this gene, including the two food-associated ST239 isolates. Globally, ST239 was previously found to be associated with food animals (56,57), but to our knowledge, this has not been found to be the case in China (58), which suggests that ST239 in China is not exclusively transmitted in clinical settings but has the potential to spread between food and humans. To better understand the evolution of the major MRSA lineages, Bayesian divergence analysis for MRSA isolates belonging to ST59-SCCmec IV/V, ST9-t899-SCCmec XII, ST398-t011-SCCmec V, and ST239-t037-SCCmec III was performed (Fig. 4 and Table S4). Our isolates clustered away from those from other countries in the maximum likelihood trees and were characterized by different divergence times. The mean substitution rate was 2.118 × 10⁻⁶ (highest posterior density [HPD] 95%, 1.958 × 10⁻⁶ to 2.523 × 10⁻⁶) substitutions per site per year (59,60). For ST59-SCCmec IV/V, the non-Chinese isolates showed a pattern of divergence starting in the 1960s, with divergence from Chinese clones occurring in the 1940s, earlier than another study of East Asian clones (61); however, small sample sizes in both studies may explain the differences. On the contrary, Chinese isolates diversified from ~1980 onwards, with the majority of divergence occurring after ~1990, in agreement with another study of food-related S. aureus isolates in China (62). Our data included a small number of human samples from China within this clone (two from our cohort and three reference genomes). These isolates showed no differential evolution to the food-related S. aureus isolates and were most closely related to isolates from both meat and vegetable products, suggestive of CA (rather than HA) transmission. ST9-t899-SCCmec XII isolates showed much more recent diversification, with our isolates diversifying between 2004 and 2010 and the most recent common ancestor dating to the year 2000. Diversification of the Chinese ST398-t011-SCCmec V isolates dated back to ~2008, later than human ST398 isolates in a recent study in Taiwan (63), with their most recent common ancestor with the European strains dating back to approximately 1996. ST239-t037-SCCmec III showed a diverse pattern of evolution, with diversification starting in the 1950s.
Our Chinese isolates diversified from 1995 onwards and were found most closely related to other samples from China. Two of our samples were more closely related to samples from Algeria, with earlier diversification. The evolution of our Chinese cohort appears to differ from that in geographically close countries, such as Singapore (64), possibly due to different political and economic histories.
FIG 3 Plots show all the isolates from this study together with all publicly available reference genomes for ST59, ST9, ST398, and ST239. Methicillin-resistant and -susceptible genotypes are indicated by the inner colored ring. Origin (sample sources) and region are color coded in the following rings. Isolates from this study and publicly available reference genomes are color coded in salmon and cyan, respectively. (a) ST59 isolates. The Chinese ST59 SCCmec V and ST59 SCCmec IV isolates clustered separately from the reference genomes collected elsewhere. ST59 isolates from our Chinese cohort were almost exclusively MRSA and spa types t437 and t441, in contrast to what was found elsewhere. (b) ST9 isolates. The ST9 isolates from our cohort clustered together with isolates from China and away from those collected elsewhere. ST9 isolates from our Chinese cohort were almost exclusively MRSA and spa type t899, in contrast to what was found elsewhere. (c) ST398 isolates. The Chinese ST398 clustered separately from the reference genomes collected elsewhere. The Chinese isolates are mainly MSSA and show a wide diversity of origin, unlike in Europe, where they are linked to pig farming. (d) ST239 isolates. The clustering indicates a close relationship between U.K. samples and samples from this study, possibly indicating international routes of transmission. Two food-associated MRSA isolates (t030) were found in this study and clustered far away from the other nine HA-MRSA isolates in this study (t037).
Overall, the molecular evolution of MRSA in China appears to be more recent than in other countries. Resistance to oxacillin and cefoxitin appears strongly associated with the clonal complexes CC59 and CC9. A search for features in the genome sequence of each isolate that could strongly correlate with resistance to each of the 10 selected antimicrobials was implemented using multiple supervised machine learning methods. Support vector machines (SVMs) are powerful yet flexible supervised machine learning algorithms which aim to classify the data by finding an N-dimensional hyperplane to separate the data points. In this study, the best overall performance considering all antibiotic models was obtained using a radial basis function (RBF) SVM classifier: accuracy, 76.6% to 95.5%; area under the curve (AUC), 66.5% to 96.2%; sensitivity, 5.8% to 99.9%; specificity, 32.9% to 99.8% (ranges of means across all antibiotic models) (Fig. 5a and Table S5). The best predictions of resistance/susceptibility were obtained for oxacillin (accuracy, 93.85%; sensitivity, 89.42%; specificity, 99.71%; and AUC, 96.24%) and cefoxitin (accuracy, 92.83%; sensitivity, 87.70%; specificity, 99.79%; and AUC, 95.26%). Overall, 8 antibiotics achieved an AUC of >80%; chloramphenicol and penicillin did not.
FIG 4 Evolution of four major MRSA lineages. Bayesian evolutionary analysis of ST59-SCCmec IV/V (a), ST9-t899-SCCmec XII (b), ST398-t011-SCCmec V (c), and ST239-t037-SCCmec III (d). The lineages are evolutionarily distinct. In all CC subsets, East Asian isolates cluster separately and have generally evolved later than European and American branches. Isolates from this study and publicly available reference genomes are indicated by the inner ring. Origin (sample sources) and region are color coded in the following rings. The size of the red circles on trees represents the posterior probability of each node.
FIG 5 (a) Prediction performance results of the RBF SVM classifier that achieved the best performance among the three investigated machine learning classifiers. The scores for each performance metric (y axes) are mean AUC, accuracy, sensitivity, and specificity from 30 training runs for each antimicrobial. Predictive models were generated to classify the resistance versus susceptibility profiles of 10 different antimicrobials (x axes). CFX, cefoxitin; CHL, chloramphenicol; CIP, ciprofloxacin; CLI, clindamycin; ERY, erythromycin; GEN, gentamicin; OXA, oxacillin; PEN, penicillin; SXT, trimethoprim-sulfamethoxazole; TET, tetracycline. (b) Hierarchical clustering of 673 isolates based on the 2,000 oxacillin-resistant genomic signatures (k-mers) recognized as the most significant by the trained classifier. Results have been data mined with respect to year of collection, ST, CC, type of sample, and source. (c) Hierarchically clustered heat map of the 10 antimicrobials based on the top 50 genes corresponding to the genomic features recognized as the most significant by the trained classifiers. Genes are color coded according to their function: resistance gene (red), virulence gene (blue), genes with function in horizontal gene transfer (HGT) (purple), genes with other functions (black). For each gene, the number of different k-mers present per gene per antibiotic model is shown on the plot, from a total of 2,000 recognized as the most significant by the trained classifier.
For oxacillin, the isolates clustered strongly by resistant/susceptible phenotype and showed strong correlation with clonal complexes despite population structure correction (Fig. 5b). Specifically, resistant isolates came from two major clonal complexes, CC59 and CC9. CC59 was found to be associated with a diverse range of food sources collected over all years. Given the correlation of both oxacillin and cefoxitin to the mecA gene, we would expect a high degree of similarity between the features of these two models. Of the 2,000 features (k-mers) considered for each model, 1,927 (96.35%) were the same for the oxacillin and cefoxitin models, and patterns of resistance seen for oxacillin were also observed in the cluster map for cefoxitin (see Data Set S1). Moreover, the cluster maps for oxacillin and cefoxitin showed a higher normalized k-mer frequency for samples belonging to CC59 than for other clonal complexes in the resistant samples. To the best of our knowledge, this difference in terms of k-mer frequency of CC59 samples was not previously observed in any other work, and it further indicates the importance of analyzing CC59 samples, especially in China. The cluster maps of the other eight antimicrobials (Data Set S1) did not show such strong clustering by susceptible/resistant phenotype. Notably, trimethoprim-sulfamethoxazole, tetracycline, and ciprofloxacin showed very similar patterns of resistance, with most samples being susceptible to these antimicrobials but with a small cluster of resistant samples present in all three models.
The resistant isolates in this cluster were primarily associated with two clones and sample types, CC9 pork isolates and CC630 infectious human isolates. Isolates exhibiting both trimethoprim-sulfamethoxazole and tetracycline resistance were also strongly correlated with MRSA, whereas ciprofloxacin resistance showed no association with MRSA. CC59 isolates, primarily susceptible to all three antimicrobials (trimethoprim-sulfamethoxazole, tetracycline, and ciprofloxacin), formed a large cluster on the trimethoprim-sulfamethoxazole and ciprofloxacin trees, while on the tetracycline tree, CC59 was more fragmented, showing that while there is clear correlation, there are important differences between these three models. Comparison of the important features of the three models showed an overlap of 79.9%. Both the erythromycin and clindamycin models showed two separate resistant clusters, one predominantly CC59 and one predominantly CC9. Other samples were also clustered by CCs but were mostly fragmented. Two antimicrobial models, gentamicin and chloramphenicol, had very imbalanced samples, with most isolates susceptible to these antimicrobials. In both of these models, there was fragmented clustering by CC. Machine learning reveals robust prediction of known resistance genes correlated with phenotype and novel genetic determinants of AMR phenotype. To better understand the relationship between antimicrobial resistance phenotype and genotype, we cross-referenced the 2,000 significant k-mers for each antibiotic model (10 of the 13 antibiotics) to the pan-genome of the 673 isolates (see Table S6) and summarized the genes found in Fig. 5c. For each antibiotic except penicillin, we obtained a list of potential genetic determinants of antibiotic resistance, with importance measured as the number of unique k-mers mapped back to the gene. For penicillin, the small number of susceptible samples impeded machine learning. For oxacillin and cefoxitin resistance, as expected, the mecA gene previously recognized as conferring resistance (65)(66)(67) was the primary gene found by machine learning, with 100 and 98 different k-mers, respectively, mapping back to this gene. The genes maoC and ugpQ, previously shown to be SCCmec-associated elements (68), were also found to be highly discriminant between resistant and susceptible cefoxitin and oxacillin phenotypes. Interestingly, ugpQ was also found to be in the SCCmecs of 342 of the 343 MRSA strains, and maoC was located in the SCCmecs of 332 isolates (see Table S7). The identification of genes known and expected to be correlated with the selected resistance phenotypes indicates the robustness of the methods employed, as stressed by Jaillard et al. (69). The insertion sequence ISSau3 (IS1182 family) was found to be highly predictive for both antibiotics. This insertion sequence has been reported to lie close to the SCCmec complex but has also been reported to inactivate the gene lytH, increasing resistance (70). Chloramphenicol resistance was highly associated with the known resistance gene cat (77 k-mers). The chloramphenicol resistance gene cat and the plasmid replication gene rep (also correlated with chloramphenicol resistance) were found to cooccur in 63 isolates from both animal and nonanimal food sources but were not found in any human isolates. rep encodes an initiator protein of pT181 family plasmids, including pC221, known to typically carry chloramphenicol resistance (71), indicating the likely presence of this plasmid in these food isolates.
Another gene, encoding tryptophan decarboxylase, was also significant for chloramphenicol resistance. This gene is a promising candidate: despite no previously known links to chloramphenicol resistance, the tryptophan biosynthesis pathway was previously associated with vancomycin resistance (72). Interestingly, resistance to trimethoprim-sulfamethoxazole was highly associated with the gene tcaA. Inactivation of this gene was previously shown to increase resistance to teicoplanin and vancomycin (73), but no link has been found to trimethoprim-sulfamethoxazole. This would benefit from further experimental validation. Ciprofloxacin and tetracycline were primarily related to genes involved in horizontal gene transfer (IS256 and ISBli29 [ISNCY family]) (Fig. 5c), suggesting these resistances may be plasmid mediated. Resistance to ciprofloxacin is typically caused by point mutation in the chromosomal gyrA and parC genes; however, there is growing evidence of plasmid-mediated resistance (74). This likely indicates a prevalence of plasmid-mediated quinolone resistance in food-related S. aureus isolates in China, as was previously reported in a smaller-scale study in Escherichia coli from farmed fish (75). The transposases ISSau3 (IS1182 family), ISBli29 (ISNCY family), and IS256 and the antimicrobial resistance gene aacA-aphD are present in many but not all isolates in our cohort from both human and animal food sources, suggesting the widespread prevalence of these genes correlated with multiple resistances in China. The machine learning approach also revealed the presence of significant associations between virulence genes and the antimicrobial resistance profiles. Several virulence genes (e.g., lpl2, essG, splF, sdrE, map, ssl, and others) (Table S6) were found to be associated with resistant phenotypes. Specifically, lpl2, a host invasion gene, was found to be a discriminant genetic feature of multiple resistant phenotypes (cefoxitin, oxacillin, ciprofloxacin, gentamicin, clindamycin, tetracycline, and erythromycin). The genes for type VII secretion (essG), serine protease (splF), serine-aspartate repeat protein (sdrE), and adhesin (map) were correlated with cefoxitin, clindamycin, erythromycin, gentamicin, and oxacillin resistance. Finally, the toxin gene ssl7 was correlated with cefoxitin, ciprofloxacin, clindamycin, erythromycin, gentamicin, oxacillin, trimethoprim-sulfamethoxazole, and tetracycline resistance. Other virulence genes also found to be associated with antimicrobial resistance (AMR) phenotypes can be seen in Fig. 5c and Table S6.
DISCUSSION
We have analyzed a large variety of S. aureus samples collected across China (27 provinces) over a 9-year temporal window, considering both food and human samples. In our study, we identified 29 novel sequence types (27 in food, 2 in human) with no genomic sequence available in public databases. Most S. aureus sequences available so far are of clinically relevant strains, and a substantial gap exists for less-pathogenic ones; thus, the new STs can be of relevance for further epidemiological studies leading to a better understanding of the emergence, reemergence, and spread of S. aureus diseases. Our refined analysis of the type IV SCCmec highlights the presence of further resistances and plasmid insertions within this short cassette. Variants of SCCmec IV hosting additional metal (cadmium) and antimicrobial (kanamycin and bleomycin) resistance genes, together with a novel SCCmec type identified in this study, provide further avenues of investigation into the epidemiology of MRSA.
These findings suggest that the SCCmec and the resistance determinants it contains might be transmitted via horizontal gene transfer and thus have a separate epidemiology with respect to the rest of the genomes (76,77). Altogether, our observations shed more light on the complexity of S. aureus epidemiology and on the need for surveillance of MDR and cross-host S. aureus to clarify the dissemination routes and avoid the spread of specific genomic traits. This data set can be easily meta-analyzed and integrated with further studies, which could lead to a deeper understanding of the epidemiology of this bacterium and of how to prevent and treat resistant infections, especially from an important area such as China. Using a comparative genomics pipeline inspired by Manara et al. (35), we considered our isolates in the context of other similar studies. Across all isolates in our study, 97% of the S. aureus isolates showed resistance to at least one antimicrobial, and 72% showed MDR, which was consistent with previous reports in China (>94% showed resistance to at least one antibiotic and >58% were MDR in food-based isolates) (78). The number of isolates carrying resistance to at least one antimicrobial found in this study was higher than in other food-based reports from Brazil (83%) (79), South Africa (71%) (80), and South Korea (51%) (81). Analogously, the rate of MDR in our study was higher than in India (53%) (82), South Korea (35%) (81), the United States (10%) (83), and Brazil (8%) (79). A recent surveillance study in Europe (53) revealed a high level of resistance among human-associated S. aureus, with 90% of isolates showing resistance to at least one antibiotic and 45% showing MDR, which is in accordance with our human data (84% showed resistance to at least one antibiotic and 31% were MDR). CC59 and CC5 were the most predominant clones in this study, which agrees with those previously reported in Asia (22,23). CC5 was also the most abundant CC type in a pan-European study of invasive S. aureus infections (53). Data from foodborne disease outbreaks and investigations showed that both CC59 and CC5 are common epidemic clones that cause staphylococcal food poisoning and infectious diseases (84)(85)(86). Our food isolates also showed a prevalence of SCCmec type IV (49.1%) and type V (25.4%), similar to previous findings in both China (78) and Germany (87). Additionally, SCCmec XII was found in 63 of our food isolates. SCCmec XII mainly spreads as LA-MRSA in China (44) and several Asian countries, such as Japan, Malaysia, and Thailand (88)(89)(90). Recently, MRSA with SCCmec XII has been reported to lead to clinical infections, suggesting a potential pathogenic risk for humans (91). In our human isolates, SCCmec III was prevalent (9/14), which is similar to other hospital-based studies in China (92,93), whereas SCCmec IV was prevalent in studies from Europe (35,94). The spa types in our cohort showed a different distribution to those in another Chinese food-based study (78), with t437 most prevalent in ours compared to t071 and t091 in the study by Liao et al. (78), although all of the most prevalent isolates in that study were also present in our cohort. In an Italian study (35), spa types t001 (4.9%), t002 (4.3%), t008 (3.3%), and t127 (2.7%) were prevalent. While t002 (5.8%) and t127 (5.8%) were present in our human isolates, t001 and t008 were absent, and spa types t164 (11.6%), t189 (7.2%), t037 (6.5%), and t085 (6.5%) were the most prevalent.
Our cohort has a low level of PVL-positive human isolates (2.17%) compared to that in other studies in China (95) (12.8%), Europe (35) (27.4%), and Africa (96) (17% to 80%). A comparison of the virulence factors present in our Chinese cohort with those of the Italian study by Manara et al. (35) shows many similarities, despite the differences in isolate hosts. Iron uptake, conserved antigen, and arginine catabolic mobile element (ACME) virulence factors found in the study by Manara et al. (35) were not present in our cohort. However, genes for immune evasion, toxins, ion transporter, adhesins, capsular polysaccharides, and serine proteases were found in similar proportions of isolates in both cohorts, though specific gene presence varied. Several immune evasion genes in our cohort are present in almost all isolates, including esxA/B/C/D, hld, and sbi, as found by Manara et al. (35). Additionally, the toxic shock syndrome toxin gene tsst-1 associated with ST22 in the study by Manara et al. (35) was present in 28 of our isolates; however, in our cohort, this toxin was associated with ST1 (n = 17, P < 0.0001) and ST30 (n = 4, P > 0.0001). The presence of these virulence factors increases the risk of isolates carrying these factors harming human health, and so the widespread existence of these in food-related isolates needs to be monitored. The CC59-t437-SCCmec IV/V, CC9-t899-SCCmec XII, and CC398-t011-SCCmec V clones were the most frequent food-associated MRSA clones in our study. It has been well documented that CC59-t437-SCCmec IV/V clones are major CA-MRSA clones in China and other Asian countries, threatening a vast population due to their epidemiological potential (22). However, they remain geographically confined and are seen in only low numbers in Europe (35,53), suggesting importation to Europe from Asia, as was noted previously (97). The considerable numbers of this CA-MRSA lineage from food in this study could indicate some human contamination, possibly a result of inadequate hygiene measures and improper handling of food (98). Additionally, most of the PVL-positive isolates (86.8%) belonged to CC59, demonstrating the pathogenic potential of these MRSA strains. CC9, the predominant LA-MRSA clone in Asia, was mostly identified from retail meat samples in this study. Elsewhere, CC398 is the prevalent LA-MRSA strain found in animals and humans across European countries and North America (23). Furthermore, in addition to meat, CC398-t011-SCCmec V MRSA isolates were detected from a greater variety of food samples, including nonanimal food such as cakes, noodles, fruits, and vegetables, suggesting a different epidemiological characteristic of these CC398 MRSA strains in China. Recently, this CC398-t011-SCCmec V MRSA clone was detected from humans with nonanimal contact in China and denominated as CA-MRSA (99). Thus, continued monitoring of this strain's epidemiology and preventing its widespread transmission are essential. MRSA belonging to CC5, CC22, and CC30 clones are major HA-MRSA clones in Europe, according to a previous pan-European study (53). In our study, while all three clones are present in human samples, they are in low numbers (CC5, n = 19; CC22, n = 4; CC30, n = 1), and only one of these isolates was MRSA (CC5), highlighting the importance of geographical isolation in MRSA dissemination. Moreover, an Italian study on human-associated S.
aureus in a pediatric hospital showed that CC5, CC22, CC8, CC1, and CC121 were the prevalent clones and that all CC1 and most CC5 (>96%) isolates were MRSA (35). In contrast, in our study, more than 95% of the CC1 and CC5 isolates were MSSA. In addition, for our human MRSA isolates, the major SCCmec type was SCCmec III (9/14 [64%]), while SCCmec IV (54/83 [67%]) was frequently detected in the Italian study and SCCmec III was not detected in any samples (35). Similarly, Aanensen et al. (53) only detected SCCmec III in clones from ST239, and the authors suggested these were likely to have been largely imported from Asia, with SCCmec IV again the most prevalent type in the pan-European study. However, the presence of PVL-positive isolates was strongly associated with MSSA in our study, which agreed with the Italian results (35). CC630 has been circulating in Asia since the 1970s and is the prevalent HA-MRSA clone (22). Correspondingly, in our study, the CC630 (ST239-t037-SCCmec III) clone was the predominant MRSA clone among human samples (9/14), with the other isolates being ST59 (3), ST1 (1), and ST5 (1). While these are not prevalent clones in Europe, both have been found in clinical isolates. ST239 clones were present in 8 isolates in the pan-European collection (53), and ST59 was present in 3 of 86 MRSA isolates in the study by Manara et al. (35). As similarly reported by Aanensen et al. (53), the human ST239 isolates in our study carried the virulence gene sasX, reflecting the widespread distribution of this gene in ST239 clones in China. However, the two food-associated ST239 isolates did not carry this gene and clustered away from the other ST239 isolates in our study in the phylogenetic tree, possibly indicating that lineages not carrying this gene may be circulating in food. Additionally, other MRSA lineages, for instance, ST88-SCCmec IV/V, which is already known as the "African clone" (96), were also frequently detected in this study. This finding indicates that the MRSA clones have spread cross-regionally. As in a limited number of previous phylogenetic studies, the ST9, ST59, ST239, and ST398 (58, 78) clones from our cohort tended to form separate clusters from the reference group, hinting at geographical differentiation. Meanwhile, Bayesian divergence analysis pointed to a more recent clonal evolution of MRSA in China compared to that in other countries (62,63), potentially driven by increased economic growth and antimicrobial usage in China (100,101). Bayesian divergence analysis also showed that while the ST9 and ST398 MRSA isolates in China exhibited relatively independent phylogenetic evolutionary relationships compared to those of isolates from other countries, there were signs of mixing of strains, possibly linked to the importation of meat products. ST239 MRSA in China showed a more recent phylogenetic relationship with isolates from other Asian countries, suggesting an introduction of ST239 MRSA to China from neighbors (64). Several approaches to analyzing whole-genome sequences (WGS) against resistance phenotypes were previously published in the literature. These fall into two main branches: machine learning approaches, as we have implemented here, and genome-wide association studies, a statistical association-based approach. For S. aureus, several groups have attempted to link resistance phenotypes to genes (53,69,102,103). For example, Aanensen et al.
(53) used published resistance genes, with manual curation, to predict antibiotic resistance phenotypes from genotypes with high accuracy. Although their findings provided an in silico typing method comparable with phenotypic accuracy, our machine-learning-based approach allows impartial identification of genetic determinants of resistance and is not limited to those already known. This is particularly important for China, where the differential evolution may have allowed resistance determinants to arise that are not as well represented in the public databases. Machine learning (ML) offers a powerful opportunity to analyze entire genomes quickly and efficiently against selected phenotypes, allowing for the identification of arbitrary numbers of sequences and other genomic features ranked on strength of correlation with the phenotype. Sequences identified by ML may contain genes with a known functional relationship with the phenotype as well as genes with no previously known association with that specific phenotype, thus providing a significant advantage over conventional bioinformatics methods based on checking for the presence/absence of known, manually chosen genes. Here, we have shown, for a large number of antibiotics, genes that are significantly and predictively associated with antibiotic resistance phenotypes in isolates evolved in China. Moreover, to the best of our knowledge, for the first time, we have shown that CC59 isolates have a higher k-mer frequency (for most of the studied k-mers) than other resistant clonal complexes for oxacillin and cefoxitin. This result further indicates the importance of CC59 samples, especially in China, and demonstrates the difference between this clonal complex and the other resistant clonal complexes available in our data set. In agreement with previous studies in China (104), all our cohort isolates were susceptible to vancomycin and linezolid, while large numbers of isolates were resistant to widely used antibiotics such as penicillin, erythromycin, and cefoxitin. Thanks to ML, we were able to identify genes which, individually or in patterns, showed a strong correlation with resistance to multiple antimicrobials, regardless of the source of the isolate. As an example of the robustness of the methodology, the maoC and ugpQ genes previously found to be SCCmec-associated elements, in addition to mecA, were found strongly correlated with cefoxitin and oxacillin resistance. The identification of genes known and expected to be correlated with the selected resistant phenotypes indicates the robustness of the methods employed, as stressed by Jaillard et al. (69). ML also revealed a correlation between ISSau3 (IS1182 family) and cefoxitin and oxacillin resistance. This insertion sequence has been reported to lie close to the SCCmec element and to inactivate the gene lytH, increasing resistance (70). The rep gene encodes an initiator protein of pT181 family plasmids, including pC221, known to typically carry chloramphenicol resistance (71), indicating the likely presence of this plasmid in these food isolates. Another gene, encoding tryptophan decarboxylase, was also found to be significant for chloramphenicol resistance (71). Although not previously linked to chloramphenicol resistance, this gene is a promising candidate, as the tryptophan biosynthesis pathway has been previously associated with vancomycin resistance (72). The tcaA gene was found to be associated with resistance to trimethoprim-sulfamethoxazole.
Inactivation of this gene was previously shown to increase resistance to teicoplanin and vancomycin, but no link has been found to trimethoprim-sulfamethoxazole. This would benefit from further experimental validation. Additionally, resistance to ciprofloxacin is typically caused by point mutations in the chromosomal gyrA and parC genes; however, there is growing evidence of plasmid-mediated resistance. In this study, the ML result showed that ciprofloxacin resistance was linked to several insertion sequences, suggesting a potential rapid spread of resistance among isolates in food and humans. Notably, using ML, significant correlations between virulence genes (lpl2, essG, splF, sdrE, map, and ssl7, plus others less strongly associated) and antimicrobial resistance phenotypes were found. The correlation between the presence of antibiotic resistance genes (ARGs) and virulence factors was observed previously (105), and it has been proposed that an increase in virulence allows the bacteria to overcome the fitness costs associated with the carriage of AMR genes (106,107). Other published papers have applied machine learning to identify AMR genes associated with resistance phenotypes; however, there are many differences that make this work unique. Hyun et al. (102) employed an SVM approach to identify genes from single nucleotide polymorphisms (SNPs) based on the pan-genome of 288 S. aureus isolates. These isolates, all taken from human sources, were isolated in Singapore, the United States, and Russia and were primarily composed of ST239, ST22, and ST5. Hence, our study is very different; this is also highlighted by the fact that only 5 of the 13 gene candidates identified by Hyun et al. (102) (hylX linked to erythromycin, oppD and the gene for acyl coenzyme A [acyl-CoA] linked to gentamicin, and the gene for P-type ATPase and rep linked to tetracycline) were also found to be important in our primarily food-based work. In addition, in our study, as described above, novel determinants underlying resistance were found strongly associated with the AMR phenotypes, such as tcaA, the gene for tryptophan decarboxylase, and ISSau3 (IS1182 family), an original finding with respect to the literature. Both Jaillard et al. (69) and Wheeler et al. (103) used genome-wide association studies to make predictions about S. aureus phenotypes from the whole-genome sequences. Both studies used isolates from U.K.-based humans, giving a very different population structure to that in this study. Jaillard et al. (69) made predictions against four of the antibiotics used in our study. They also tested methicillin, which we did not; however, we tested oxacillin, which is chemically very similar and has replaced methicillin in clinical use. For these antibiotics, the predicted genes associated with the resistance phenotype for methicillin (mecA), erythromycin (ermC), and one of the 2 gentamicin genes (aac) were in common with our work. Two genes predicted to correlate with trimethoprim resistance in the work by Jaillard et al. (69) (ybaK and mqo1) were associated with different antibiotics in our study (ybaK with cefoxitin, erythromycin, gentamicin, and oxacillin and mqo1 with cefoxitin, clindamycin, erythromycin, gentamicin, and oxacillin). This may be because, in our study, we tested trimethoprim in conjunction with sulfamethoxazole, as it is generally used clinically, rather than trimethoprim alone, as tested by Jaillard et al. (69). In the case of the study by Wheeler et al.
(103), six antibiotics overlapped with our study (gentamicin, oxacillin/methicillin, erythromycin, tetracycline, ciprofloxacin, and clindamycin), and the genes found to be associated with resistance for five of these were also found by us (aacA-aphD linked to gentamicin, mecA linked to oxacillin/methicillin, ermA and ermC linked to erythromycin, ermA linked to clindamycin, and tetK and tetM linked to tetracycline). However, differences were found in the case of ciprofloxacin, where distinct genes were identified in the two studies (the chromosomal genes gyrA and parC were significant in the study by Wheeler et al. [103], while the insertion element IS256 was found in this study). Discrepancies may reflect different resistance mechanisms, with this study indicating a prevalence of plasmid-mediated ciprofloxacin resistance in China and the study by Wheeler et al. [103] suggesting a prevalence of chromosomal resistance mechanisms in their U.K.-based study. The strong overlap between our results and previous works in identifying genes known to be correlated with the selected resistance phenotypes indicates the robustness of the methods we have employed and gives confidence to the novel predicted genes that have arisen from our analysis. Our more heterogeneous data set results in a slightly lower machine learning accuracy than that in other published works, 76.6% to 95.5% compared to 91.9% to 98.6% (Hyun et al. [102]) and 94.7% to 100% (Wheeler et al. [103]). Furthermore, we have also, for the first time, found a genetic determinant of AMR for chloramphenicol and cefoxitin in S. aureus by using machine learning. Our study has been able to assess the differentially evolved Chinese clones of S. aureus and show that while many resistance mechanisms align with those seen elsewhere globally, differences may have evolved. However, we would like to point out that AMR prediction models, which have already been developed from genome sequence collections of S. aureus and also of many other species, are often designed to maximize accuracy in predicting AMR phenotypes, emphasizing their diagnostic capabilities over their capacity to uncover genetic mechanisms of resistance. Many such models are also based on the detection of genes from a curated set of known AMR determinants, rendering them difficult to generalize to different treatments or organisms and unsuitable for discovering novel genes or interactions that drive resistance. In our case, we did not search against known AMR determinants but rather used the whole-genome information obtained by mapping the k-mers back onto the pan-genome. Based on the concept of One Health, our study emphasizes the importance of a holistic working approach across the food, human, food animal, and related sectors. The strong set of potential gene candidates identified in this work could provide new avenues of research to tackle the significant threat posed by antibiotic resistance in this part of the world.
MATERIALS AND METHODS
Sample collection and bacterial isolation. A total of 7,937 food-associated S. aureus isolates were cultured from various foods from 27 provinces during the years 2010 to 2018 in China. In addition, 142 S. aureus human-associated isolates, including 53 healthy and 89 infected human-associated isolates collected in Shanghai between 2015 and 2017, were employed in this study. All S. aureus isolates were confirmed using Vitek 2 Compact (bioMérieux, Craponne, France) and then were screened for MRSA by amplifying the mecA gene.
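The wet-lab screen above amplified mecA by PCR; an equivalent in silico check on assembled genomes could be sketched as below. The file names are hypothetical, the sketch assumes NCBI BLAST+ is installed and a mecA reference FASTA is at hand, and the 90%/90% thresholds are illustrative rather than the study's own.

```python
# Sketch: in silico analogue of the mecA screen on an assembly.
# File names and thresholds are hypothetical; requires NCBI BLAST+.
import subprocess

def has_mecA(assembly_fasta: str, mecA_fasta: str = "mecA.fasta") -> bool:
    result = subprocess.run(
        ["blastn", "-query", mecA_fasta, "-subject", assembly_fasta,
         "-outfmt", "6 pident qcovs"],
        capture_output=True, text=True, check=True)
    for line in result.stdout.splitlines():
        pident, qcovs = map(float, line.split())
        if pident >= 90 and qcovs >= 90:  # illustrative cutoffs
            return True
    return False

print(has_mecA("isolate_18A25.fasta"))  # hypothetical assembly file
```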
In total, 343 food-associated isolates and 18 human-associated isolates were identified as MRSA. These MRSA isolates, together with all human-associated MSSA isolates and 250 food-associated MSSA isolates (randomly selected from the full collection), giving 735 isolates in total, were used for the study of the population structure and the molecular epidemiology (see Table S1 in the supplemental material) and sent for genome sequencing. The identified isolates were stored in brain heart infusion broth with 40% (vol/vol) glycerol (HopeBio, Qingdao, China) at −80°C for the following analysis. DNA purification and extraction. Each isolate was grown in brain heart infusion (BHI) broth (HopeBio, Qingdao, China) at 37°C, and genomic DNA (gDNA) was purified using an Omega EZNA Bacterial DNA kit (Omega Bio-Tek, GA, USA). Genomic DNA was extracted with the sodium chloride-Tris-EDTA (STE) method. The harvested DNA was detected by agarose gel electrophoresis and quantified by a Qubit 2.0 fluorometer (Thermo Fisher Scientific, USA). Library construction and whole-genome sequencing. A total amount of 1 μg DNA per sample was used as input material for the DNA sample preparations. Sequencing libraries were generated using the NEBNext Ultra DNA library prep kit for Illumina (NEB, USA) according to the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. Briefly, the DNA sample was fragmented by sonication to a size of 350 bp, and then DNA fragments were end polished, A-tailed, and ligated with the full-length adaptor for Illumina sequencing, with further PCR amplification. Finally, PCR products were purified (AMPure XP system), and libraries were analyzed for size distribution by an Agilent 2100 Bioanalyzer and quantified using real-time PCR. The resultant DNA preps were sequenced using Illumina NovaSeq PE150 at Beijing Novogene Bioinformatics Technology Co., Ltd. Genome assembly and annotation. All sequences were preprocessed through readfq v10 (110). To clean the data, reads in which more than 40% of the bases were of low quality (quality value ≤ 20) were removed, as were reads with more than 10% N bases. Reads that overlapped the adapter by more than 15 bp with fewer than 3 mismatches were also removed. Clean data were processed for genome assembly with SPAdes v3.13 (111), and QUAST v4.5 (112) was used for assessing the contigs through assembly. The contigs with a length shorter than 1,000 nucleotides were filtered out. The completeness and contamination of genomes were assessed through CheckM (113) with the lineage_wf pipeline. We then obtained 673 high-quality S. aureus genomes (N50 > 50,000) which were used for further analysis. Genomes were annotated with Prokka v1.14.5 (114) using default parameters with --addgenes --usegenus. In silico subtyping identification. Sequence types were identified through MLST, which mapped the sequences to the PubMLST Staphylococcus aureus MLST database (115). Novel STs all showed one or more single-base mutations from known STs and were further confirmed by PCR, according to the same protocol given on the PubMLST website (https://pubmlst.org/organisms/staphylococcus-aureus/primers), and by DNA sequencing before being submitted to PubMLST for verification (115). Clonal complexes (CC) were annotated through the goeBURST Full MST algorithm using PHYLOViZ software (116) with a primary founder surrounded by single-locus variants (SLVs) and known CC type in the MLST database.
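As an illustration of the read-cleaning rules applied in the assembly step above (drop reads with more than 40% of bases at quality ≤ 20 or more than 10% Ns), a minimal sketch follows. Phred+33 quality encoding is assumed, and the adapter-overlap rule is omitted for brevity.

```python
# Minimal sketch of the read-cleaning rules described above (Phred+33).
def keep_read(seq: str, qual: str) -> bool:
    low_q = sum(1 for c in qual if (ord(c) - 33) <= 20)
    if low_q / len(seq) > 0.40:                    # >40% low-quality bases
        return False
    if seq.upper().count("N") / len(seq) > 0.10:   # >10% N bases
        return False
    return True

assert keep_read("ACGTACGT", "IIIIIIII")        # high quality: kept
assert not keep_read("ACGTNNNN", "IIIIIIII")    # 50% Ns: dropped
```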
The protein A (spa) repeat region was identified through spaTyper (117) for S. aureus sequences. SCCmecFinder (118) was used for typing SCCmec, and the contigs that aligned to the SCCmec region were further annotated with Prokka v1.14.5 (114). The names of predicted proteins from Prokka were further confirmed with BLASTp (119). The variant SCCmec regions were further aligned using BLAST to identify SCCmec variants and plasmid integration into the SCCmec element (120,121). SCCmec variants in isolates 11A1151 and 18A25 (region encompassing orfX [rlmH] to the direct repeat [DR] sequence GAAGCTTATCATAAGTAA) were aligned to MF062 (GenBank GCA_003240235) and N315 (GenBank D81934.2) using default BLASTn settings (119). Pairwise comparison using BLASTn with default settings was performed for the SCCmec IVa variant from isolates 11A832 and 18A245 (region encompassing orfX [rlmH] to the DR sequence GAAGCGTATCATAAATGA) against the NCBI nonredundant database, using the SCCmec region and/or only the J3 region of the cassette. Virulence factors and ARG analysis. Virulence factors and ARGs were searched through Abricate software (https://github.com/tseemann/abricate) using the VFDB (123) database set B and the CARD database (124). The BLAST search within Abricate was conducted with parameters of >90% identity and >75% coverage (proportion of gene covered), in agreement with previous studies (35). Virulence gene functional categories were assigned manually based on the gene entry in the VFDB database (123). All results were manually curated by careful search of the literature. In addition, to specifically target the fnbA/B genes known to generate false negatives due to isoforms not present in the public databases (125), we performed an additional BLAST search, using as queries the different isoforms as described by Loughman et al. (125) and selecting the best-hit entries. To identify the presence of the virulence factor sasX within each of our isolates, a BLASTn search was used with sasX (GenBank MH143577.1) as the query. Thresholds of 90% identity and 90% coverage were used. Core gene alignment and phylogenetic analysis. All annotated files were taken as input for pan-genome analysis with core gene alignments through Roary v3.13 (126). IQ-TREE v2.0.3 (127) was then used to construct the phylogenetic trees from the core genome alignment with the general time reversible (GTR+F+R10) substitution model. In addition, core genome maximum likelihood phylogenetic trees clustered by different STs were also constructed with all available reference genomes downloaded from the PATRIC database (45). For ST398, only a subset of reference genomes, with collection dates, were used due to the large number of isolates available. The phylogenetic trees were subsequently visualized through iTOL v4 (128). Bayesian divergence estimates. A subset of sequences from this study and PATRIC from ST9-t899-SCCmec XII, ST59-t437-SCCmec IV/V, ST239-t037-SCCmec III, and ST398-t011-SCCmec V in our cohort, alongside publicly available reference genomes (45) for the same lineages for which collection dates were available, were selected for Bayesian evolutionary analysis using BEAST v1.10.4 (129). As only two reference genomes belonging to ST59-t437-SCCmec IV/V were available, the other 11 ST59-SCCmec IV/V reference genomes were also recruited to the analysis data set. Analysis was conducted on a core genome alignment of each lineage by using Roary v3.11.2 (126).
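Returning briefly to the virulence factor and ARG screening above, the >90% identity / >75% coverage filter maps directly onto Abricate's tab-separated output. A minimal sketch follows; the input file name is hypothetical, and the column names follow Abricate's standard TSV header.

```python
# Sketch: filter Abricate TSV hits at >90% identity and >75% coverage.
import csv

def significant_hits(abricate_tsv: str):
    with open(abricate_tsv) as fh:
        for row in csv.DictReader(fh, delimiter="\t"):
            if float(row["%IDENTITY"]) > 90 and float(row["%COVERAGE"]) > 75:
                yield row["GENE"]

print(sorted(set(significant_hits("isolates_vfdb.tsv"))))  # hypothetical file
```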
All combinations of three clock models (strict, uncorrelated log normal, and uncorrelated exponential) and four tree priors (constant coalescent, logistic growth, Bayesian skyline, and birth-death model) were tested using stepping-stone sampling on a subset of the isolates (ST239 isolates) to identify the best model. Log marginal likelihood values were in the range of −2,459,423 to −2,459,156. The best model was an uncorrelated exponential relaxed clock model with a Bayesian skyline growth model. The GTR-gamma nucleotide substitution model was used, as selected for the maximum likelihood tree. For ST59, the model did not converge after 250 million steps, and so instead, a simpler model was used (strict clock and constant coalescent growth). The analysis was run for 2 independent chains until the effective sample size (ESS), that is, the effective number of independent draws from the posterior distribution, for all parameters was greater than 200 per chain. This entailed each chain running for approximately 150 million steps. Convergence was assessed in Tracer v1.7.1 (130), and chains were subsequently combined using LogCombiner v1.10.4 (131). The maximum clade credibility tree was selected using TreeAnnotator v1.10.4 (131) and then visualized in iTOL v5 (128). Machine learning. Machine learning was used to find features of the sample genomes that could be used to predict resistance to the panel of 13 antimicrobials. Sample genomes were first split into overlapping k-mers of 13 bp in length using GenomeTester (132) to produce a feature table for all samples. Taking each antimicrobial individually, the AMR phenotype of susceptible or resistant (Table S1) was used as the class label, with intermediate phenotypes neglected. As the classes were unbalanced, a synthetic minority oversampling technique (SMOTE) was applied to oversample the minority class, compensating for the unbalanced classes (133). The number of splits in the nested cross-validation and the number of k-nearest neighbors (default value of 5 neighbors) for SMOTE necessitated a minimum number of 12 samples in the minority class. From the panel of 13 antimicrobials, three (daptomycin, linezolid, and vancomycin) had an insufficient number of resistant isolates (minority class) to train and cross-validate the machine learning model, leaving 10 trained models. The Python package scikit-learn (134) was used to reduce the number of features. To correct for bias from the clonal population structure, the k-mers were filtered based on weighted pairwise chi-squared tests between each feature and the phenotype class, as suggested by Aun et al. (135). The weights of each genome were calculated using the method of Gerstein, Sonnhammer, and Chothia (136), based on a matrix of Mash distances (135). Subsequently, another round of k-mer filtering was performed, and the top 2,000 features (k-mers) with the highest chi-squared statistic were selected (P value lower than 0.00000000000001, confidence value of more than 99.9999999%). A panel of machine learning algorithms was then run in the scikit-learn package (134): logistic regression (LR), linear support vector machine (SVM), and radial basis function (RBF) SVM. Nested cross-validation (NCV) was employed to assess the performance and select the hyperparameters of the proposed classifiers.
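A condensed sketch of this stage is given below, with toy data standing in for the real k-mer table. The population-structure weighting is omitted, and the nested cross-validation detailed in the next paragraph is reduced to a plain cross-validation; imbalanced-learn's pipeline keeps SMOTE restricted to the training folds.

```python
# Sketch: chi-squared selection of 2,000 k-mers + SMOTE + RBF SVM (toy data).
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(120, 5000))  # toy k-mer presence/absence table
y = rng.integers(0, 2, size=120)          # toy resistant/susceptible labels

model = Pipeline([
    ("select", SelectKBest(chi2, k=2000)),           # top 2,000 k-mers
    ("smote", SMOTE(k_neighbors=5, random_state=0)), # balance the classes
    ("svm", SVC(kernel="rbf")),                      # RBF SVM classifier
])
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```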
The inner loop of the NCV found the best hyperparameters of each classifier using stratified 3-fold cross-validation; the outer loop measured the performance metrics using stratified 5-fold cross-validation. Each algorithm was run 30 times, and metrics were collected for each run. The mean and standard deviation (SD) from the 30 iterations were then used as the final result statistics. The following prediction metrics were plotted using Seaborn: accuracy ([true positives (TP) + true negatives (TN)]/[P + N]), sensitivity (true-positive rate, TP/P), specificity (true-negative rate, TN/N), and area under the receiver operating characteristic curve (AUC). Identification of AMR, virulence, and HGT genes. Where the machine learning was able to predict the antimicrobial class based on k-mers, these were then used to search the genome for genes that contained the k-mers. Using the pan-genome of the study isolates, annotated using Prokka, the k-mers were mapped to genomes using a BLASTn query with the following parameters: evalue, 1,000; word_size, 13; gapopen, 5; gapextend, 2; outfmt, 5; strand, "plus." Genes with an identity of >70% and coverage of >70% were considered to be variants of the same gene and hence were discounted as duplicates, as done in previous literature (137); however, a more stringent threshold was used to ensure all gene variants were accounted for. The k-mer hit count (how many k-mers mapped to each identified gene) of the genes identified was then assessed for statistical significance at a significance level of 0.05 using a binomial exact test, with the probability of a gene hit based on the length of the gene and the number of k-mer combinations possible per gene. All genes found were checked in the published literature to find previous associations with AMR, virulence, or horizontal gene transfer (HGT). A clustered heat map was produced using the Seaborn package in Python, showing the number of k-mers mapped to each gene per antibiotic. Data availability. Short-read sequence data for all 673 isolates used in this study are deposited in the NCBI SRA and can be found associated with BioProject PRJNA633996. The code used in this study is available in the following GitHub repository: https://github.com/tan0101/Saureus-mSytems-2021.
SUPPLEMENTAL MATERIAL
Supplemental material is available online only. DATA SET S1, XLSX file, 18.
2021-06-09T06:18:30.526Z
2021-06-08T00:00:00.000
{ "year": 2021, "sha1": "d7fd42e2a729a367eb1c013b82136ee4685ecc81", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1128/msystems.01185-20", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "03c17532d1e2ddf9b3b45358c8331547c6ecd14f", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
264996428
pes2o/s2orc
v3-fos-license
Mesoscopic Model of Extrusion during Solvent-Free Lithium-ion Battery Electrode Manufacturing: Solvent-free (SF) manufacturing of Lithium-ion battery (LIB) electrodes is safer and more environmentally friendly than the traditional slurry casting approach. However, as a young technique, the pathways and operating conditions of SF manufacturing are still under development. In the different SF processes reported in the literature, extrusion is a common step. A detailed model of this process would be extremely computationally demanding. This work proposes a novel simplified discrete element model at the mesoscopic scale for the extrusion step during SF manufacturing of LIB electrodes. In addition to active material particles, we consider fluid-like solid particles to approximate the molten polymer and the carbon additive phases. The formulation and other process parameters are taken from our experimental facility, which uses extrusion to fabricate filaments for 3D printing of LIB cells. The extrusion is carried out in a conical twin screw extruder. Our approach allows us to obtain representative electrode microstructures after extrusion, for which electrical conductivity, effective ionic diffusivity, tortuosity factor and porosity are calculated. The model is a proof of concept that is employed to investigate the influence of the extruder speed and the cohesion level on the resulting electrode properties.
Introduction
The rise of the production of Lithium-ion batteries (LIBs) calls for a global improvement of the electrode manufacturing process. At present, slurry casting is the standard technique. Four different processes have been employed in the SF approach: hot pressing, spray deposition, dry process by melting extrusion (MeltE) and 3D printing by material extrusion (3DP). [1] Although the stages themselves may vary, the main steps of this process typically adhere to a similar pathway (Fig. 1a). Initially all the raw materials are in solid phase and include active material (AM), carbon additive (CA), permanent binder and sacrificial binder. The latter allows the creation of the porosity in the electrode and also improves the extrusion processability by increasing the polymer content. Extrusion allows the melting of the binders and facilitates the mixing of all formulation components under high shear rates. This process can be performed in twin-screw extruders (TSE) or in internal mixers. The latter are typically employed only at an academic scale to minimize the use of raw materials. [1] After going through the extruder outlet, the resulting paste solidifies, which allows for subsequent calendering and partial debinding steps. This is necessary for attaining the desired final electrode microstructure. Material extrusion is one of the seven categories of 3D printing. [6] Fused Filament Fabrication (FFF/FDM), a material extrusion process, has been used for SF manufacturing of electrodes. However, some solvent is still required for the mixing stage. [1] A completely solvent-free 3DP is challenging, and to the best of our knowledge only the recent work conducted by our team has achieved this as a proof of concept so far.
[7] The main steps of this process are depicted in Fig. 1b. Here AM, CA and two polymers feed the TSE. The key difference is that both polymers are fundamental components of the final electrode microstructure; therefore, there is no debinding step. Polypropylene (PP) acts as a binder giving mechanical stability to the electrode, while polycaprolactone (PCL) provides the electrolyte path after soaking with liquid electrolyte. In 3D printing by material extrusion, only LFP has successfully been used so far as the active material in a completely solvent-free process. [7] The resulting filament is fed into the 3D printer, allowing for the creation of electrodes with arbitrary shapes. Despite the outstanding potential of 3D printing, there are still some disadvantages, such as poor cycling, nozzle clogging, low mechanical performance and low ionic conductivity. All of the above highlights the key importance of the extrusion step in SF electrode manufacturing. A model can help to prevent or limit the trial-and-error experimental approach, for instance by directly studying the operational parameters, which are not optimized at this stage of the experimental research. It offers the opportunity to gain a deeper understanding of the extrusion process and explore various avenues for enhancing its efficiency. Numerical modeling of single polymer extrusion has extensively been applied in other fields; recent reviews can be found in [9] and [10]. Basically, the process can be divided into three stages: solid transport, melting and liquid flow. For the second and third stages, continuous approaches, like Computational Fluid Dynamics (CFD) or Smoothed Particle Hydrodynamics (SPH), are more suitable. Recently, Celik et al. [14] proposed an approach coupling DEM and CFD that allows the study of all three extrusion stages. This approach, which presents a high computational cost, has not yet been applied to real extruder geometries. The challenge is even greater when modeling the extrusion of dense suspensions, such as those encountered in SF battery electrode manufacturing. This is because in this process, the AM and CA do not melt but are present in high concentrations within the paste. A similar fluid-solid interaction in twin-screw domains arises in other applications such as wet granulation and wet mixing. Washino et al. [15] proposed a CFD-DEM model of wet granulation in a small domain of a mixer. Computational cost can be reduced by implicitly considering the fluid effect on the particles through a hydrodynamic force. This approach was employed for the simulation of a section of a twin-screw granulator [16] and for the modeling of a dense suspension extrusion in a square-entry die. [17] This methodology ensures a robust representation of the fluid influence; however, the fluid phase is not part of the obtained microstructure. On the other hand, in the slurry casting process, Coarse Grained Molecular Dynamics and DEM methods developed by our research group were able to represent explicitly a liquid phase during slurry mixing [18] and drying, [19] and to calibrate their parameters with experimental viscosities and densities. [20] The same methods were applied by our group for the simulation of the calendering process.
[21] These are mesostructural approaches; however, the term microstructural is employed hereafter, as commonly used in the battery field. By explicitly considering the AM solid particles and the other particles, DEM allows the study of mixing and aggregation among all the materials. The influence of the dispersion of carbon black was analyzed experimentally and numerically for a small number of particles. [22] Srivastava et al. [23] related the cohesion and adhesion at the mesostructure to the electrochemical and mechanical properties of the electrode. Ludwig et al. [24] investigated the scenarios of dry mixing (carbon, binder and active material) for different values of cohesion among the particles in a simple geometry. Finally, for the slurry process, a model was proposed to investigate the behavior of a viscous fluid (high solid content) in a section of a TSE using SPH. [25] This model, using real slurry rheological data, allows the calculation of local shear rates; however, a microstructure cannot be obtained due to its continuum nature. To the best of our knowledge, the modeling of the extrusion process for the dry manufacturing of electrodes has never been reported before. The present study proposes a new microstructural DEM model of extrusion during SF battery electrode manufacturing. The solid and molten phases are explicitly considered in the entire geometry of a twin-screw extruder. The simplifications made regarding the molten phase allow the simulation of hundreds of thousands of particles, which yields representative electrode microstructures.
In the following we start by describing the characteristics and assumptions of our model. Subsequently, we investigate different feeding approaches, cohesion levels and extruder rotation speeds. The obtained electrode microstructures, using a realistic experimental formulation, are critically analyzed. Finally, we conclude and indicate further directions for our work.
Model
Our model is intended to describe the extrusion step of both SF processes shown in Fig. 1. The following description is focused on the 3DP process (Fig. 1b) due to the availability of the required experimental apparatus in our facilities. The raw materials used in our experiments are the active material LiFePO4 (LFP), carbon nanofibers, the polymeric binder PP, and PCL, the polymer for the electrolyte path. For simplicity, and to focus on the electrode microstructure, the PCL is not considered in the numerical model. Our SEM images of the extruded filament show complex networks among PP, carbon nanofibers and LFP (see Fig. S1 in SI). In this first model, we assume two distinct particle types, one for the AM and another, labeled BC (Binder-Carbon), which represents the composite of PP and carbon nanofibers, as shown in Fig. 2. Our model is based on the classical DEM, [26] where particles are represented as spheres. The extrusion simulation is carried out in a generic conical twin-screw extruder, adapted from [27], the same type as our laboratory extruder. The particle size distribution of LFP
S2 of SI) ranges from 0.3 µm to agglomerates of 20 µm. Assuming 10 µm LFP agglomerate particles, filling the extruder completely would require around 10^9 particles, which is unfeasible for DEM simulations at reasonable computational cost. Neglecting fluid coupling, using periodic boundary conditions and scaling the particles are the main solutions for reducing the computational cost. [28] The first is already assumed, while the second is avoided due to the absence of guaranteed periodic flow in the current extruder setup. Therefore, we opt for a change of scale of the extruder as a means of reducing computational cost. The outlet is reduced from its experimental diameter of 2000 µm to 60 µm (Fig. 2). According to our preliminary tests, any further decrease in the extruder size causes a change in the particle dynamics due to the accumulation of particles in the inter-screw region. The BC particles are assumed to have a smaller size than the AM particles so that they are able to form a continuous phase. However, very low values significantly increase the computational time; as a compromise, 5 µm is the selected diameter. The impact of lower-diameter particles will be investigated in future work.
In our extrusion simulation, the particles are subjected to gravity, particle-particle and particle-wall interactions. Two types of interactions are identified: a repulsive force due to collisions and an attractive force caused by cohesion. Regarding the latter, we neglect non-contact forces (Lennard-Jones and van der Waals) for simplicity. The cohesion force F_coh for particles in contact is given by the Simplified JKR model:
F_coh = k A n̂,
where k is the contact energy density, A the contact area and n̂ the unit normal vector. The normal (F_n) and tangential (F_t) collision forces are calculated with the elastic Hertzian model. A calibration of these values with experimental data is needed in future work. Rolling friction is also considered by employing the constant directional torque (CDT) model. The Young's modulus and the Poisson ratio of LFP are found in the literature. [29] Using these parameters, the Rayleigh time step can be estimated. The immediate choice of a time step equal to 20% of the Rayleigh time, as suggested in the literature, results in a very long simulation time for filling the extruder. Higher time steps yielded numerical instability in the simulations. Therefore, a decrease of three orders of magnitude in the LFP Young's modulus is necessary in order to achieve feasible running times and stability. A similar reduction changed the quantitative results for DEM simulations of cohesionless particles but not the trend, [30] which is the essential aspect for our comparative analysis in this work. The same decrease was applied to the Young's modulus of all materials to minimize the effect on the real particle dynamics. The values of the model parameters are given in Table S1 of the SI.
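To make the time-step trade-off concrete, the following is a minimal Python sketch of the Rayleigh critical time step for a Hertzian DEM contact together with the Simplified JKR cohesion force. The material values are hypothetical placeholders, not the calibrated parameters of Table S1, and the 20% safety factor follows the rule of thumb mentioned above.

```python
import numpy as np

def rayleigh_timestep(radius, density, youngs_modulus, poisson):
    """Rayleigh critical time step of a DEM sphere with Hertzian contact."""
    shear_modulus = youngs_modulus / (2.0 * (1.0 + poisson))
    return (np.pi * radius * np.sqrt(density / shear_modulus)
            / (0.1631 * poisson + 0.8766))

def sjkr_cohesion_force(ced, contact_area, normal_unit):
    """Simplified JKR cohesion: F_coh = k * A * n_hat (attractive, along normal)."""
    return ced * contact_area * normal_unit

# Hypothetical, uncalibrated LFP-like values (not the Table S1 parameters):
radius, density, poisson = 5e-6, 3600.0, 0.25         # m, kg/m^3, -
for youngs in (1e11, 1e8):                            # real vs. 3-orders-reduced E
    dt = 0.2 * rayleigh_timestep(radius, density, youngs, poisson)
    print(f"E = {youngs:.0e} Pa -> dt = {dt:.2e} s")
```

Because the Rayleigh time scales with 1/sqrt(G), reducing the Young's modulus by three orders of magnitude enlarges the admissible time step by about sqrt(1000) ≈ 32x, which is the effect exploited in the simulations.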
The proposed model can be applied to other active and polymer materials by simply adjusting the following material properties: Young's modulus, CED and density. Our simulations, involving around 200 000 particles, take between 6 and 15 days, depending mainly on the chosen rotation speed of the extruder. We used 64 cores of an AMD EPYC 7513 @ 2.60 GHz (256 GB of RAM) on the MatriCs platform (Université de Picardie-Jules Verne, France). The following open-source software was used: LIGGGHTS [31] for the DEM simulations, OVITO [32] for visualization and MeshLab [33] for removing the auxiliary elements of the extruder (screws, gaskets, etc.) and adding a short section at the end of the extruder geometry (see following section). The commercial software GeoDict (Math2Market) [34] was employed for characterizing the resulting structures.
For clarity, we summarize the main assumptions of the model:
• Each particle of the model represents an aggregate, which is composed of a set of primary particles that are not explicitly considered.
• The molten phase (binder) is represented as part of a fluid-like solid particle.
• An equivalent particle (BC) represents the binder and carbon nanofiber composite.
• Constant values are considered for the cohesion coefficient and Young's modulus of the binder/carbon particles.
• The polymer for the electrolyte path (PCL) is not explicitly considered in the model.
• The size of the extruder is reduced in the simulations with respect to the actual experimental size.
• The Young's modulus of LFP is reduced.
• The experimentally observed expansion of the filament at the outlet is not considered.
Results and Discussion
Our proposed model is used with the same formulation as in our extrusion experiments, [7] excluding the PCL polymer, which is not considered here (Table 1). We investigate the influence of the feeding approach, cohesion level and extruder speed on the mixing of the materials and the resulting electrode microstructure. A cylindrical section is added at the extruder outlet (Fig. 2) to mimic the cylindrical filament formed after the solidification of the paste on a conveyor belt downstream from the extruder. Since the particles are still confined in the cylindrical section, the expansion of the filament observed experimentally at the outlet of the extruder is not simulated. However, the very high degree of mixing during extrusion means that the microstructural changes inside the extruder are much more significant than those occurring during expansion. All the walls of our domain are rigid boundaries; only the screws are not stationary. The particles are free to move at the exit of the cylindrical section added at the extruder outlet. In the simulations, the electrode properties are analyzed in the microstructure obtained at the cylindrical section. Except for case E, all simulations are carried out at 500 rpm in order to save computational time. The simulation conditions for each case are described in Table 2.
For the sake of reproducibility, the data required to use our model are listed. The rotation speed and the geometry of the extruder in any CAD format are needed. Regarding the formulation, the mass or volume fraction of each component is required, as shown in Table 1. The material parameters for each component of the model are: density, cohesion energy density, Young's modulus, Poisson ratio, friction coefficient, rolling friction coefficient and coefficient of restitution, as presented in Tables S1-S5. Finally, the mean size or the particle size distribution has to be defined.
Table 2.
Simulation conditions for each case. The different feeding approaches are shown in Fig. 3 and the values of cohesion energy density are specified in Table S2 of the SI.
Selection of Feeding Approach
The order in which the materials are fed into the extruder impacts the final product. In our experiments, the polymer powders are fed initially and go through the extruder in a continuous loop by means of a recirculation system. Once the polymers are molten, a premix of LFP and carbon powders is fed into the extruder. We simulate one passage of the materials until the extruder is filled and a steady state at the outlet is achieved. Therefore, the exact feed conditions of the experiments cannot be reproduced. As an alternative, three different strategies are employed and analyzed (a, b, c) (Fig. 3). Scenario A mimics a premix of LFP and BC (a), while the others consider the two powders as initially separate. B considers the two powders entering continuously one after the other (b), while in feeding approach C, the inlets are located side by side (c). The 3D view of the extrusion process (Fig. 3) clearly shows that scenario C has the lowest mixing quality. In order to quantify the extent of mixing after the material has gone through the extruder, a Radial Distribution Function (RDF) can be estimated (right-hand side of Fig. 3), even if the system is confined within the outlet, to show the relative frequency of particle-to-particle contacts. In scenario C, this function shows a larger peak at 10 µm corresponding to LFP-LFP contacts, indicating poor mixing. Due to the recirculation system used in our laboratory, this is not observed in our experiments, but it could appear in industrial extruders, which commonly have a single passage. In that case, the scenario can be mitigated by premixing all the components, as reported in at least one laboratory SF process. [5] Scenarios A and B only slightly differ near the inlet region, though they quickly homogenize. Therefore, scenario C is the most critical for mixing quality during the simulated extrusion. This is the one chosen for the investigations in the subsequent sections, since in that case the simulations can provide more interesting information.
Influence of Cohesion Level
Here the impact of the cohesion strength of the LFP-BC and BC-BC interactions on the electrode microstructure is evaluated. In the simulations, this is achieved by changing the cohesion energy density (CED) of both interactions. The aim of this comparison is to indirectly elucidate the impact of polymer viscosity, which depends on the chosen extrusion temperature, a choice that is not obvious for experimentalists. Broadly speaking, an increase in viscosity can be represented by an increase in CED. One simulation with lower values of CED (case C) and one with higher values (case D) are carried out. The specific CED values are presented in Table S2. Fig. 4 shows snapshots of the extruder as it completely fills for both scenarios. As expected, case D exhibits a more compact flow. The difference is expected to become more pronounced for the case where the extruder has a continuous screw section, as in many real configurations. Furthermore, in the initial stages, the extrusion product shows poor mixing for case C, which improves as the extruder fills.
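As an illustration of how such an RDF can be computed from the DEM particle positions, here is a minimal, naive O(N²) Python sketch. It is a generic implementation, not the authors' code; the normalization assumes an approximately homogeneous region of known volume, and non-periodic confinement (as in the outlet section) biases g(r) at large r.

```python
import numpy as np

def radial_distribution(positions, r_max, n_bins, volume):
    """Same-species radial distribution function from particle centers
    lying in a region of known volume (naive O(N^2) pair loop)."""
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(dists, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    expected = 0.5 * n * (n / volume) * shell_vol   # ideal-gas pair count
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, counts / expected

# The AM-AM curve of Fig. 3 would use only the AM positions; cross-species
# curves (AM-BC) instead count all i-in-AM, j-in-BC pairs with
# expected = n_am * (n_bc / volume) * shell_vol.
```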
Once these two cases with different cohesion were obtained, their impact on the electrode properties could be studied. To calculate the structural parameters of the resulting electrode, a procedure similar to that of our previous publication is used. [35] The details are specified in the SI section. Due to the difficulty in accurately measuring some of the AM and BC properties, those values are calculated from estimated parameters. While no absolute values are provided, our characterization allows us to compare between simulation cases. The 3D-resolved electrode microstructures for each case are depicted in Fig. 5. Case D has higher electrical conductivity (σ), mainly because of the lower porosity (ε) that allows the presence of more conductive BC particles. On the other hand, mesostructure C presents a lower tortuosity factor (τ) and a higher effective diffusivity (D_eff), which means higher ionic conductivity. This highlights a compromise between electrical and ionic conductivities.
Influence of extruder speed
Our lab experiments are carried out at a screw rotation frequency of 50 rpm. Tests at higher frequencies are planned to investigate their effect on the produced filament. While this extruder allows up to 400 rpm, due to the high viscosity of the paste, the maximum feasible experiments should be at around 150 rpm. For industrial applications, extruders should allow higher frequencies. Numerical simulations can help anticipate the effects of very high rotation speeds in a cheaper and safer way. For comparison purposes, simulations at 500 rpm (case C) and 50 rpm (case E) are carried out here. Fig. 6 shows the obtained 3D electrode microstructures for both cases. The lower speed case resulted in lower porosity, leading to slightly lower ionic conductivity and higher electrical conductivity. Therefore, there is a compromise when increasing the rotation speed. In experiments, Dreger et al. [36] found a similar compromise during the extrusion step in wet electrode manufacturing. Increasing the rotation speed up to a given value caused an increase in electrical conductivity; however, further increases in speed had the opposite effect. They pointed to the strong reduction in carbon agglomerate size as a cause for the decrease in electrical conductivity at very high speeds. Although we do not explicitly consider the carbon in the simulation, we can study the agglomeration through our BC particles. To this end, we removed the AM particles from the obtained microstructures as a post-processing step. Then the coordination number is calculated to provide an idea of the BC-BC agglomeration. Fig. 7 shows lower coordination numbers for the higher speed case. Thus, similar to the experimental observations, the simulations confirmed that very high speeds decrease the size of the carbon agglomerates, which results in lower electrical conductivities. On the other hand, the lower speed case shows higher coordination numbers, i.e., larger carbon-binder agglomerates, which can result in some decrease of its electrical conductivity when the carbon additive is modeled explicitly in future work.
Unfortunately, a quantitative validation of the model is not possible because the current state-of-the-art experiments do not yet provide detailed data. However, different points of qualitative validation were shown in this section. The first subsection of the Results validated that our single-passage consideration is able to predict the real observations of poor mixing scenarios. Likewise, in the second subsection, we demonstrated that the simple cohesion parameter is able to produce the expected qualitative changes in the electrical and ionic conductivities. Finally, and most importantly, the model predicted a decrease of the electrode electrical conductivity due to a decrease in the size of the carbon agglomerates, similar to what was observed by Dreger et al.
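A sketch of the post-processing step described above: after discarding the AM particles, count the BC-BC contacts per particle. The tolerance factor and the use of SciPy's k-d tree are implementation choices of this sketch, not taken from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def coordination_numbers(centers, radii, tol=1.02):
    """Contacts per particle: two spheres are in contact when the distance
    between centers is <= tol * (r_i + r_j)."""
    tree = cKDTree(centers)
    z = np.zeros(len(centers), dtype=int)
    for i, j in tree.query_pairs(tol * 2.0 * radii.max()):
        if np.linalg.norm(centers[i] - centers[j]) <= tol * (radii[i] + radii[j]):
            z[i] += 1
            z[j] += 1
    return z

# BC-BC agglomeration: keep only BC particles before calling, e.g.
# z_bc = coordination_numbers(centers[species == "BC"], radii[species == "BC"])
```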
[36] in experiments in the high rotation speed range. This is the kind of qualitative result at the level of the microstructure that our model aims to predict for different operating conditions.
Conclusion
A 3D microstructural proof-of-concept model of extrusion during solvent-free LIB electrode manufacturing is proposed. Active material particles and an equivalent particle consisting of binder and carbon additive were considered. In this respect, some experimental studies with carbon black, as well as our SEM images with carbon nanofibers, suggest that an explicit consideration of carbon nanofibers in the simulations will, in future work, improve the description of carbon additive mixing in extrusion.
Although extrusion simulations entail a high computational cost, our assumptions make the simulation feasible in the entire geometry of a reduced-size extruder. This allows the consideration of the complex trajectories of the particles in the extruder, which directly impact the aggregation/disaggregation phenomena. In this way, despite the simplistic representation chosen for the molten phase, the model was able to reproduce pastes with different cohesion levels, which can be obtained experimentally by changing the extrusion temperature. Still, further calibration with experimental data and the inclusion of shear forces are required in future work. Furthermore, the developed approach was able to produce 3D microstructures of the filament obtained by the extrusion process. This feature allowed the study of the influence of the extruder speed on the resulting structure. Simulations at very high extruder speeds were able to reproduce the experimental observation of decreased cathode electrical conductivity due to carbon contact loss. Predictions of increasing conductivity as the speed rises in the low-speed range are to be expected with an explicit consideration of the carbon additive, following the experimental evidence. Moreover, experimental and numerical investigations are called for to achieve a better understanding of the PP-C interactions, which still remain unknown, unlike the PVdF-C matrix in slurry casting. We also plan to accelerate computational speeds through innovative numerical methods and the use of machine learning. The obtained microstructures can be embedded in electrochemical heterogeneous models for simulating performance, as already carried out by us for the wet manufacturing process. [37] Our model can be employed for a deeper understanding of the effect of the binder-active material ratio studied in solvent-free experiments [5] or adapted to investigate other types of battery technologies such as sodium-ion and solid-state batteries. It brings, for the first time, a digital solution to assist in the optimization of the dry processing of battery electrodes, towards the reduction of the time to market of these new processing methods.
Figure 3. Mixing states and radial distribution functions (RDF) for three different feeding approaches. In the RDF graphs: BC-BC (grey), BC-AM (red) and AM-AM (blue). The black arrow highlights the larger value of AM-AM contacts for case C.
Figure 4. Evolution of extruder filling for highly cohesive (case D) and low-cohesion (case C) pastes. Snapshots of each case are shown for three different numbers of particles in the extruder: 30 000, 60 000 and 120 000.
Figure 5. Microstructures and properties of the obtained electrodes for highly cohesive (case D) and low-cohesion (case C) pastes.
Figure 6. Microstructures and properties of the obtained electrodes for different extruder rotation speeds: 500 rpm (case C) and 50 rpm (case E).
Figure 7. Coordination number of the obtained electrode considering only BC-BC contacts, for different extruder rotation speeds: 500 rpm (case C) and 50 rpm (case E).
Table 1. Formulation and density of the components considered in extrusion.
2023-11-04T08:25:58.800Z
2023-11-27T00:00:00.000
{ "year": 2023, "sha1": "285cbece0e45af1d2f4e5cdfea30a885de051f7d", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/batt.202300441", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "4553b26f6f7a51f5a9fe64522502dd4001150b51", "s2fieldsofstudy": [ "Engineering", "Materials Science" ], "extfieldsofstudy": [] }
255496677
pes2o/s2orc
v3-fos-license
High expression of KNL1 in prostate adenocarcinoma is associated with poor prognosis and immune infiltration
Prostate adenocarcinoma (PRAD) is a common malignancy with increasing morbidity and mortality. Kinetochore scaffold 1 (KNL1) has been reported to be involved in tumor progression and prognosis in other tumors, but its role in PRAD has not been reported in detail. KNL1 expression analysis, clinicopathological parameter analysis, prognostic correlation analysis, molecular interaction network and functional enrichment analysis, and immune infiltration analysis were performed using multiple online databases and downloaded expression profiles. The results suggest that KNL1 is highly expressed in PRAD, which is associated with worse prognosis in PRAD patients. KNL1-related genes are highly enriched in mitotic functions, which are considered to be highly related to the development of cancer. Finally, KNL1 expression is associated with a variety of tumor-infiltrating immune cells, especially Treg and Th2 cells. In conclusion, our findings provide preliminary evidence that KNL1 may be an independent prognostic predictor of PRAD and is associated with immune infiltration.
Positive reactions are rarely observed in treated PRAD patients (Narayan et al., 2022). Hence, identifying more immune targets or new immune mechanisms is necessary. KNL1, also known as cancer susceptibility candidate 5 (CASC5), encodes a protein that is an integral part of a multiprotein assembly and is required for the generation of kinetochore/microtubule attachment and chromosome segregation (Caldas and DeLuca, 2014). Thus, normal expression of KNL1 is beneficial for several aspects of mitotic progression. Previous literature has shown that dysfunctional kinetochore components can drive chromosomal instability and aneuploidy, leading to tumor progression (Yuen et al., 2005; Shi et al., 2019). In recent years, high expression of KNL1 in cancer and its role in promoting the occurrence and development of cancers such as colon cancer (Bai et al., 2019), gastric cancer (Song et al., 2018) and lung cancer (Cui et al., 2020) have also been reported. However, it is unclear whether KNL1 has a potential function in PRAD and is involved in immune infiltration. Here, we first identified the expression of KNL1 in PRAD, and investigated the correlation between KNL1 and the clinical parameters and prognosis of PRAD. The biological function of KNL1 in PRAD was explored by mining its related genes, constructing the interaction network, and performing multi-angle functional enrichment analysis. Finally, this study revealed the relationship between KNL1 expression and tumor immune infiltration.
Download of public dataset
We downloaded gene expression profiles and clinical data from The Cancer Genome Atlas (TCGA) (https://cancergenome.nih.gov/) (Wang et al., 2016), including 499 tumor samples and 52 normal samples from PRAD patients.
Explore the differential expression of KNL1 in online databases
The Tumor Immune Estimation Resource (TIMER) (http://timer.cistrome.org/) database was used to identify KNL1 expression in multiple tumor types (Li et al., 2020). Then, the TCGA expression profile data were used to analyze the difference in KNL1 expression in paired and unpaired samples. Correlations between KNL1 expression and PRAD molecular subtypes or immune subtypes were explored with the Tumor-Immune System Interactions Database (TISIDB) (http://cis.hku.
hk/TISIDB/browse.php), which integrates multiple data types to assess tumor and immune system interactions (Ru et al., 2019). The Human Protein Atlas (HPA) website (https://www.proteinatlas.org/) was used to compare KNL1 expression in normal and tumor tissues at the protein level.
Cell lines and cell culture
The human normal prostate epithelial cell line RWPE-1 and the human prostate cancer cell lines DU-145, 22RV1, PC-3, VCaP, and LNCaP were obtained from the Chinese Academy of Sciences Cell Bank (Shanghai, China). Cells were cultured in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum at 37°C with 5% CO2.
RNA isolation and quantitative reverse transcriptase-PCR assays
Total RNA from cells was extracted using TRIzol reagent (Life Technologies, CA, United States) following the manufacturer's protocol. For quantitative real-time RT-PCR, cDNA synthesis was performed with 500 ng RNA per sample using RT reagent (TaKaRa, Dalian, China) according to the manufacturer's instructions. qRT-PCR amplification was performed on a StepOnePlus real-time PCR system (Applied Biosystems, CA, United States), and data were analyzed using the 2^(-ΔΔCt) method, with GAPDH RNA as an endogenous control. The primer sequences were as follows: KNL1, forward 5′-ACCTCTCTGGACTTCAGCACTTACC-3′ and reverse 5′-TCTGTATCAAGATGTGGACCTGGAG-3′; GAPDH, forward 5′-ATGGTGAAGGTCGGTGTGAA-3′ and reverse 5′-GAGTGGAGTCATACTGGAAC-3′.
Correlation analysis of KNL1 expression with clinical characteristics
The relationship between KNL1 expression and clinical characteristics was analyzed from eight aspects using the TCGA expression profile and clinical information data. In addition, logistic regression based on KNL1 differential expression was performed.
Survival prognosis and diagnostic value analysis
Kaplan-Meier plots (Liu et al., 2018) and forest plots were used to assess KNL1 expression and the prognosis of cancer. Univariate and multivariate Cox analyses were used to evaluate the value of the KNL1 gene as a prognostic indicator. Furthermore, the receiver operating characteristic (ROC) curve was used to assess the diagnostic value of KNL1 in PRAD.
Interaction network building
The GeneMANIA (https://genemania.org/) (Warde-Farley et al., 2010) and STRING (https://cn.string-db.org/) (Szklarczyk et al., 2021) websites were used to construct gene-gene and protein-protein interaction networks of KNL1 to display molecules co-expressed with KNL1, and to evaluate the functions of these genes. The correlation analysis of nine KNL1-related molecules was done using TCGA expression data.
DEGs between KNL1 high and low expression groups in PRAD
We investigated the differences between the two KNL1 expression groups, defined by the median KNL1 expression level in PRAD. The thresholds for the volcano plot were |log2 fold-change (FC)| > 1.0 and adjusted p < .05; for the heat map, |log2 FC| > 2.0 and adjusted p < .01.
Correlation analysis between KNL1 and tumor-infiltrating immune cells
We assessed the correlation of KNL1 expression with the abundance of six types of tumor-infiltrating immune cells (TIICs) using TIMER, including B cells, CD4+ T cells, CD8+ T cells, neutrophils, macrophages and dendritic cells (DC). At the same time, we also used this database to investigate the correlation between KNL1 expression and different immune cell marker genes using the correlation module, and verified the results in the Gene Expression Profiling Interactive Analysis (GEPIA2) database (http://gepia2.cancer-pku.cn/) (Tang et al., 2019).
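For readers unfamiliar with the 2^(-ΔΔCt) calculation named above, here is a minimal Python sketch of the fold-change arithmetic; the Ct values are hypothetical placeholders, not data from this study.

```python
def fold_change_ddct(ct_target, ct_gapdh, ct_target_cal, ct_gapdh_cal):
    """Relative expression by the 2^(-ΔΔCt) method (GAPDH as endogenous
    control, calibrator = e.g. the normal cell line)."""
    delta_ct = ct_target - ct_gapdh              # ΔCt in the test sample
    delta_ct_cal = ct_target_cal - ct_gapdh_cal  # ΔCt in the calibrator
    return 2.0 ** (-(delta_ct - delta_ct_cal))   # 2^(-ΔΔCt)

# Hypothetical Ct values:
print(fold_change_ddct(24.1, 18.0, 26.3, 18.1))  # ≈ 4.3-fold higher than control
```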
Gene Set Enrichment Analysis (GSEA) was used for enrichment of the DEGs, with the immunologic signature gene sets selected as the datasets for the GSEA analysis (Subramanian et al., 2005); these are derived from the Molecular Signatures Database (MSigDB) (http://www.gsea-msigdb.org/gsea/msigdb/index.jsp) (Liberzon et al., 2015). The correlation of KNL1 with the markers of 24 tumor-infiltrating immune cell types was estimated. Meanwhile, we performed an analysis of the difference in the 24 TIIC types between the low and high KNL1 expression groups (Bindea et al., 2013). Both of the above used the single sample GSEA (ssGSEA) algorithm (Hänzelmann et al., 2013).
Statistical analysis
R software (version 3.6.3) was used to process the data and plot the images. The Spearman correlation coefficient reflects the degree of correlation among different genes. The R packages and associated code used have been consolidated as raw data for submission. A value of p < .05 was considered statistically significant in all analyses.
KNL1 expression in various types of human cancers
The TIMER database findings indicated that the KNL1 gene is differentially expressed in a variety of cancers, and compared with normal tissues, the expression level in tumor tissues of PRAD patients was higher (p < .01) (Figure 1A). Subsequently, we analyzed the expression profile data downloaded from TCGA-PRAD, which showed that the KNL1 gene was relatively highly expressed in tumor tissues in both unpaired (p < .001) and paired (p < .001) sample analyses (Figures 1B,C). By investigating TISIDB, we found that KNL1 was expressed differently in different immune subtypes of PRAD (C1: wound healing, C2: IFN-gamma dominant, C3: inflammatory, C4: lymphocyte depleted) (Figure 1D), but its expression showed no correlation with the different molecular subtypes (Figure 1E). Then, immunohistochemical analysis from the HPA database showed that the KNL1 protein content was also increased in PRAD (Figure 1F). Furthermore, the qRT-PCR assay showed that KNL1 was highly expressed in five prostate cancer cell lines compared with normal cells (Figure 1G).
Relationship between KNL1 expression and clinicopathological parameters
Given the high expression of the KNL1 gene in PRAD, we further explored the relationship between KNL1 expression and the clinicopathological parameters of PRAD patients. As for the division of age groups, a number of studies have shown that in recent years the morbidity and mortality of the age group over 60 have grown exponentially, at a rate much higher than that of the relatively young age group, so patients were divided into two groups: ≤60 and >60. At the same time, the positive critical value of 4.0 ng/ml, which has high sensitivity, was selected for PSA grouping based on important evidence such as the settings of The Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial in the United States (Andriole et al., 2009; Lavallée et al., 2016). After sorting and analyzing the expression profile data and clinicopathological parameter files in TCGA-PRAD using R software, we observed that the mRNA level of KNL1 was related to PSA (ng/ml), Gleason score, tumor size and regional lymph node metastasis. However, KNL1 expression was not correlated with age, distant metastasis, primary therapy outcome or residual tumor (Figure 2).
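To illustrate the unpaired and paired expression comparisons mentioned above, here is a minimal Python sketch. The paper does not state which tests were used for these comparisons, so the non-parametric Mann-Whitney U and Wilcoxon signed-rank tests are assumed here as common choices, and the expression values are simulated placeholders for the 499 tumor / 52 normal TCGA-PRAD samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical log2-expression stand-ins for the TCGA-PRAD samples
tumor = rng.normal(5.2, 0.8, 499)
normal = rng.normal(4.5, 0.7, 52)

# Unpaired comparison: all tumors vs. all normals
_, p_unpaired = stats.mannwhitneyu(tumor, normal, alternative="two-sided")

# Paired comparison: tumor vs. matched adjacent normal from the same patients
_, p_paired = stats.wilcoxon(tumor[:52], normal)
print(f"unpaired p = {p_unpaired:.2e}, paired p = {p_paired:.2e}")
```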
Logistic regression indicated that the expression of KNL1 in T3 & T4 was higher than in T2 (p < .001), higher with positive than with negative lymph node metastasis (p = .001), higher in stable disease (SD) & progressive disease (PD) than in partial response (PR) & complete response (CR) (p < .001), higher with PSA ≥ 4 ng/ml than with PSA < 4 ng/ml (p = .002), and higher with a high Gleason score (8 & 9 & 10) than with a medium Gleason score (6 & 7) (p < .001). There was no difference by age (p = .194), distant metastasis (p = .590) or residual tumor (p = .062) (Table 1).
Prognostic potential of KNL1 expression in PRAD
To investigate the relationship between KNL1 expression and the prognosis of PRAD patients, we conducted a comprehensive analysis of the expression profile and survival data in TCGA-PRAD. Prognostic survival analysis showed that KNL1 expression was negatively correlated with overall survival (OS) (HR = 5.30, p = .043) and progression-free interval (PFI) (HR = 2.29, p < .001) (Figures 3A,B). The receiver operating characteristic curve showed that KNL1 had a certain accuracy (AUC = .714) in predicting PRAD (Figure 3C). Finally, we illustrated the relationship between KNL1 expression, the other clinicopathological parameters and OS using Cox analysis. The univariate Cox analysis showed that distant metastasis (HR = 59.383, p < .001), primary therapy outcome (HR = .130, p < .001), PSA level (HR = 10.479, p = .001), Gleason score (HR = 6.664, p = .019) and KNL1 expression (HR = 5.299, p = .043) were associated with OS. The multivariate analysis indicated that distant metastasis (HR = 63.927, p = .007) had independent prognostic value (Table 2). The forest plot (Figure 3D) depicts the results of the univariate analysis.
Construction of the interaction network of KNL1 and KNL1-correlated genes
To explore the mechanism of KNL1 in PRAD, we constructed a gene-gene interaction network for KNL1 using the GeneMANIA database, and analyzed the functions of these genes. KNL1 is surrounded by 20 gene nodes, which represent genes significantly associated with KNL1 (Figure 4A). Subsequent functional analysis revealed that these genes encode proteins associated with the following terms: kinetochore, chromosomal region, condensed chromosome, chromosome (centromeric region), condensed chromosome (centromeric region), chromosome segregation and nuclear chromosome segregation (Figure 4A). At the same time, the KNL1-related molecular network at the protein level was constructed using the STRING database. We show here the PPI network formed by the top 10 KNL1-related molecules, containing 11 nodes and 54 edges, with an average local clustering coefficient of .982 (Figure 4B). Taking the intersection of the molecules contained in the above two networks yields the following molecules: NSL1, BUB1, DSN1, BUB1B, SPC25, NUF2, MIS12, NDC80 and ZWINT. The correlation analysis between their expression and KNL1 expression in PRAD was performed using the TCGA-PRAD expression profile (Figure 4C).
Functional enrichment analyses of KNL1 and co-expressed genes
To further understand the role of KNL1 in PRAD, the expression profile data of TCGA-PRAD were collated and analyzed as follows. Firstly, a differential analysis between the high and low KNL1 expression groups was carried out to obtain the differential genes related to KNL1. The volcano plot, based on the criteria |logFC| > 1 and adjusted p < .05, is shown in Figure 5A.
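The ROC analysis reported above (AUC = .714) can be reproduced in principle with a few lines of code. The following Python sketch uses scikit-learn on simulated stand-in data; the expression values, group sizes and the Youden-index cutoff rule are assumptions of this sketch, not taken from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
labels = np.r_[np.ones(499), np.zeros(52)]                 # 1 = tumor, 0 = normal
scores = np.r_[rng.normal(5.2, 0.8, 499), rng.normal(4.5, 0.7, 52)]

auc = roc_auc_score(labels, scores)
fpr, tpr, thr = roc_curve(labels, scores)
best = np.argmax(tpr - fpr)                                # Youden's J cutoff
print(f"AUC = {auc:.3f}, cutoff = {thr[best]:.2f}")
```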
The co-expression heat map shows the correlation of the differential genes with KNL1 when the threshold is set to |logFC| > 2 and adjusted p < .01 (Figure 5B). Subsequently, 338 upregulated differential genes were included in the GO and KEGG enrichment analyses. The top four enriched biological process (BP) terms were nuclear division, organelle fission, mitotic nuclear division, and chromosome segregation (Figure 5C). The following cellular component (CC) terms were significantly correlated with KNL1: spindle, chromosome (centromeric region), condensed chromosome, and condensed chromosome (centromeric region) (Figure 5C). Molecular function (MF) terms showed that KNL1 was significantly correlated with microtubule motor activity, microtubule binding, motor activity, and tubulin binding (Figure 5C). In the KEGG analysis, these genes were significantly enriched in Cell cycle, Oocyte meiosis, Ascorbate and aldarate metabolism, Pentose and glucuronate interconversions, and Progesterone-mediated oocyte maturation (Figure 5D).
Correlation between KNL1 expression and TIIC marker genes
The correlation between KNL1 expression and TIIC marker genes was studied with TIMER. After purity adjustment, KNL1 was positively correlated with marker genes of T cells (general), monocytes, TAM, M1 macrophages, M2 macrophages, dendritic cells, Th1 cells, T follicular helper (Tfh) cells and Treg cells. KNL1 expression was positively correlated with some marker genes of CD8+ T cells, B cells, neutrophils, NK cells, Th2 cells, Th17 cells, and T-cell exhaustion (Table 3). The correlation between some TIIC marker genes and KNL1 expression was also explored using the GEPIA2 database. In tumor tissues, KNL1 was positively correlated with the marker genes of TAM, M2 macrophages and Tregs (Table 4).
Discussion
PRAD is a complex but common malignancy that causes about 1.3 million new cases and 360,000 deaths worldwide each year. It has become one of the most common urogenital malignancies in elderly Chinese men. KNL1 is an important regulatory gene in mitosis (Krenn et al., 2012). It integrates the functions of various mitotic regulators, including BUB1 and BUBR1, and is the kinetochore component required for the spindle assembly checkpoint (SAC), which safeguards the correct segregation of chromosomes during mitosis. Defects in KNL1 function have been associated with genomic instability, leukemia, microcephaly, and neurological disorders (Shi et al., 2019). Previous literature has pointed out that almost all solid tumors exhibit genomic instability at the chromosomal level. Strong experimental evidence supports the notion that chromosomal instability phenotypes occur early in cancer development and represent an important step in tumor progression (Shih et al., 2001). Recent reports suggest that the long-term proliferation of aneuploid cancer cells is threatened by SAC inhibition (Cohen-Sharir et al., 2021). Caldas and DeLuca proposed that KNL1 may be a platform for SAC-activating and SAC-silencing proteins (Caldas and DeLuca, 2014). In recent years, more and more studies have shown that KNL1 dysregulation may lead to the progression of colorectal cancer (Bai et al., 2019) and gastric cancer (Song et al., 2018). Down-regulation of KNL1 can inhibit the growth and induce cell death of cervical cancer and breast cancer cells (Urata et al., 2015). Therefore, KNL1 is indeed linked to a variety of solid tumors.
However, no relevant studies of KNL1 in PRAD have been published, and whether KNL1 is related to immune infiltration in PRAD remains unclear. Through database mining, this study found that KNL1 was highly expressed in PRAD tissues compared with normal tissues. This was confirmed by our qPCR assay. Clinical correlation analysis showed that KNL1 expression was associated with PSA level, Gleason score, tumor size and regional lymph node metastasis. However, no difference was observed for age, distant metastasis, primary therapy outcome or residual tumor, which may be attributed to the lack of a large amount of clinical data. In addition, the multivariate regression analysis showed that there was, to some extent, a causal relationship between distant metastasis and prognosis, although no difference was seen in the correlation analysis, which we believe is related to incomplete clinical data and large differences in sample size between groups. If more clinical data can be added, more stable results may be obtained. Prognostic analysis showed that high expression of KNL1 was associated with poor prognosis. These findings suggest that high KNL1 expression is associated with PRAD progression and may be a potential independent predictor.
The gene network construction and functional enrichment analysis showed that the expression of KNL1 and its related genes is highly correlated with mitosis and the cell cycle, and that they are highly enriched in kinetochore, chromosomal region, chromosome segregation and other biological functions. At the same time, the molecules identified in the databases as most strongly associated with KNL1 are mostly SAC-related genes (BUB1, BUB1B, SPC25, MIS12, NDC80, ZWINT). The highly related genes co-upregulated with KNL1 in PRAD samples (KIF14, ASPM, CKAP2L) are also involved in spindle assembly regulation and microtubule formation, and their high expression has been reported to be associated with the occurrence and development of a variety of cancers (Pai et al., 2019; Yang et al., 2019; Monteverde et al., 2021). As mentioned above, the function of the SAC is closely related to the occurrence and development of solid tumors. These results further indicate that KNL1 plays an important role in the development of PRAD. Interestingly, these KNL1-related molecules (e.g., BUB1, ASPM, TOP2A) have been implicated in immune infiltration in papillary renal cell carcinoma in another report (Deng et al., 2021).
At present, the main analysis of TIICs in tumors usually focuses on T cells, especially in studies of CTLA-4 inhibitors and PD-1/PD-L1 inhibitors (Rowshanravan et al., 2018; Rotte, 2019). At the same time, more and more researchers have paid attention to the role of B cells and tertiary lymphoid structures in immunotherapy (Cabrita et al., 2020; Helmink et al., 2020; Fridman et al., 2022). The GSEA analysis in this study showed that the gene groups co-upregulated with KNL1 were mainly enriched in CD8+ T cells, B cells and Tregs. Our exploration of the TIMER database suggests that KNL1 expression is well correlated with a variety of TIICs and markers, especially TAM, M2 macrophages and Tregs. In addition, GEPIA2 database analysis was used to compare KNL1 and TIICs in PRAD and in normal tissues. It can be seen that the correlation of KNL1 with M2 macrophages and Tregs in tumor tissues is stronger than that in normal tissues.
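The "purity-adjusted" correlations reported above can be approximated with a partial correlation. The following Python sketch is one way to mimic that kind of adjustment; TIMER's exact implementation may differ, and the construction below (Pearson correlation of rank residuals) is an assumption of this sketch.

```python
import numpy as np
from scipy import stats

def purity_adjusted_spearman(gene, marker, purity):
    """Partial Spearman correlation of gene vs. marker controlling for tumor
    purity: correlate rank residuals after regressing out the purity ranks."""
    rg, rm, rp = (stats.rankdata(v) for v in (gene, marker, purity))
    def residual(y, x):
        slope, intercept = np.polyfit(x, y, 1)
        return y - (slope * x + intercept)
    r, p = stats.pearsonr(residual(rg, rp), residual(rm, rp))
    return r, p
```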
It is worth mentioning that increased infiltration of these immune cells (such as Tregs, Th2 cells and M2 macrophages) may produce a worse prognosis (Ruffell et al., 2010; Mehla and Singh, 2019; Tanaka and Sakaguchi, 2019; Protti and De Monte, 2020; Amoozgar et al., 2021). Our group analysis showed exactly this: higher expression of KNL1 was accompanied by more Th2 and Treg infiltration, while NK cells and other anti-tumor components showed a lower enrichment level. Our findings suggest that KNL1 does have a certain effect on the immune infiltration of PRAD, and more basic experiments may better prove this view. Although this study has improved our understanding of the correlation between KNL1 and PRAD, there are nevertheless some limitations. First of all, we mainly conducted the analysis through bioinformatics methods, and more experiments are needed to explore and verify the molecular mechanisms and biological functions related to KNL1. Secondly, both the Kaplan-Meier plots and the ROC curve performed well in the prognostic analysis, but no clear difference was observed in the univariate and multivariate regression, which was attributed to the lack of a large amount of data. Therefore, we should establish our own clinical case database to expand the sample size.
Conclusion
In conclusion, the expression of KNL1 in PRAD is upregulated, is significantly correlated with the clinical characteristics of PRAD patients, and predicts poor prognosis. This gene can be considered an early diagnostic and independent prognostic indicator for PRAD patients. In addition, our analysis demonstrated a significant correlation between KNL1 expression and the degree of immune cell infiltration. Therefore, KNL1 may play a potentially important role in immunotherapy.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
Author contributions
YW and QZ conceived the study and participated in the study design, performance, coordination, and project supervision. YZ, YD, and MP collected the public data and conducted the bioinformatics analysis. SF conducted a validation experiment. YZ and QJ wrote the draft. JW and YW revised the manuscript. QZ obtained financial support. All authors approved the final manuscript.
2023-01-07T15:15:12.817Z
2023-01-06T00:00:00.000
{ "year": 2022, "sha1": "604f267a6b09b5f27d53137b8f34ef7f532d5d3e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "604f267a6b09b5f27d53137b8f34ef7f532d5d3e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
81146309
pes2o/s2orc
v3-fos-license
Clinical Studies with Ayurvedic Formulations-A Practitioner's Preview
Traditional Indian Medicine (TIM) is a storehouse for developing lead drugs for clinical trials. Although clinical research in Ayurveda/TIM is a novel concept, it can be said that drugs like Triphala (mild laxative), Trikatu (bioavailability enhancer) and Dashmoola (anti-inflammatory) are the outcome of trials conducted by ancient scientists. This review presents clinical studies done with formulations used in TIM. The formulations were grouped according to their action on the human system, based on data generated from internet and literary searches. Although the studies may be uncontrolled, their importance in conducting or improving previously conducted clinical trials with TIM can't be ruled out. The knowledge may combine well with the 'reverse pharmacology' approach for cost-effective and potential cures from TIM.
Introduction
Ayurveda or Traditional Indian Medicine (TIM) is considered to be the oldest practiced system of medicine. Recently, the herbal drug industry has witnessed explosive growth. CAM systems are in great demand, particularly Traditional Chinese Medicine (TCM) and Traditional Indian Medicine (TIM). The growing popularity of CAM among people has led to the onset of research at molecular and clinical levels [1]. With the development of new subjects like medicinal phytochemistry, phytopharmacology and phytopharmacotherapy, the importance of clinical research in TIM has become more significant. Analytical study of subjects like Dravyaguna (medicinal plant pharmacology) and Kayachikitsa (internal medicine) is required for enhancing the practical utility of TIM [2]. The lack of documented clinical trials in TIM has triggered controversies regarding the therapeutic application of formulations used in TIM [3]. Although formulations of TIM have been used for centuries with success, testing at molecular levels is still a challenge [4]. Pharmacological intervention has opened a new age in CAM and TIM research. The concept of reverse pharmacology is rapidly catching on for developing cost-effective and potential drug candidates from medicinal plants [5].
In our view, clinical studies in TIM can be divided into two distinct groups:
1. Controlled studies
2. Uncontrolled studies
Recently, favorable clinical studies have appeared for single-herb and polyherbal formulations used in TIM for varied ailments. The studies seem to be appropriate with regard to several parameters like drug selection and standardization, design, patient participation and results. The present review is dedicated to rare clinical studies done on formulations used in TIM. The list of plants or formulations discussed in the review may be incomplete. The leading factor is the lack of indexed publications dealing with clinical aspects of TIM. Moreover, the clinical knowledge documented by authors in Ayurvedic journals was produced mostly around 1960, when pharmacological and clinical research were not in the limelight. Non-availability of full-length papers and English versions also contributed to the incomplete list.
Materials and Methods
The key words for the present review were clinical trials, clinical studies, TIM, single herb, polyherbal formulations, and Ayurveda. ABIM and the references encountered in the search were later consulted. The data generated after the systematic literature study were documented according to human anatomy.
Results and Discussion
A systematic study afforded several single-herb, polyherbal, herbo-mineral and purely mineral-based formulations used in TIM.
Much of the clinical research was related to the respiratory and musculoskeletal systems. Among polyherbal formulations, guggul-based formulations were the cornerstone for treating arthritis and rheumatism. The use of Triphala was highlighted in various clinical conditions. The major drawback of these clinical studies is the lack of controls. The studies do emphasize the clinical utility of formulations used in TIM, which may be the basis for reinitiating clinical trials. We also believe that instead of expanding the list of novel formulations, work should be initiated to evaluate the potential of already reported formulations to overcome the shortcomings encountered in earlier clinical studies. Copyright© Ish Sharma and Amritpal Singh.
2019-03-18T14:02:47.652Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "8538a433df27d4b8e7e9af49f3bd47aeb44d6dbd", "oa_license": null, "oa_url": "https://doi.org/10.23880/jonam-16000106", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "9380bf15be1cbe16e95d75490e8a1513c8de7a43", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232106912
pes2o/s2orc
v3-fos-license
RAFF-4, Magnetization Transfer and Diffusion Tensor MRI of Lysophosphatidylcholine Induced Demyelination and Remyelination in Rats
Remyelination is a naturally occurring response to demyelination and has a central role in the pathophysiology of multiple sclerosis and traumatic brain injury. Recently we demonstrated that a novel MRI technique entitled Relaxation Along a Fictitious Field (RAFF) in the rotating frame of rank n (RAFFn) achieved exceptional sensitivity in detecting the demyelination processes induced by lysophosphatidylcholine (LPC) in rat brain. In the present work, our aim was to test whether RAFF4, along with magnetization transfer (MT) and diffusion tensor imaging (DTI), would be capable of detecting the changes in myelin content and microstructure caused by modifications of myelin sheets around axons or by gliosis during the remyelination phase after LPC-induced demyelination in the corpus callosum of rats. We collected MRI data with RAFF4, MT and DTI at 3 days after injection (demyelination stage) and at 38 days after injection (remyelination stage) of LPC (n = 12) or vehicle (n = 9). Cell density and myelin content were assessed by histology. All MRI metrics detected differences between the LPC-injected and control groups of animals in the demyelination stage, on day 3. In the remyelination phase (day 38), RAFF4, MT parameters, fractional anisotropy and axial diffusivity detected signs of a partial recovery consistent with the remyelination evident in histology. Radial diffusivity had undergone a further increase from day 3 to 38, and mean diffusivity revealed a complete recovery correlating with the histological assessment of cell density attributed to gliosis. The combination of RAFF4, MT and DTI has the potential to differentiate between normal, demyelinated and remyelinated axons and gliosis, and thus it may be able to provide a more detailed assessment of white matter pathologies in several neurological diseases.
INTRODUCTION
Myelin is essential for the proper functioning of the central nervous system. It not only accelerates the propagation of electrical impulses along myelinated fibers, but also provides protection and nutrients to neurons (Saab and Nave, 2017). Disturbances in the integrity of myelin can cause a wide variety of motor, sensory and cognitive symptoms, and demyelination, i.e., damage or loss of myelin sheaths, has been associated with several diseases including multiple sclerosis (Noseworthy et al., 2000), Alzheimer's disease (Nasrabady et al., 2018), and traumatic brain injury (Armstrong et al., 2016a). Remyelination is a natural regenerative response to demyelination. Both acquired and genetic demyelinations are followed by remyelination, and this has been found to play an important role especially in multiple sclerosis (Prineas and Connell, 1979; Hirano, 1989) and traumatic brain injury (Armstrong et al., 2016b). Oligodendrocytes create new myelin sheaths that cover the demyelinated axons; however, the newly formed myelin sheaths are typically thinner than the original ones and/or may have a different structure and altered conduction properties (Zhao et al., 2005; Franklin and Ffrench-Constant, 2008). Remyelination is a key step in the patient's recovery process, as electrical impulses propagate too slowly along demyelinated axons to allow normal brain function.
Non-invasive quantitative imaging of changes in myelin content and microstructure can provide critical information about demyelination and remyelination processes and be useful for monitoring the progression of diseases and responses to treatment. Several methods are available for imaging demyelination; however, MRI is able to map myelin only indirectly (Heath et al., 2018). Direct detection of myelin is difficult, as the movement restriction of lipid chains in the myelin bilayer causes a fast relaxation decay of the MR signal, although it may become more feasible by adopting zero echo time imaging approaches (Wilhelm et al., 2012; Seifert et al., 2017). Diffusion MRI, in particular diffusion tensor imaging (DTI), monitors the microscopic motion of water molecules that occurs in brain tissue as a part of the diffusion process. As myelin sheaths restrict water diffusion, DTI can detect abnormalities in the structure of white matter, although it is not specific for the myelin compartment, as many other cell structures contribute to the restriction of diffusion in tissue. Magnetization transfer (MT) MRI is an indirect method that was proposed many years ago for the detection of demyelination (Wolff and Balaban, 1989). This method utilizes the exchange of magnetization between the hydrogen nuclei of semisolid macromolecules and hydrogen protons in free water; as a consequence, semisolid tissue components such as myelin structures can modulate the MR image contrast. One limitation to the use of MT for monitoring myelin is that other macromolecular tissue components, as well as changes in the water content due to edema, also affect the MT contrast. Multi-exponential T2 can serve as a potential indicator of the myelin content in white matter. The relative size of the short-T2 component, around 8-50 ms, is defined as myelin-associated water and has often been interpreted as the myelin content (Dula et al., 2010). While the water fraction of myelin has been found to correlate with the myelin content, the exact relationship between the short-T2 component and the myelin content is not well understood (Tozer et al., 2005).
A novel rotating frame relaxation method operating in the non-adiabatic regime, entitled Relaxation Along a Fictitious Field (RAFF) (Liimatainen et al., 2010) in the rotating frame of rank n (RAFFn), was recently presented and shown to have excellent sensitivity for myelin detection both in normal brain (Hakkarainen et al., 2016) and in demyelinated lesions induced by lysophosphatidylcholine (LPC) injections into the corpus callosum and the dorsal tegmental tract of the rat brain (Lehto et al., 2017), as well as in dysmyelination in mouse brain (Satzer et al., 2015). The correlation of relaxation time constants detected with RAFF4 (TRAFF4) with the myelin content obtained in a previous study (Lehto et al., 2017) was ascribed to the increased sensitivity of RAFFn to slow/ultraslow motional regimes. These have correlation times of motion in the ms range (Satzer et al., 2015; Hakkarainen et al., 2016), likely reflecting the exchange of myelin-associated water as well as the conformational dynamics of methylene functional groups within myelin. The highest correlation between relaxation time constants and the myelin content in the rat brain was achieved with the RAFF4 and RAFF5 techniques, as compared to T1, T2 and conventional spin-lock rotating frame relaxation contrasts (Satzer et al., 2015; Hakkarainen et al., 2016).
In addition, RAFFn provides the distinct advantage of resulting in a substantially lower specific absorption rate (SAR) as compared to conventional continuous wave (CW) approaches (Liimatainen et al., 2010). While our previous work demonstrated the clear advantages of RAFFn in the detection of demyelination (Lehto et al., 2017), the process of remyelination was not assessed by multimodal MRI. In the present work, we hypothesize that by combining microstructural imaging, DTI, and methods specific to myelin content, RAFFn and/or MT, it is possible to characterize both the myelin content and the integrity of myelin sheaths during remyelination. To test this hypothesis, we used LPC-induced demyelination in the rat corpus callosum, and conducted a longitudinal study using multiparametric MRI data during both the acute demyelination and chronic remyelination phases and compared the results with histological findings.
Animal Model
A total of 26 adult male Sprague-Dawley rats (Charles River, Germany; 300-350 g) were used in this study. Rats were group-housed with a 12 h light/12 h dark cycle and had ad libitum access to food and water. All animal procedures were approved by the Animal Ethics Committee of the Provincial Government of Southern Finland and conducted in accordance with the guidelines set by the European Commission Directive 2010/63/EEC. All surgical procedures were done under inhalation anesthesia using 1.8-2.2% isoflurane in 30%/70% O2/N2O. To induce demyelinated lesions, stereotaxic injections of the LPC solution (volume of 1.5 µl; concentration: 10 mg/ml; L-α-lysophosphatidylcholine from egg yolk; L-4129 Sigma-Aldrich, St. Louis, United States) were administered into the corpus callosum of the rat brain at stereotactic coordinates of 0.4 mm caudal from bregma, 1.4 mm left from bregma, and 2.6 mm from the brain surface (n = 17). Control animals (n = 9) underwent the identical protocol but were injected with 1.5 µl of a vehicle solution of 0.1 M sodium phosphate buffer instead of LPC.
Pilot Study
A pilot study was first performed to clarify the time course of the demyelination/remyelination process in the LPC model under our experimental conditions. It has been previously described that demyelination without an inflammatory reaction peaks at day 3 after LPC injection (Waxman et al., 1979; Lehto et al., 2017). However, it was our intention to determine the time course of remyelination. In the pilot experiment, 5 LPC rats were imaged on a 7 T MRI system (Bruker Pharmascan, Ettlingen, Germany) with an actively decoupled quadrature receiver rat head coil and volume transmit coil pair every 2-3 days for 38 days using a high-resolution T2-weighted fast spin-echo (FSE) sequence with the following parameters: TR = 2.6 s, averages = 8, TEeff = 42.7 ms, RARE factor = 8, FOV = 25.6 × 25.6 mm², matrix size = 256 × 256, number of slices = 24 and slice thickness = 0.3 mm, with a total imaging time of 10 min 55 s. Immediately after the final scanning, the animals were perfused for histology.
MRI Protocol to Study Demyelination and Remyelination
The remaining rats (n = 21) were imaged on day 3 after the LPC (n = 12) or vehicle (n = 9) injection, when there was already significant demyelination without any inflammatory reaction or any signs of remyelination (Waxman et al., 1979), and again on day 38 after the injection, when there should be marked remyelination according to our pilot study. All MRI procedures were performed with the 7 T MRI system described above.
The location of the injections was localized using T2-weighted FSE acquisitions. The center of the imaging slice for RAFF4, MT and DTI (middle slice), on both day 3 and day 38, was positioned to align with the center of the T2-weighted slice next (caudal) to the slice covering the injection site, to exclude any mechanical damage induced by the injection. For the relaxation and MT measurements, an FSE pulse sequence was used as the readout portion of the sequence. The parameters for the readout were TR = 4 s, TEeff = 8.3 ms, necho = 8, FOV = 32.0 × 32.0 mm², matrix size = 256 × 256, number of slices = 1 and slice thickness = 0.5 mm, with a total acquisition time of 16 min for one relaxation time constant map.
The RAFFn method has been presented in detail previously. Here, we used RAFF4; to generate RAFFn contrast, trains of RAFFn pulses assembled in P-packets (P P⁻¹ P_π P_π⁻¹) were used as described before (Liimatainen et al., 2010). The duration of each RAFF4 pulse, defined as T_p = 4π/(√2 ω_1max), was set to 4.525 ms, and the peak RF amplitude was γB1 = 324 Hz. The RAFF4 pulse train durations were 0, 109, 217, 326, and 434 ms. Separate measurements were performed with and without an adiabatic full passage (AFP) inversion pulse (hyperbolic secant (HS1) pulse, T_p = 8 ms, γB1 = 2,500 Hz) preceding the RAFFn pulse trains (Liimatainen et al., 2010). TRAFF4 was calculated by a non-linear least-squares fitting approach applied simultaneously to data obtained with initial -z and +z magnetization orientations (Liimatainen et al., 2010). Equation 1 was used to model the observed exponential decay and the approach to steady state:
S(t) = S_SS + (S_0 - S_SS) exp(-R t),   (1)
Here, S_0 is the initial signal intensity (t = 0), R is the relaxation rate constant describing the decay, and S_SS is the steady-state intensity at t → ∞.
In acquiring the MT metrics, we used the modified inversion MT protocol with two consecutive acquisitions, as described previously. Separate measurements were performed with the magnetization initially aligned along the +z axis during off-resonance irradiation, or along the -z axis to allow the signal to recover, i.e., without or with initial global inversion achieved by an adiabatic full passage (AFP) pulse, in analogy to the acquisitions with RAFF4. A square saturation pulse with γB1 = 200 Hz was placed at 8 kHz off-resonance with an incremental duration (0.0, 0.3, 0.6, 0.9, 1.2 s). T1sat, MSS (steady-state magnetization) and M0 (fully relaxed magnetization in the absence of RF) were calculated using pixel-by-pixel analysis, as described by Mangia et al. (2011). MTR was also calculated as MTR = 1 - MSS/M0.
For DTI, segmented spin-echo EPI was used with TR = 1 s, TE = 31.8 ms, nshots = 2, number of averages = 48, FOV = 21.3 × 14.4 mm², matrix size = 170 × 115, number of slices = 5, slice thickness = 0.5 mm, b = 2,000 s/mm², and 42 diffusion directions, leading to a total acquisition time of 1 h 18 min. Mean diffusivity (MD), fractional anisotropy (FA), and radial and axial diffusivity (RD, AD) maps were calculated from the DTI data. The DTI data were corrected for motion and eddy current-induced image distortions using ExploreDTI (Leemans et al., 2009). Relaxation time constants and parametric maps from MT and DTI were reconstructed from signal intensities using pixel-by-pixel fitting in MATLAB (MathWorks, Natick, MA) and FMRIB's Software Library (FSL).
Region-of-Interest (ROI) Analysis
All the images from both time points were co-registered to the RAFF4 images from day 3 using Advanced Normalization Tools (ANTs).
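A minimal Python sketch of fitting Equation 1 to a single decay curve is shown below; the signal values are hypothetical placeholders, and unlike the study's approach, which fits the +z and -z acquisitions simultaneously, this sketch fits only one curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def raff_decay(t, s0, s_ss, rate):
    """Eq. 1: S(t) = S_SS + (S_0 - S_SS) * exp(-R * t)."""
    return s_ss + (s0 - s_ss) * np.exp(-rate * t)

t = np.array([0.0, 0.109, 0.217, 0.326, 0.434])      # pulse train durations (s)
signal = np.array([1.00, 0.62, 0.42, 0.31, 0.26])    # hypothetical ROI signal

params, _ = curve_fit(raff_decay, t, signal, p0=[signal[0], signal[-1], 5.0])
s0, s_ss, rate = params
print(f"T_RAFF4 = {1e3 / rate:.1f} ms")              # time constant = 1/R
```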
Two ROIs in the corpus callosum, one contralateral and one ipsilateral to the injection site, were manually drawn on T2-weighted images in every animal and transferred to the co-registered stack of parametric maps using the Aedes software package. When drawing the ROIs, we chose one slice caudal to the injection site based on the day 3 images, and we used the same location on day 38. Mean values from each ROI in every map were used in the statistical analysis. In the vehicle-injected animals, the ROIs were drawn at the vehicle injection site similarly as conducted for the LPC-injected animals. Histological Procedures and Analysis After the last MRI session, all animals were transcardially perfused first with 0.9% NaCl (30 ml/min, 2 min, 4 °C) followed by 4% paraformaldehyde solution in 0.1 M phosphate buffer (pH 7.4) (30 ml/min, 25 min, 4 °C). After perfusion, the brains were removed from the skull and post-fixed for 4 h in 4% paraformaldehyde solution. Then, the brains were cryoprotected in 20% glycerol in 0.02 M potassium phosphate-buffered saline (pH 7.4) for 36 h, frozen on dry ice, and stored at −70 °C until sectioning. The brains were sectioned into five series of 30 µm thick coronal sections using a sliding microtome. The first series was stored in 10% formalin at room temperature, and the second to fifth series were stored in a cryoprotectant tissue-collecting solution (30% ethylene glycol, 25% glycerol in 0.05 M PBS) at −20 °C until staining. Selected sections from the first series were stained with Nissl (thionin) to assess changes in the cytoarchitecture of the corpus callosum. We stained up to 10 sections covering and exceeding the lesioned area as revealed in MRI on day 3. Consecutive sections from the second series were stained with gold chloride to assess the myeloarchitecture of the corpus callosum (Laitinen et al., 2010). The optical density of Nissl- and myelin-stained sections was quantified in locations corresponding to the ROIs in the MRI analysis. Three consecutive sections were selected for analysis based on the MRI images where the ROI was drawn. The histological sections were selected based on anatomical landmarks, and the ROIs for optical density were drawn in the same anatomical location as in the MRI images in the ipsi- and contralateral corpus callosum. The three consecutive sections represent 450 µm in the rostral-caudal direction, which provides good coverage of the 500 µm slice thickness in MRI. High-resolution photomicrographs of both Nissl- and myelin-stained sections of the corpus callosum were obtained using a light microscope (Zeiss Axio Imager2, White Plains, NY, United States) equipped with a digital camera (Zeiss Axiocam color 506). The whole corpus callosum area was imaged in each section using the tile mode with a 20× objective. Acquisition, alignment and format conversion were performed with Zen software (Blue edition, v2.6, Carl Zeiss Microscopy GmbH, United States). The optical density (OD) on Nissl- and myelin-stained sections was quantified using ImageJ software (version 1.47, http://rsb.info.nih.gov/ij/, NIH, United States). First, the color photomicrographs were converted into 16-bit grayscale images, and then the grayscale was inverted to facilitate the interpretation of the intensity values relative to the staining intensities observed in the myelin-stained sections. We obtained the intensity values from each ROI from the Nissl- and myelin-stained sections.
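As a rough illustration of the quantification workflow just described (carried out by the authors in ImageJ), the sketch below converts a color photomicrograph to inverted grayscale and extracts mean ROI intensities; the image array and the rectangular ROI masks are synthetic placeholders, not the study data.

import numpy as np

def to_inverted_gray(rgb):
    """Convert an RGB photomicrograph to grayscale, then invert it so that
    more strongly stained (darker) pixels receive higher intensity values."""
    gray = rgb.astype(float).mean(axis=-1)   # crude luminance proxy
    return gray.max() - gray

def roi_mean(img, mask):
    """Mean intensity inside a boolean ROI mask."""
    return float(img[mask].mean())

# Synthetic stand-ins for a scanned section and the two corpus callosum ROIs
rng = np.random.default_rng(1)
section = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)
ipsi = np.zeros((512, 512), dtype=bool)
ipsi[240:280, 300:420] = True
contra = np.zeros((512, 512), dtype=bool)
contra[240:280, 90:210] = True

gray = to_inverted_gray(section)
print("ipsilateral ROI mean:", roi_mean(gray, ipsi))
print("contralateral ROI mean:", roi_mean(gray, contra))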
In order to correct for possible staining differences between sections and brains, the intensity values were corrected against the background intensity measured in areas with no cell/myelin staining, such as in the cortical areas. OD was estimated as $(I_{ref} - I_{cc})/I_{ref}$, and for each ROI, the OD value was the average of the three consecutive sections. The estimation of the area of demyelination was conducted on the myelin-stained sections by selecting the area with a low content of myelin ipsi- and/or contralaterally. This selection was limited to the area of demyelination included in the previously drawn ROI for intensity. Statistical Analysis Data were analyzed using GraphPad Prism software (version 5.03 for Windows, La Jolla, CA, United States). Numerical results are represented as mean and standard deviation. Differences between vehicle- and LPC-injected rats were assessed using the two-sample t-test, and differences between the ipsi- and contralateral corpus callosum within the same brain using the paired t-test. The contribution of myelinated axons and cell density to the MRI metrics was assessed using Pearson's linear correlation of the ROI analysis results from MRI and the OD of myelin- and Nissl-stained sections. The change of the MRI parameters between days 3 and 38 was assessed using the paired-samples t-test separately for the ipsi- and contralateral ROIs of vehicle- and LPC-injected rats. The Benjamini-Hochberg false discovery rate method was used for multiple comparison corrections, and an FDR threshold of q < 0.05 was chosen for statistical significance (Benjamini and Hochberg, 1995). RESULTS The time course of the relative signal changes in T2-weighted images after LPC injection is shown in Figure 1. This pilot experiment showed that a clear lesion could be detected on day 3 in the corpus callosum, followed by a gradual recovery of the T2-weighted signal intensity at the subsequent time points (Figure 1G). This is consistent with the demyelination/remyelination process described for the LPC model in white matter (Woodruff and Franklin, 1999). Based on this experiment, we chose day 3 as the time point for demyelination and day 38 for remyelination. On day 3, all the LPC animals exhibited a lesion in the MRI maps, mainly in the ipsilateral corpus callosum, but also extending to the contralateral side (Figure 2). The group-wise results and comparisons in absolute units are shown in Figure 3, while Table 1 shows relative differences and q-values (FDR-corrected p-values) facilitating a comparison between modalities. The relative differences were calculated as ((LPC − Vehicle)/Vehicle) × 100%. All MRI metrics revealed a significant and robust effect of demyelination in LPC-injected animals at the ipsilateral site (Figure 3). The largest relative differences were detected by RAFF4, FA and AD (48, −50, −54%, respectively), while MTR, T1sat and RD showed more modest (−18, 21, 26%) but still very clear changes between the demyelinated ipsilateral area and a similar area in vehicle-treated animals (Table 1). The contralateral side also showed statistically significant but smaller changes between LPC- and vehicle-injected animals. Diffusion parameters, especially AD, FA and RD (−16, −22, 18%), were most sensitive in detecting the contralateral changes; these were most likely caused by the diffusion of LPC from the ipsilateral side to the contralateral side. On day 38, all the LPC-injected animals showed at least a partial recovery of the lesion in the MRI maps (Figure 4).
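The Benjamini-Hochberg correction underlying the q-values quoted here and in the Statistical Analysis section above can be sketched in a few lines. The p-values in the example below are hypothetical; the authors performed this step within their analysis software, so this is only an illustrative re-implementation of the procedure.

import numpy as np

def bh_qvalues(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (q-values):
    q_(i) = min over j >= i of p_(j) * n / j, capped at 1."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity, working back from the largest p-value
    q_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
    q = np.empty(n)
    q[order] = np.minimum(q_sorted, 1.0)
    return q

# Hypothetical raw p-values from a family of t-tests
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.930]
print([f"{v:.3f}" for v in bh_qvalues(p)])
# comparisons with q < 0.05 pass the FDR threshold used in the study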
Nonetheless, significant differences were still observed on day 38 between LPC- and vehicle-injected animals on the ipsilateral side in all MRI metrics except MD (Figure 5). When comparing MRI outcomes on day 3 (demyelination) to day 38 (remyelination), significant differences were detected in all MRI metrics (Table 1). In particular, a recovery toward normal values on the ipsilateral side of the LPC-injected animals was detected with RAFF4 (from 48 to 17%, difference on the ipsilateral side of LPC rats from day 3 to day 38), MTR (from −18 to −7%), T1sat (from 21 to 10%), MD (from −31 to 1%), FA (from −51 to −22%), and AD (from −54 to −16%). Furthermore, RD displayed a further robust increase (from 26 to 45%) from day 3 to day 38. Figure 6 shows the quantitative assessment of the histological results as well as representative examples of myelin- and Nissl-stained sections from vehicle- and LPC-injected animals. On day 38, the optical density (OD) analysis of myelin-stained sections revealed a small but significant decrease in the myelin content when comparing the ipsi- and contralateral ROIs in the corpus callosum in the LPC-injected brains (q = 0.02) (Figure 6A). We found a significant increase of the demyelinated area in animals after LPC injection in comparison to vehicle animals, ipsilaterally (q = 0.0085) but not contralaterally (q = 0.11) (Figure 6B). The demyelinated area was small compared to the total area of the ROI analyzed in the OD analysis; these results demonstrate that the remyelination was well advanced but not complete at 38 days after the injection (Figures 6D-G). Additionally, we found that myelin alterations were taking place along the corpus callosum structure, which may be an indication of ongoing remyelinating processes (Figure 6F). The analysis of Nissl-stained sections revealed increased cell density, which can be attributed to gliosis. The OD analysis of Nissl-stained sections showed that values in both the ipsi- (q = 0.0032) and contralateral (q = 0.0085) ROIs of the corpus callosum significantly increased when comparing vehicle and LPC animals (Figure 6C). The increased cell density area overlapped with the demyelinated area (Figures 6G,K) and the myelin alterations (Figures 6F,J) observed in myelin staining. These results demonstrate that the persistent demyelination was accompanied by inflammatory processes that were still ongoing at 38 days after the LPC injection. None of the MRI parameters correlated with the OD of myelin staining in the lesion area in the remyelination phase; however, RD, FA and AD correlated with the OD assessed with Nissl staining (q < 0.05) (Table 2). DISCUSSION In the present work, we investigated the capabilities of quantitative RAFF4, MTR, T1sat and DTI metrics to detect LPC-induced demyelination and remyelination in the rat brain corpus callosum. We confirmed the previously demonstrated high sensitivity of RAFF4, MTR, and DTI for detecting demyelination (Lehto et al., 2017). This is the first time that RAFF4 has been tested for investigating the myelination status during the remyelination phase. Our main finding was that the remyelination phase was associated with a partial recovery of RAFF4, MTR, T1sat, FA and AD, while RD remained abnormally high and MD showed a complete recovery on day 38 after LPC injection, i.e., a time point when there was histological evidence of marked remyelination and gliosis.
Our results confirmed the sensitivity of RAFF4 and MTR for detecting demyelination at 3 days after the LPC injection into the corpus callosum, when only mild gliosis was present (Lehto et al., 2017). The demyelination phase was also associated with a distinct pattern of changes in the DTI metrics, namely decreases in FA, AD, and MD, and an increase in RD. In our previous study, the LPC-induced demyelination in the corpus callosum was characterized by a clearly decreased myelin content as detected by myelin staining. However, in that study we also observed some remaining disorganized pockets in the myelin sheaths, with myelin debris being evident in electron microscopy (Lehto et al., 2017). In the present experiments, the pattern of changes in DTI metrics in the demyelination phase was mostly consistent with our previous work; however, here we did find increased RD, a parameter which was unchanged in our previous study. The present finding is in agreement with the general view that increased RD is an indication of demyelination (Song et al., 2005). The difference from the previous study may originate from differences in LPC batches leading to more severe demyelination. This is also consistent with the relatively larger changes in RAFF4 and MTR observed in the present study as compared to those reported by Lehto et al. (2017). The remyelination phase was characterized by a close-to-normal myelin content as confirmed by OD analysis of myelin-stained histological sections. Unlike on day 3, when only very mild gliosis was present, on day 38 increased cellular density was detected in Nissl staining, likely due to gliosis. As increased cellularity affects relaxation, MT and diffusion, this makes the interpretation of the results more complicated, resembling more realistically the human pathology, where myelin damage typically triggers gliosis and thus these pathological features overlap. At the late time point, we observed a recovery of RAFF4 toward the normal values measured in healthy tissue, which is consistent with remyelination. It has been shown that RAFF4 is sensitive to the correlation time regime in the ms range (Satzer et al., 2015; Hakkarainen et al., 2016), which likely corresponds to exchange and dipolar interactions of myelin and water as well as dipolar interactions with methylene groups. Therefore, the high sensitivity of RAFF4 to myelin, also during the remyelination phase, was expected. MT showed a similar recovery toward baseline as RAFF4. However, the relative difference from controls was smaller than for RAFF4, reflecting its lower sensitivity to myelination changes in the demyelination phase. Previously, RAFF4 had been shown to correlate with myelin density to a greater extent than MT in normal brain (Hakkarainen et al., 2016) and in LPC-induced demyelinated lesions in the dorsal tegmental tract (dtg) of the rat brain (Lehto et al., 2017). It should be emphasized, however, that there is a distinct difference between the relaxation mechanisms underlying RAFF4 and MT. RAFF4 is a rotating frame method operating in the rotating frame of rank 4, and thus has contributions from longitudinal (T1r) and transverse (T2r) relaxation pathways. In addition to anisochronous and isochronous exchange and dipolar interactions contributing to RAFF4, RAFF4 shares cross-relaxation pathways with MT (van Zijl et al., 2018). Therefore, these two techniques provide only partially overlapping information when characterizing tissue integrity.
This substantial distinction in the relaxation mechanisms contributing to RAFF4 and MT is reflected in the differential sensitivity of RAFF4 and MT to demyelination, dysmyelination and remyelination processes in the brain (Satzer et al., 2015). It is also worth noting that RAFFn offers the possibility of achieving the desired fictitious field by making use of a frequency-swept pulse, which improves the flexibility in handling SAR issues in human applications. The pattern of changes detected in the DTI metrics in the remyelination phase likely conveyed information from multiple factors, including the thickness and microstructure of the myelin sheaths as well as the cell density. The partial recoveries of FA and AD are similar to those detected with RAFF4 and may reflect the rebuilding of myelin sheaths and the clearance of myelin debris. The increase in RD is consistent with the fact that remyelinated sheaths are structurally different from intact myelin sheaths (Raine, 1984; Oluich et al., 2012; Podbielska et al., 2013; Pfeiffer et al., 2019), i.e., they are likely more permeable to water. MD was the only MRI parameter that returned to the normal level on day 38. It is well known from cancer studies that MD inversely correlates with the cellularity of the tissue (Chenevert et al., 2000), and therefore the increased cellularity due to gliosis likely contributes to the pseudo-normalization of MD. The extension to more complex diffusion MRI models has the potential to extract more specific information related to these processes (Luo et al., 2019). MRI changes were also detected on the side contralateral to the injection between LPC- and vehicle-injected animals. This is likely attributable to diffusion of LPC along axons in the corpus callosum, such that LPC also reached the contralateral side. Interestingly, changes in cell density in Nissl staining, attributed to gliosis, were pronounced on the contralateral side on day 38, probably explaining why changes were more evident in the diffusion metrics than with RAFF4 or MT. None of the MRI parameters correlated significantly with the optical density of myelin staining in the remyelination phase. This is likely because the optical densities were close to normal in the lesioned area and therefore there was a narrow range of values both for MRI and optical density. FIGURE 6 | Histologic assessment of the myelin and Nissl stainings at 38 days after vehicle or LPC injection. OD (A) and demyelinated area (B) analyses of the myelin-stained sections, and OD analysis of the Nissl-stained (C) sections. Values were obtained from the ipsi- and contralateral corpus callosum of vehicle- (n = 9) and LPC-injected (n = 12) rats. Results are shown as mean ± SD. The unpaired t-test compared the same hemispheres between vehicle- and LPC-injected rats (**p < 0.01), and the paired t-test the ipsi- and contralateral hemispheres within the same animals (++p < 0.01). Photomicrographs of vehicle- and LPC-injected animals in myelin (D-G) and Nissl (H-K) stains of representative rats. The white arrow points to the ongoing demyelinated area and the presence of gliosis, and the asterisk indicates the area with ongoing myelin alterations accompanied by gliosis. Scale bar: 1 mm (D,E,H,I) and 200 µm (F,G,J,K).
This, together with the confounding effect of gliosis on the MRI parameters, explains the non-significant correlations between the MRI parameters and myelin density in the remyelination phase, even though there was an evident recovery of the MRI parameters, especially RAFF4 and MTR, from the demyelination values. The influence of gliosis on diffusion metrics is consistent with the earlier reports of Budde et al. (2011). Consistently, we observed a correlation between cellularity in Nissl staining and the diffusion parameters but not the RAFF4 or MT parameters, further emphasizing the different sensitivities of these techniques for detecting myelination and cellularity. One limitation of our study is that, in spite of careful manual alignment of histology with MRI by an expert in the field, the partial volume effect and the challenge of selecting the same ROIs in MRI and histology could have influenced our results. In addition, the limited sampling in histology vs. the slice thickness in MRI may have affected our assessments of the correlations. CONCLUSION Our data confirm the sensitivity of RAFF4 and MT for detecting the myelin content in demyelinated lesions, but now reveal that remyelination is associated with a recovery of RAFF4 and MT toward normal values. DTI metrics displayed a distinct pattern of changes in the remyelination phase, likely reflecting ongoing changes not only in the myelin content but also in the architecture of the myelin sheaths, as well as the presence of gliosis. The combination of RAFF4, MT and DTI has the potential to differentiate between normal, demyelinated and remyelinated axonal bundles and gliosis, thus making possible a unique non-invasive characterization of white matter pathologies in several neurological diseases. Further studies will be required to evaluate the sensitivity of multiple MRI modalities to detect remyelination in areas with more isotropic fiber distributions, where RAFF4 has demonstrated its superiority over DTI (Lehto et al., 2017). DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The animal study was reviewed and approved by the Animal Ethics Committee of the Provincial Government of Southern Finland. AUTHOR CONTRIBUTIONS KH, HL, and RS participated in the design of the work, acquisition, analysis, interpretation of data, and preparing the manuscript. AS, AN, MB, and JV participated in the design of the work and preparing the manuscript. ShM and SiM participated in the design of the work, interpretation of the data and preparing the manuscript. AS and OG participated in the design of the work, analysis, interpretation of data and preparing the manuscript. All authors contributed to the article and approved the submitted version.
2021-03-04T14:23:30.919Z
2021-03-04T00:00:00.000
{ "year": 2021, "sha1": "1b6878454d6e024dacfbf8e26b103efc52c44680", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2021.625167/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "1b6878454d6e024dacfbf8e26b103efc52c44680", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
255776043
pes2o/s2orc
v3-fos-license
Evidence for an RNAi-independent role of DICER-LIKE2 in conferring growth inhibition and basal antiviral resistance Higher plants encode four DICER-LIKE (DCL) enzymes responsible for the production of small non-coding RNAs which function in RNA interference (RNAi). Different RNAi pathways in plants effect transposon silencing, antiviral defense and endogenous gene regulation. DCL2 acts genetically redundantly with DCL4 to confer basal antiviral defense, but in other settings, DCL2 has the opposite function of DCL4, at least in formal genetic terms. For example, knockout of DCL4 causes growth defects that are suppressed by inactivation of DCL2. Current models maintain that the biochemical basis of both of these effects is RNAi via DCL2-dependent small interfering RNAs (siRNAs). Here, we report that neither DCL2-mediated antiviral resistance nor growth defects can be explained by silencing effects of DCL2-dependent siRNAs. Both functions are defective in genetic backgrounds that maintain high levels of DCL2-dependent siRNAs, either through specific point mutations in DCL2 or simply by reducing DCL2 dosage in plants heterozygous for dcl2 knockout alleles. Intriguingly, however, all functions of DCL2 depend on it having some level of catalytic activity. We discuss this requirement for catalytic activity, but not for the resulting siRNAs, in the light of recent findings that reveal a function of DCL2 in activation of innate immunity in response to cytoplasmic double-stranded RNA. INTRODUCTION Small non-coding RNAs are central players in the control of genetic information in eukaryotic organisms. Plants make pervasive use of small interfering RNAs (siRNAs) to defend against transposable elements and viruses (Ding and Voinnet, 2007; Law and Jacobsen, 2010), and of both siRNAs and microRNAs (miRNAs) to regulate endogenous gene expression programs during development and in response to changes in the environment (D'Ario et al., 2017; Song et al., 2019). Both classes of regulatory small RNA are produced from longer double-stranded (ds) precursor RNA molecules by multidomain DICER-LIKE (DCL) ribonucleases (Fukudome and Fukuhara, 2017). DCLs use two RNaseIII domains for dsRNA cleavage and a Piwi-Argonaute-Zwille (PAZ) domain to help anchor the extremity of staggered dsRNA substrates and position the catalytic center at the correct distance from the dsRNA end to produce small RNA duplexes of a well-defined size between 21 and 24 nt, depending on the exact DCL enzyme (Macrae et al., 2006; MacRae et al., 2007). In addition, all DCLs contain conserved DExH-box helicase domains (Song and Rossi, 2017) that may facilitate binding of blunt-end substrates and processive action of the enzyme on long dsRNA substrates (Niladri et al., 2018). The staggered duplex product released upon DCL-mediated processing subsequently associates with an ARGONAUTE (AGO) protein to form a mature RNA Induced Silencing Complex (RISC) upon duplex unwinding in the AGO-small RNA complex (Matranga et al., 2005; Rand et al., 2005). Higher plants encode four distinct DCLs exemplified by arabidopsis DCL1, DCL2, DCL3 and DCL4 (Margis et al., 2006), of which three have clearly defined functions. DCL1 makes miRNAs, mostly 21 nt in size (Park et al., 2002), while DCL3 makes 24-nt siRNAs involved in transcriptional silencing of repetitive elements through DNA methylation (Xie et al., 2004).
DCL4 makes different types of 21-nt siRNAs, most importantly the bulk of antiviral siRNAs and endogenous siRNAs from dsRNA synthesized by the RNA-dependent RNA polymerase RDR6 following initial targeting of a precursor transcript by one or more miRNAs (Dunoyer et al., 2005; Gasciolli et al., 2005; Xie et al., 2005; Yoshikawa et al., 2005; Bouche et al., 2006; Deleris et al., 2006). The exact functions of DCL2 remain less clearly defined. Biochemically, DCL2 is distinct from other DCLs in two regards. First, it produces 22-nt siRNAs, different from the 21-nt and 24-nt size classes produced by other plant DCLs (Xie et al., 2004). This could have biological significance, because the 22-nt size class has a unique, and so far not clearly explained, ability to trigger dsRNA production and siRNA amplification via RDR6 (Chen et al., 2010; Cuperus et al., 2010). Indeed, DCL2 is required for RDR6-dependent siRNA amplification in transgene silencing, and inactivation of DCL4 greatly stimulates this process, probably because DCL2 has no competition for transgene dsRNA in the absence of DCL4 (Mlotshwa et al., 2008; Parent et al., 2015). Second, in contrast to other DCLs, its biochemical activity has only been observed indirectly, through the absence of 22-nt siRNAs in dcl2 knockout mutants (Xie et al., 2004; Qi et al., 2005; Blevins et al., 2006; Bouche et al., 2006; Deleris et al., 2006; Nagano et al., 2014), perhaps suggesting lower enzymatic activity than that of other DCLs, or that it requires labile co-factors that preclude detection of activity in cell-free systems. Nonetheless, a large body of evidence indicates that DCL2 has important biological functions that manifest themselves in several ways, so far regarded as distinct. First, together with DCL4, it is required for basal antiviral resistance, i.e. for the defense against viruses not adapted to specific hosts by employment of virulence factors capable of suppressing the host antiviral RNAi response (Deleris et al., 2006; Wang et al., 2011; Andika et al., 2015). Experimentally, such a setting is typically mimicked by inactivation of anti-RNAi virulence factors of an adapted virus. The requirement for DCL2 in basal antiviral resistance is observed both at local infection sites and systemically (Deleris et al., 2006). Second, DCL2 is particularly important for systemic RNAi (Taochy et al., 2017; Chen et al., 2018), an activity that may be linked to its alleviation of symptoms even in infections with adapted viruses (Zhang et al., 2012). Third, in the absence of DCL4 or certain RNA decay factors, DCL2 confers a developmental phenotype with delayed postembryonic growth and leaf anthocyanin accumulation and/or yellowing (Parent et al., 2015; Zhang et al., 2015). This phenotype is strongly exacerbated by simultaneous mutation of DCL4 and DCL1 (Bouche et al., 2006), and in certain combinations of dcl4 and mutants in RNA decay, it becomes outright lethal at the embryonic stage (Zhang et al., 2015). At present, explanations for all of the important biological roles of DCL2 rely on the assumption that their biochemical basis is the production of siRNAs that silence complementary targets through the action of RISC. In basal antiviral resistance, DCL2-mediated 22-nt siRNAs can indeed be detected, albeit only when DCL4 is inactivated, such that the disappearance of both 21- and 22-nt siRNAs in dcl2 dcl4 mutants correlates with complete loss of basal antiviral resistance (Deleris et al., 2006; Wang et al., 2011).
Nonetheless, even if the 22-nt siRNAs produced by DCL2 could in theory account for DCL2-mediated antiviral resistance, previous observations of partial loss of basal antiviral resistance in the presence of DCL2-dependent 22-nt siRNAs upon inactivation only of DCL4 may also be interpreted to mean that the siRNAs produced by DCL2 are insufficient to confer resistance (Deleris et al., 2006; Wang et al., 2011; Andika et al., 2015). In systemic RNAi, the 22-nt siRNAs produced by DCL2 are proposed to condition efficient amplification of siRNAs via RDR6 in both source and recipient tissues, probably via properties of RISC containing 22-nt siRNA (Taochy et al., 2017). Finally, the severe growth phenotypes conferred by DCL2 in the absence of DCL4 and/or in RNA decay mutants are proposed to result from ectopic silencing by DCL2-dependent 22-nt siRNAs, potentially because of their ability to engage the small RNA amplification machinery in an uncontrolled way. Consistent with this hypothesis, ectopic silencing by 22-nt siRNAs of the nitrate reductase-encoding NIA1/2 and of SMXL4/5, encoding phloem differentiation factors, is observed in dcl4 mutants, and knockout of these genes recapitulates some of the DCL2-dependent defects observed in dcl4 mutants (Zhang et al., 2015; Wu et al., 2017). Clear proof that silencing of SMXL4/5 and/or NIA1/2 by the ectopic 22-nt siRNA populations actually causes the growth defects in dcl4 mutants has not been reported, however, as this would require phenotypic analysis upon specific reversion of silencing of these genes in dcl4 mutant backgrounds; a condition that is difficult to achieve experimentally. Thus, at present it cannot be regarded as established fact that SMXL4/5 or NIA1/2 silencing underlies the DCL2-dependent growth defects observed in dcl4 mutants. Indeed, the fact that the presence of DCL2 in some genetic backgrounds (e.g. dcl4 ski2, dcl4 sgt1b, or xrn4 ski2; Zhang et al., 2015; Nielsen et al., 2023) can give rise to phenotypes much stronger than the defects observed, for instance, in smxl4/5 knockout mutants (Zhang et al., 2015; Wu et al., 2017) suggests, as a minimum, that additional relevant targets of ectopic DCL2-dependent siRNAs remain to be identified. Alternatively, and similar to the situation described for the role of DCL2 in basal antiviral resistance, the DCL2-dependent siRNA populations may not, as a matter of fact, cause the severe DCL2-dependent developmental phenotypes. In principle, answers to this fundamental question should be tangible by rigorous mutational analysis of the DCL2 gene: if DCL2 has functions distinct from mere production of siRNAs guiding RISC to silence complementary targets, it should be possible to isolate separation-of-function alleles that either maintain siRNA production, yet lose other DCL2 functions, or lose siRNA production while maintaining other DCL2 functions. Here, we report the engineering of a series of DCL2 point mutants affecting the catalytic activity of the RNaseIII domains and the ATP-binding/hydrolysis activity of the helicase domain, as well as 10 novel alleles of DCL2 isolated by a forward genetic screen for suppressors of DCL2-dependent growth inhibition. The results show that while all reported functions of DCL2 rely on RNaseIII catalytic activity, separation-of-function alleles that allow wild type levels of 22-nt siRNAs to accumulate can be identified. Surprisingly, simple reduction of DCL2 dosage in plants heterozygous for a DCL2 wild type allele is also sufficient to achieve similar separation of function.
These results underscore the need to revise current models for DCL2 function, and we discuss them in the light of findings in the accompanying paper that reveal a function of DCL2 in activation of innate immunity in response to cytoplasmic double-stranded RNA (Nielsen et al., 2023). Growth defects occur upon loss of DCL4 protein and depend on DCL2 dosage and dsRNA A key motivation to undertake the present work stems from our observation that when young wild type and dcl4 mutant arabidopsis seedlings were transferred from agar plates to soil, but not when germinated directly in soil, dcl4 knockout mutants often exhibited growth defects and chlorosis (Figure 1A,B), albeit with incomplete penetrance and batch-to-batch variability in the exact frequency of phenotypic penetrance. Similar growth defects in dcl4 mutants have been noted by others (Gasciolli et al., 2005; Parent et al., 2015; Wu et al., 2017). To analyze whether the incompletely penetrant growth phenotype of dcl4 knockout mutants was caused by loss of a specific class of DCL4-dependent siRNA, we first used an allelic series of dcl4 mutants in two different genetic backgrounds (Col-0 and Ler), including insertion, nonsense and missense alleles that specifically abolish the production of only certain types of DCL4-dependent siRNAs (Figure 1C) (Dunoyer et al., 2005; Xie et al., 2005; Dunoyer et al., 2007; Liu et al., 2012b; Montavon et al., 2018). These analyses established that the growth phenotype was only observed at an appreciable frequency upon loss of DCL4 protein (i.e. in insertion and nonsense mutants; Figure 1A-C). We used an artificial miRNA targeting DCL2 (amiR-DCL2) to verify that the growth defects observed in mutants homozygous for the nonsense allele dcl4-15 in accession Ler were DCL2-dependent (Figure 1B). This was confirmed with the knockout allele dcl2-1 (Xie et al., 2004) introduced into dcl4-2t, both in accession Col-0 (Figure 1A). Notably, the G587D substitution in dcl4-8, which causes loss of all classes of DCL4-dependent siRNAs and gives rise to an enzyme that remains bound to dsRNA substrates (Montavon et al., 2018), did not produce growth inhibition and chlorosis phenotypes (Figure 1A), suggesting that access of DCL2 to dsRNA is required to trigger these phenotypes. In addition, we observed that the penetrance of leaf yellowing and growth inhibition was 100% in the dcl4-2t dcl1-11 double mutant (accession Col-0), and noticed that a dcl2-1 knockout allele even in the heterozygous state was sufficient to cause substantial suppression of this phenotype (Figure 1D). Finally, we verified previous results (Zhang et al., 2015) showing that inactivation of RDR6 fully suppressed the growth inhibition and chlorosis phenotypes in dcl4 knockout mutants (Figure 1E). These initial observations suggest that loss of DCL4 protein, not only activity, causes growth defects that depend on dsRNA availability and accessibility, and on DCL2 dosage. RNaseIII and helicase activities of DCL2 are required for growth and antiviral defense phenotypes As a first step to define the biochemical basis of the function of DCL2 in causing growth phenotypes and antiviral defense, we engineered point mutants in the RNaseIII catalytic site (DCL2 E1102A) or in the DExH ATP-binding site (DCL2 D152N and DCL2 E153Q). We then tested stable, transgenic lines expressing these DCL2 mutants in the dcl4-2t dcl2-1 double knockout background for phenotypic effects and the ability to produce 22-nt siRNAs.
None of those lines exhibited growth phenotypes (Figure 2A), and DCL2-dependent siRNA generation was either completely abolished (catalytic RNaseIII mutant) or severely reduced (Figure 2B). Similarly, resistance to turnip crinkle virus deprived of its silencing suppressor P38 (TCVΔP38; Qu et al., 2008), used to mimic infection by a non-adapted virus, was compromised in transgenic lines expressing these DCL2 mutant proteins in dcl4 dcl2 knockout backgrounds (Figure 2C). Thus, the Dicer activity of DCL2 is required both to confer growth phenotypes in the absence of DCL4, and to confer basal antiviral resistance. We also included the helicase mutant S494V in this initial mutational analysis, because the analogous A-to-V substitution in Drosophila Dcr-2 causes defective induction of a defense gene in response to viral infection despite abundant virus-derived siRNA accumulation (Deddouche et al., 2008). Thus, the S494V mutant may be regarded as a candidate separation-of-function mutant. This mutant behaved largely similarly to the ATP-binding site mutants of the helicase domain, with suppression of growth phenotypes, defective basal resistance to TCVΔP38, and severely reduced endogenous 22-nt siRNA production, in particular from the endogenous hairpin IR71 (Figure 2A-C). However, in contrast to the catalytic RNaseIII and DExH helicase mutants, the DCL2 S494V mutant showed detectable accumulation of the RDR6-dependent siR255, and in at least one of the lines examined, TCV-derived siRNAs were abundant despite reduced antiviral resistance (Figure 2C). A number of conclusions and inferences follow from these initial mutational analyses. First, the results obtained with the lines expressing the catalytically inactive DCL2 E1102A mutant show that catalytic activity is required for both DCL2 functions examined. We also infer that dsRNA binding by DCL2 is not sufficient to cause growth inhibition and antiviral resistance, since the domains known to be required for dsRNA binding in DCL1 (Wei et al., 2021) remain intact in the catalytic mutants, and since the disappearance of 24-nt viral siRNAs in DCL2 E1102A (Figure 2C) suggests a dominant-negative influence on other DCLs, consistent with dsRNA binding. Second, we conclude that the helicase activity of DCL2 is required for siRNA production, perhaps suggesting that DCL2 acts processively and that the ATP hydrolysis cycle of the helicase domain is required for translocating the dsRNA between two catalysis events, as reported for DCL1 (Liu et al., 2012a; Wei et al., 2021). Third, although the residual siRNA-producing activity detected in the DCL2 S494V mutant cannot be regarded as a strong argument for separation of function in and of itself, it clearly motivates further efforts to investigate the feasibility of separation of function, because plants expressing DCL2 S494V are proficient for some level of virus-derived siRNA production, yet unable to confer full antiviral resistance. Contrasting effects of DCL2 dosage on siRNA accumulation and growth inhibition We next returned to the observation that the growth phenotypes of dcl4 dcl1 could be suppressed by dcl2-1 in the heterozygous state. Indeed, dcl4-2t dcl2-1/+ also showed significantly reduced penetrance of growth phenotypes compared to dcl4-2t (Figure 3A). Remarkably, heterozygosity of DCL2 also compromised TCVΔP38 resistance in combination with inactivation of DCL4 (Figure 3B).
Because of the incomplete penetrance of dcl4-2t single mutants, we sought to corroborate the suppression of growth phenotypes by DCL2 heterozygosity in a genetic background in which the DCL2-dependent growth phenotype is fully penetrant. For this analysis, we chose the dcl4 sgt1b double mutant, shown in the accompanying paper to exhibit strong and fully penetrant DCL2-dependent growth inhibition (Nielsen et al., 2023). Compared to dcl4-2t dcl1-11, which also exhibits full penetrance of growth inhibition, dcl4 sgt1b is easier to handle as it does not suffer from the severely reduced fertility and extremely slow growth of dcl4-2t dcl1-11. Both dcl2-1 and the nonsense dcl2-11 allele (Table S1, see below) substantially suppressed dcl4 sgt1b in the heterozygous state (Figure 3C-E). Further analysis of a series of heterozygous dcl2 point mutants (Table S1, see below) yielded similar results (Figure S1). Importantly, this semi-dominance of dcl2 mutant alleles was not paralleled by a defect in siRNA production. dcl4 dcl2-1/+ seedlings accumulated levels of endogenous hairpin siRNAs (IR71) similar to dcl4 (Figure 4A), and small RNA-seq of sterile-grown, asymptomatic seedlings did not reveal reduced 22-nt siRNA levels transcriptome-wide in dcl4-2t mutants heterozygous for dcl2-1 compared to dcl4-2t mutants homozygous for the DCL2 wild type allele (Figure 4B). This was also true for several individual genes whose ectopic silencing in dcl4 was previously suggested to play a role in inducing DCL2-dependent growth phenotypes (Figure 4C). These results show that full DCL2 dosage is required both for basal antiviral resistance and to induce growth phenotypes in the absence of DCL4, but not to reach wild type levels of 22-nt siRNAs. DCL2-dependent growth inhibition is not caused by silencing siRNAs To corroborate the important conclusion that silencing via ectopic DCL2-dependent siRNAs does not cause growth inhibition, we conducted a forward genetic screen for suppressors of DCL2-dependent growth phenotypes in dcl4 sgt1b, in this case with an eye towards recovery of informative mutant alleles of DCL2. We identified 10 mutant alleles of DCL2, including several nonsense alleles, as expected (Figure 5A, Table S1). Four point mutations clustering in the ATP-binding domain of the helicase (dcl2-6 to dcl2-9) and one in the PAZ domain (dcl2-12) were investigated further. These mutations suppressed the growth phenotype (Figure 5B), but did not abrogate the production of an endogenous 22-nt DCL2-dependent siRNA in the absence of DCL4, as assessed by RNA blot (Figure 6A). Since dcl2-12 did not fully suppress the dcl4 sgt1b growth phenotype (Figure 5A), it may simply be a weak mutant allele of DCL2 that reduces its different functions partially, but evenly. In contrast, the mutants located in the helicase conferred strong suppression of dcl4 sgt1b growth phenotypes. Because dcl2-8 was least affected for accumulation of the endogenous siRNA, siR255 (Figure 6A), we selected it for small RNA-seq analysis. This analysis showed that, in contrast to the dcl2-1 knockout mutant, the accumulation of 22-nt siRNAs in dcl2-8 mutants was similar to wild type (Figure 6B). In addition, the resistance to TCVΔP38 was compromised in dcl2-8 and in several other point mutants, despite the fact that infection triggered abundant 22-nt virus-derived siRNA production, most notably in dcl2-8 and dcl2-12 (Figure 6C,D).
We conclude that DCL2-dependent growth defects can be suppressed by point mutations in DCL2 despite steady-state siRNA profiles similar to those of plants with wild type DCL2. Similarly, defective antiviral responses can be observed despite substantial production of DCL2-dependent virus-derived siRNAs. We note, however, that even if near-wild type levels of DCL2-dependent siRNAs accumulate in uninfected dcl2-8 mutants in vivo, the mutant protein is unlikely to exhibit full catalytic activity, as judged by the relative amounts of viral gRNA and 22-nt virus-derived siRNAs in wild type and mutant (Figure 6C,D). We could not directly test the biochemical activity of this and other DCL2 mutant proteins, because, contrary to DCL4, immunopurified DCL2 did not exhibit detectable activity towards a dsRNA substrate in vitro (Figure S2). In summary, the results of analyses of plants with reduced DCL2 dosage or with specific point mutations in DCL2 indicate that DCL2-dependent growth restriction, and probably even antiviral resistance at local infection foci, do not rely on silencing by siRNAs. Intriguingly, however, the results also suggest that full catalytic activity of DCL2 is required for both of these DCL2-dependent effects, an intricate point that we treat in some detail in the Discussion. Incoherence between biology and known biochemistry of DCL2: implications and solutions The results of this study strongly indicate that two major effects of DCL2, growth restriction in the absence of DCL4 and basal antiviral resistance together with DCL4 in inoculated tissues, do not depend on silencing by DCL2-dependent siRNAs. The argument relies on the fact that both effects can be suppressed in genetic backgrounds, either homozygous point mutants of DCL2 or knockout alleles of DCL2 in the heterozygous state, that produce substantial amounts of DCL2-dependent siRNAs. Indeed, the fact that the siRNA profiles of dcl4-2t dcl2-8 and dcl4-2t dcl2-1/+ are similar to that of dcl4-2t, yet phenotypic suppression is substantial (for dcl4-2t dcl2-1/+) or complete (for dcl4-2t dcl2-8), is difficult to reconcile with causation of these phenotypes by silencing siRNAs. Second, do our results imply that 22-nt silencing RNAs produced by DCL2 are unable to act as guides for RNAi? Not in the least. The evidence for a specific relevance of DCL2-dependent 22-nt siRNAs in amplified transgene silencing is clear (Mlotshwa et al., 2008; Parent et al., 2015), and is almost certainly a manifestation of the activity of RISC loaded with the 22-nt siRNAs (Sakurai et al., 2021; Vigh et al., 2021; Yoshikawa et al., 2021). Indeed, our results remain compatible with a role of DCL2-dependent siRNAs in some aspects of antiviral defense. The infection assays reported here all focused on the analysis of inoculated leaves, hence local infection sites rather than tissues reached upon systemic movement of the virus. There is good evidence that 22-nt siRNAs produced by DCL2 potentiate systemic RNA silencing (Taochy et al., 2017; Chen et al., 2018), and play roles in defense against adapted viruses (Zhang et al., 2012). We therefore suggest that a major antiviral function of DCL2 at the initial stage of infection involves a function that does not depend on the silencing activity of 22-nt siRNAs, while 22-nt virus-derived siRNAs play a major role in immunizing uninfected cells systemically. Furthermore, silencing by endogenous DCL2-dependent siRNAs, such as those causing color variation in the seed coat of soybean (Jia et al., 2020), can clearly take place.
Thus, the implications of the results presented here should not be generalized further than what they aim to address: whether silencing by DCL2-dependent siRNAs causes basal antiviral resistance at local infection sites and growth phenotypes upon loss of DCL4. Third, if silencing by DCL2-dependent siRNAs is not the cause of growth inhibition and basal antiviral resistance, then what is? In the accompanying paper, a simple explanation is provided that re-establishes coherence between the biology and the molecular function of DCL2 (Nielsen et al., 2023). Together with specific intracellular immune receptors of the nucleotide-binding leucine-rich repeat type (NLRs), dicing by DCL2 causes activation of innate immune responses, such that DCL2 in wild type plants mediates cytoplasmic sensing of excess dsRNA, for example during a viral infection in which RNAi is incapacitated. This activity "misfires" upon loss of DCL4, such that processing of endogenous dsRNA by DCL2 causes autoimmunity via NLRs that manifests itself as the strong DCL2-dependent growth inhibition phenotypes analyzed here and by others. The genetics of this model is solid: plants with a DCL2-dependent growth phenotype indeed exhibit a classical autoimmune gene expression profile, and growth phenotypes and immunity-related gene expression can be strongly suppressed by inactivation of two NLRs. Similar to mutation of DCL2, inactivation of those NLRs causes loss of basal antiviral resistance when combined with dcl4, but not alone (Nielsen et al., 2023). Thus, together, the two studies provide a satisfactory framework to understand the important biological effects of DCL2 without the implication of a direct silencing activity of its siRNA products. Many important mechanistic questions now await answers, one of which is particularly nagging and is directly related to the mutational analysis of DCL2 presented here: why is the catalytic activity of DCL2 required for all of its functions when the siRNAs it produces are not? A conundrum and its possible solution: DCL2 activity, but not siRNAs, is required for activation of immunity Since this study establishes that DCL2-dependent growth phenotypes are dependent on (i) endogenous dsRNA (produced by RDR6), (ii) accessibility of DCL2 to dsRNA, and (iii) the catalytic activity of DCL2, and the accompanying study shows that the growth phenotypes are in fact manifestations of NLR-mediated autoimmunity (Nielsen et al., 2023), the key question becomes the following: how does DCL2 facilitate sensing of dsRNA to induce NLR-mediated immune responses? In the closing paragraph, we offer some thoughts, naturally somewhat speculative at this point, on this important problem. We exclude the possibility that mere dsRNA binding to DCL2 is sufficient to trigger immune signaling, because the catalytically dead RNaseIII mutant of DCL2 was completely defective in inducing growth phenotypes in dcl4 dcl2 mutants. Thus, in this regard, dsRNA sensing via DCL2 differs from mechanisms of cytoplasmic dsRNA sensing in mammals mediated by the Dicer-related helicases RIG-I and MDA5 (Rehwinkel and Gack, 2020). This biochemical difference in dsRNA sensing may be related to the higher production of endogenous dsRNA in plants than in animals, by virtue of the existence of RNA-dependent RNA polymerases with crucial functions in the generation of endogenous siRNAs (Matthew et al., 2011).
Ideally, therefore, to avoid autoimmunity, plants would require a sensor system that switches on immune responses only when dsRNA is present in such quantities that DCL2 produces a high flux of siRNAs. Such a kinetic sensor system may explain why reduced DCL2 dosage and mutants with reduced activity that nonetheless produce steady-state siRNA levels similar to wild type exhibit defects in immune activation. It is at present unclear whether DCL2 itself is the sensor, a scenario that would require physical association of R proteins with DCL2, or whether siRNA products of DCL2, when made at sufficiently high rates, have properties that allow their perception by other sensors. Thus, sensing at the level of RISC cannot be excluded, a scenario which would conceptually parallel the recently described prokaryotic plasmid-sensing systems that rely on oligomerization of heterodimers of a variant AGO protein and a Toll/Interleukin-1 receptor domain protein upon pervasive base pairing of plasmid-derived, AGO-bound guide RNAs to high-copy plasmids (Koopal et al., 2022). DECLARATION OF INTERESTS The authors declare that they have no competing interests. AUTHOR CONTRIBUTIONS CPSN constructed most lines expressing point mutants of DCL2 in dcl2 dcl4, conducted all steps of the dcl4 sgt1b suppressor screen, and acquired and analyzed all high-throughput sequencing data. LA-H acquired the first evidence for defense activation in dcl4 and dcl4 dcl1 mutants, carried out phenotypic analysis of dcl4 mutants, noticed the effect of dcl2 heterozygosity in dcl4 dcl1 mutants, constructed dcl4 rdr6 double mutants, initiated work on DCL2 catalytic and helicase mutants, and participated in discussions with PB leading to formulation of the model for DCL2 function proposed here and in the accompanying paper. LH constructed and characterized dcl4-15/amiR-DCL2 lines. NP constructed the DCL2 wild type plasmid used as a starting point for generation of point mutants, and shared the FLAG-HA-DCL4 transgenic line prior to publication. SUA instructed and supervised genetic mapping based on high-throughput sequencing data. PB conceived the project, acquired funding, designed experiments, supervised work, and wrote the manuscript with input from all authors. ACCESSION NUMBERS sRNA sequencing data sets generated in this study were submitted to the European Nucleotide Archive under accession number PRJEB52819. Plant genotyping and phenotyping All T-DNA lines were genotyped using PCR with two different primer sets: one primer set to detect the wild type allele and one to detect the T-DNA insertion allele, using the primers listed in Table S2. Quantification of incompletely penetrant phenotypes, including the statistical analyses of the observations, was done exactly as described in the accompanying paper (Nielsen et al., 2023). Cloning and construction of transgenic lines The vector for expression of a genomic, double FLAG-, double HA-tagged version of DCL2 under the endogenous promoter [Pro(DCL2):2xFLAG-2xHA-DCL2 WT :ter(35S)] was constructed in the pB7m34GW vector by multisite Gateway recombination, as described by Karimi et al. (2005). Briefly, the DCL2 promoter was cloned by amplifying the 2.6 kb of DNA sequence immediately upstream of the coding sequence start codon, using primers that added Gateway recombination sites to enable cloning into a pDONR4-1R plasmid. The DCL2 genomic coding sequence was amplified using primers that add Gateway recombination sites for cloning into a pDONR2R-3 plasmid. The artificial miRNA targeting DCL2 (amiR-DCL2) was designed using the WMD3 - Web MicroRNA Designer (Schwab et al., 2006).
A DNA fragment comprising amiR-DCL2 inside the pri-miR319a backbone and flanked by attL1 and attL2 sites (Supplementary Table S3) was synthesized by Integrated DNA Technologies. Using the LR reaction, the pri-amiR-DCL2 was then introduced into a derivative of pGWB502 (Nakagawa et al., 2007) in which the CaMV 35S promoter was replaced by the Cestrum Yellow Leaf Curling Virus CmpC promoter (Stavolone et al., 2003). The resulting construct was then transformed into dcl4-15 mutants. All primer sequences are listed in Table S2. Plant transformation and selection of transgenic plants and lines were done as described in the accompanying paper (Nielsen et al., 2023). Construction and analysis of plants heterozygous for dcl2 alleles To construct dcl4-2t dcl2-1/+, we emasculated dcl4-2t dcl2-1 flowers, and used F1 seeds resulting from fertilization with pollen from dcl4-2t flowers. After analysis (phenotype counts or TCVΔP38 infections), the genotypes of individual plants were confirmed to be dcl4-2t dcl2-1/+ by PCR using the primers listed in Table S2. The dcl4-2t dcl2-1/+ genotype was also confirmed for plants pooled for RNA extraction for small RNA northern and sequencing analysis. For construction of dcl4-2t sgt1b heterozygous for the different dcl2 alleles (dcl4-2t sgt1b dcl2-x/+), we emasculated dcl4-2t sgt1b dcl2-x, because these plants were easier to manipulate than dcl4-2t sgt1b. Gynoecia of emasculated flowers were pollinated with pollen from dcl4-2t sgt1b. F1 plants resulting from these crosses were examined for phenotypes. All F1 plants were genotyped by PCR using the primers listed in Table S2, and confirmed to be dcl4-2t sgt1b dcl2-x/+. TCVΔP38 infection and RNA analyses In vitro transcription of TCVΔP38 RNA and infection by hand inoculation of silicon carbide-rubbed leaf surfaces were done exactly as described in the accompanying paper (Nielsen et al., 2023). RNA and protein extractions as well as gel blot analyses of proteins, small RNAs and viral genomic RNAs were also done exactly as described in the accompanying paper (Nielsen et al., 2023). Small RNA sequencing and analysis Libraries for small RNA-seq were prepared from 5 µg of total RNA using the NEBNext Multiplex Small RNA Library Prep kit for Illumina. Reads mapping to specific genes were counted using featureCounts 1.6.3 (Liao et al., 2014). For the analysis of specific sRNA sizes, the sRNA reads were sorted according to size prior to counting, using AWK. Calculations of the numbers of reads of specific sizes mapping to all or specific genes, as well as the different plots, were done in R. Dicer assays A 265 bp PHABULOSA fragment in pGEM-T-Easy was PCR-amplified with M13 primers and used for sense and antisense in vitro transcription with T7 and SP6 polymerases (Promega) in the presence of 1.2 µM α-32P-UTP (10 µCi/µL; Hartmann Analytic). Radioactively labeled RNA fragments were PAGE-purified, and volumes adjusted so that each contained 1000 cps/µL. Radioactive dsRNA was produced by annealing 1 µL sense to 1 µL antisense RNA in a total volume of 20 µL of 1× annealing buffer (10 mM Tris-HCl pH 8, 30 mM NaCl, 10 µM EDTA, 1 U RiboLock). The annealing reaction was done in a PCR machine by heating to 75°C for 5 minutes followed by gradual cooling to room temperature in the PCR block for 1 hour without the use of the Peltier element.
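The size-sorting step of the small RNA analysis described above (performed by the authors with AWK before counting with featureCounts) amounts to binning adapter-trimmed reads by length. The sketch below is an illustrative Python equivalent, not the authors' pipeline; the input filename is hypothetical.

from collections import Counter
import gzip

def srna_size_counts(fastq_path, sizes=(21, 22, 24)):
    """Count adapter-trimmed small RNA reads per length class."""
    counts = Counter()
    opener = gzip.open if fastq_path.endswith(".gz") else open
    with opener(fastq_path, "rt") as fh:
        for i, line in enumerate(fh):
            if i % 4 == 1:                  # second line of each FASTQ record
                counts[len(line.strip())] += 1
    return {n: counts.get(n, 0) for n in sizes}

# Usage with a hypothetical file of trimmed reads:
# print(srna_size_counts("trimmed_reads.fastq.gz"))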
EMS mutagenesis and screen for dcl4-2t sgt1b suppressors Approximately 270 mg of dcl4-2t sgt1b seeds, an estimated ~14,000 seeds, were incubated with 8 ml of 0.74% ethyl methanesulfonate (EMS, Sigma) in 0.1 M NaH2PO4, pH 5.5, 5% dimethyl sulfoxide (DMSO) in 15 ml Falcon tubes on a rotating wheel for 4 hours at room temperature. After 5 washes in 0.1 M Na2S2O3, seeds were dispersed in 0.1% agar and spread directly in soil in families of ~80 individuals. M2 seeds were harvested in pools from ~80 mature M1 plants. About 3000 M2 seeds from each pool were germinated directly in soil and grown under short day conditions (8-hour light, 16-hour dark) in the greenhouse. Plants that did not display the shoot-inhibition phenotype were harvested individually, and the seeds were kept for verification of phenotype and mapping. Mapping of EMS mutants To map the causal mutations of the recovered suppressor mutants, mapping populations were generated by crossing to dcl4-15 sgt1b in accession Landsberg. F1 seeds were germinated directly in soil, and F2 seeds obtained by self-pollination were harvested. 400 F2 seeds of each suppressor mutant mapping population were then germinated under short day conditions (8-hour light, 16-hour dark), and flower buds from 30-100 symptomless plants of each F2 population were harvested in liquid nitrogen and pooled. DNA was phenol:chloroform extracted, and DNA libraries were prepared using the NEBNext Ultra II DNA Library Prep Kit for Illumina (#E7645S) following the manufacturer's instructions. The libraries were then sequenced by Novogene Bioinformatics Technology Co., Ltd using paired-end 150 bp sequencing with a minimum of 20 million reads per library. The obtained FASTQ files were trimmed using Cutadapt 2.4 (adapter sequence: AGATCGGAAGAGC), and read quality was assessed both before and after trimming using FastQC. We then mapped the reads using Bowtie2 2.2.3 (Langmead and Salzberg, 2012), and used SHOREmap to obtain the candidate chromosome region and a list of candidate mutations (Schneeberger et al., 2009). Asterisks indicate significance of difference compared to dcl4-2t (not calculated for the obvious cases of Col-0 WT and dcl4-2t dcl2-1), ***, P < 0.001 (χ² test); NS, not significant. B, Left and middle panels, rosette phenotypes of the indicated genotypes. Growth conditions, photographs and quantifications as described in A. Right panel, small RNA blot showing the expression of the artificial miRNA targeting DCL2 (amiR-DCL2), and the disappearance of the DCL2-dependent 22-nt siRNAs derived from IR71 in the dcl4-15 amiR-DCL2 line. C, List of dcl4 alleles used in this study. D, Results of a complete genotyping (for the dcl1-11, dcl4-2t and dcl2-1 alleles) and phenotyping of an F2 population of 414 individuals resulting from a cross of dcl1-11 to dcl4-2t dcl2-1. The observed individuals of each genotype are sorted according to the phenotypic categories described in A. E, Representative rosettes of dcl4-2t and dcl4-2t rdr6-12 plants grown as described in A. For dcl4-2t, plants are circled in colors that indicate the severity of growth arrest. Size bars, 2 cm. Figure 3. Full DCL2 dosage is required for DCL2 to induce growth defects and for basal antiviral defense A, Quantification of rosette phenotypes of the indicated genotypes as in Figure 2A. *, significance of difference P < 0.05 (χ² test).
B, RNA blot of total RNA isolated from leaves either mock-treated or inoculated with TCVΔP38 RNA at 5 days post-inoculation. The blot was hybridized to a radiolabeled probe complementary to TCV. g, genomic RNA; sg, subgenomic RNAs. EtBr staining of the gel prior to blotting is shown as loading control. C, Rosette phenotypes of the indicated genotypes (Col-0 WT; dcl4-2t sgt1b; and dcl4-2t sgt1b combined with dcl2-8, dcl2-6, dcl2-9, dcl2-12 or dcl2-1).

Figure 5. Phenotypes of dcl2 point mutants that suppress growth defects in dcl4 sgt1b. A, Schematic overview of dcl2 mutant alleles recovered by forward genetic screening for dcl4-2t sgt1b suppressors. Green, missense mutations resulting in at least partial Dicer activity; blue, missense or nonsense mutations resulting in complete loss of Dicer activity. Domain designations: DExH and HelicC, helicase domains; DUF283, Dicer dsRNA-binding fold; PAZ, Piwi-Argonaute-Zwille domain; RNase III, RNase III domains; dsRBD, dsRNA-binding domain. See also Table S1. B, 49-day-old rosettes of the indicated genotypes grown under short day conditions. Scale bar, 2 cm. See also Figure S1.

B, Normalized total small RNA counts, sorted by small RNA size, of the indicated genotypes. Small RNA-seq was performed on total RNA extracted from 21-day-old seedlings. C, Accumulation of viral gRNA in TCVΔP38-infected leaves at 5 days post-inoculation, assessed by RNA blot. D, Small RNA blot showing TCV-derived siRNAs in the same leaves. g, genomic RNA; sg, subgenomic RNAs. EtBr staining is shown as loading control. See also Figure S2.
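The legends above report χ² tests on phenotype counts. As a minimal sketch of such a test, with made-up counts rather than the study's data, one can compare observed phenotype distributions between two genotypes:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of plants per phenotypic category
# (rows: genotypes; columns: e.g. normal / intermediate / arrested rosettes).
observed = [
    [12, 30, 58],   # e.g. dcl4-2t
    [55, 30, 15],   # e.g. dcl4-2t carrying a suppressor allele
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, P = {p:.2g}")
```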
2023-01-14T14:11:38.656Z
2023-01-27T00:00:00.000
{ "year": 2023, "sha1": "0652deb83ea593be51e62dd2795b38582a4a33e3", "oa_license": "CCBYNC", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/01/11/2023.01.10.523401.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "827e5c7e63d5082f525826ae7bbddb8afa15f336", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology" ] }
15031182
pes2o/s2orc
v3-fos-license
Relative humidity and its effect on aerosol optical depth in the vicinity of convective clouds

The hygroscopic growth of aerosols is controlled by the relative humidity (RH) and changes the aerosols' physical and hence optical properties. Observational studies of aerosol–cloud interactions evaluate the aerosol concentration using optical parameters, such as the aerosol optical depth (AOD), which can be affected by aerosol humidification. In this study we evaluate the RH background and variance values in the lower cloudy atmosphere, as an additional source of variance in AOD values besides the natural changes in aerosol concentration. In addition, we estimate the bias in RH and AOD related to cloud thickness. This provides the much needed range of RH-related biases in studies of aerosol–cloud interaction. Twelve years of radiosonde measurements (June–August) at thirteen globally distributed stations are analyzed. The estimated non-biased AOD variance due to day-to-day changes in RH is found to be around 20%, and the biases linked to cloud development around 10%. Such an effect is important and should be considered in direct and indirect aerosol effect estimations, but it is inadequate to account for most of the AOD trend found in observational studies of aerosol–cloud interactions.

Introduction

Aerosol effects on clouds, through microphysical and radiative processes, are considered one of the biggest uncertainties in climate studies (Forster et al 2007). Aerosols serve as cloud condensation nuclei (CCN) and/or ice nuclei (IN), providing an estimate for the initial number and size distribution of cloud droplets and ice particles. Changes in aerosol loading result in variations in the cloud particles' size distributions and hence impact cloud processes and properties. Clouds developing in a polluted environment have more but smaller droplets (Squires 1958, Rosenfeld and Lensky 1998). As a result, the collision-coalescence process is less efficient (Warner 1968, Albrecht 1989), which delays the onset of warm rain formation. The smaller drops are pushed higher in the atmosphere and, because the freezing process is also less efficient, they freeze at higher altitudes, releasing the latent heat in a colder environment and further increasing the buoyancy and the updraft in the clouds. This chain of processes, leading to deeper convective clouds in a high aerosol loading environment, is called the cloud invigoration effect (Andreae et al 2004, Yuan et al 2011, Wang 2005, Khain et al 2005). This effect can be reflected in other cloud properties, such as a larger cloud fraction (Lin et al 2006, Small et al 2011), larger anvils and stronger electrical activity (Altaratz et al 2010, Yuan et al 2011). The invigoration effect has the potential to produce fundamental climate consequences, through its impact on the radiation budget, the water cycle and the thermodynamic balance of the Earth. For retrieving aerosol properties from space one has to overcome many obstacles. Aerosols have a relatively weak optical signal that often suffers from a low signal-to-noise ratio.
This task becomes even harder in the vicinity of clouds, since the separation between clouds and aerosols is not always clear (Koren et al 2007) and the likelihood of cloud contamination (contribution of small and thin clouds to the aerosol signal) is higher (Zhang et al 2005). In addition, clouds can illuminate the aerosols in their vicinity (3D radiative effects, Marshak et al 2006). Such illumination may falsely be translated as enhanced AOD. In addition to these problems, aerosols can change their properties in the vicinity of clouds due to hygroscopic growth in a humid environment. Therefore, one of the main uncertainties related to aerosol properties is the radiative signature of aerosols due to changes in RH. The commonly used first approximation for CCN concentration in aerosol-cloud interaction studies is AOD (Andreae 2009). Such an approximation assumes similar RH conditions and might be significantly offset when an increase in the AOD due to humidification is interpreted as an increase in aerosol loading. In particular, such biases might pose a problem for studies of cloud invigoration by aerosols, since thicker clouds may be correlated with environments characterized by higher RH. Global circulation model (GCM) studies suggested that aerosol humidification might be responsible for most of the observed correlation between aerosols and cloud properties, such as cloud depth, area or rain rate (Quaas et al 2010, Boucher and Quaas 2013). Previous studies have examined the variations in aerosol optical properties due to hygroscopic growth, both in the laboratory (Flores et al 2012) and in field campaigns (Morley 2003, Carrico et al 2003). The aerosol hygroscopic growth as a function of the surrounding RH value can be described by a single-parameter representation, namely the kappa parameterization (Petters and Kreidenweis 2007):

$g = \left(1 + \kappa\,\frac{\mathrm{RH}}{100 - \mathrm{RH}}\right)^{1/3}$,   (1)

where g is the hygroscopic growth factor, κ is the aerosol hygroscopicity and RH is the relative humidity value (%). The hygroscopic growth equation (equation (1)) has a RH/(100 − RH) kernel that dictates fast-growing behavior near RH = 100%, controlled by the (100 − RH)⁻¹ term. This implies a moderate increase in g over most of the RH range, which steepens as the RH value approaches 100%. The steepness of the hygroscopic growth as a function of humidity is modulated by κ. Therefore, when one wishes to study changes in hygroscopic growth due to perturbations in the RH field, one needs to know both the perturbation and the background RH. Recent studies of the RH spatial distribution within cloud fields showed that RH decays exponentially from values around 100% when moving away from cloud edges. The e-folding distance scale is relatively short, reaching a background value within a few hundred meters of the clouds. Therefore, the significant hygroscopic growth is limited to a thin belt (on a scale of hundreds of meters) around clouds, while the rest of the field exhibits relatively small spatial RH variations around the background value. These results are supported by a few in situ measurements of RH and specific humidity near clouds (Twohy et al 2009, Wang and Geerts 2010). The lower atmosphere is likely to contain most of the aerosol mass (95% up to 2 km from the surface, Blanchard and Woodcock 1980), with the possible exception of long-range transport cases.
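Equation (1) can be evaluated directly. As a small illustration (not part of the original analysis), the sketch below computes g for the two hygroscopicity values used later in this paper (κ = 0.3 and 0.7) at a few representative RH values; the chosen RH values are illustrative:

```python
def growth_factor(rh_percent, kappa):
    """Kappa parameterization of hygroscopic growth (Petters and Kreidenweis 2007):
    g = (1 + kappa * RH / (100 - RH))**(1/3), with RH in percent (< 100)."""
    if not 0 <= rh_percent < 100:
        raise ValueError("RH must be in [0, 100)")
    return (1.0 + kappa * rh_percent / (100.0 - rh_percent)) ** (1.0 / 3.0)

# Illustrative values: roughly the continental/maritime mean RH reported below
# (~70% and ~77%), plus a higher value to show the steepening near saturation.
for kappa in (0.3, 0.7):
    for rh in (70.0, 77.0, 95.0):
        print(f"kappa={kappa}, RH={rh:.0f}%: g = {growth_factor(rh, kappa):.3f}")
```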
Moreover, it was also shown that the aerosols in this layer of the atmosphere have the highest hygroscopicity (κ) values, due to a larger contribution of marine hygroscopic aerosols (Pringle et al 2010). In this study we wish to explore two effects related to the link between RH and AOD, using radiosonde measurements. First, we estimate the expected RH background and variance values in the lower cloudy atmosphere (LCA). This provides an estimate for the additional source of variance in AOD values besides the natural aerosol concentration variability. Such estimates are important for both direct and indirect effect calculations. Variance, as opposed to bias, should converge to the mean for a large enough dataset. Therefore the variance does not provide an estimate for a possible systematic shift in AOD values as a function of cloud vertical development (related to cloud invigoration by an aerosol effect). In the second part of this letter, we divide the data into shallow versus thick cloudy layers and estimate the range of systematic shifts in RH values and the derived AOD related to cloud thickness. This provides the much needed range of RH-related biases (namely, biases not linked to aerosol microphysical effects) in studies of cloud invigoration by aerosols.

Methods and data

A 12 year (2000-2011, June-August) dataset of radiosonde measurements (Durre et al 2006) from 13 registered World Meteorological Organization (WMO) stations (seven continental and six maritime) is analyzed in this study. All data are obtained from the Atmospheric Sounding dataset of the University of Wyoming (http://weather.uwyo.edu/upperair/sounding.html). The radiosonde data parameters used in this study include vertical profiles of height (m), temperature (T), dew point temperature (Td) and relative humidity (RH). The accuracy of radiosonde water vapor measurements, used for RH estimations, has improved since the year 2000 and is estimated as 6-8%. The analyzed atmospheric profiles are classified into three types: (1) 'inside a low-level-cloudy profile': the low part of the profile (below 2 km) is measured inside a cloud and is characterized by an RH > 99%; (2) 'potentially cloudy atmosphere profile': part of the profile located above the calculated lifting condensation level (LCL, explained in detail below) indicates possible cloud formation; and (3) 'cloud-free profile': the profile does not support cloud formation conditions in the lower atmosphere. In this study, we focus on subset number 2, namely the profiles that allow cloud formation yet are not measured inside clouds. Such profiles represent the environment in the vicinity of clouds, in which aerosol properties are retrieved from space. The vast statistics of these profiles should represent well the mean background values of RH over the selected locations. The radiosonde profiles are analyzed to characterize the RH in the LCA in layers of 1 and 2 km depth above the surface. For estimating the possible contribution of the humidification effect to the observed correlations between AOD and cloud vertical extent, one should look for the corresponding changes in RH values as a function of cloud vertical extent. Therefore, we use the radiosonde profile information to estimate the thickness of the potentially cloudy layer. For each radiosonde profile, the lowest convective cloudy layer is determined between the lifting condensation level (LCL, Bolton 1980) and the equilibrium level (EL).
The LCL is defined as the height at which an air parcel (having the average properties of the lowest 500 m of the atmosphere) reaches saturation when it is cooled according to the dry adiabatic lapse rate. We chose the subset of profiles with LCL < 2 km and no stable layers located below it. The EL is the height above the LCL where the temperature of a buoyantly rising moist parcel becomes equal to the temperature of the environment. In case the parcel does not rise above a level of free convection (i.e., it remains colder than the environment), the top of the potentially cloudy layer is determined as the base of the lowest inversion layer located above the LCL. Due to the radiosonde sampling resolution, the minimal depth of a cloudy layer is chosen to be 75 m. For estimation of the RH difference correlated with variations in cloud thickness, each station dataset is divided into two subsets of shallower and deeper clouds, containing equal numbers of samples. The total number of cloudy profiles in the dataset ranges between 374 (in Manaus) and 959 (Hilo); for full details about all stations see table 1 in the supporting materials (available at stacks.iop.org/ERL/8/034025/mmedia). The radiative transfer calculations are performed using the spherical harmonic discrete ordinate method (SHDOM, Evans 1998). Two types of aerosols are simulated: aerosol dominated by sea salt, with a high κ of 0.7, and biomass-burning-dominated aerosol, with a κ of 0.3 (Andreae and Rosenfeld 2008). The initial dry aerosol size distribution is set to be a bimodal log-normal distribution comprising fine mode and coarse mode aerosol distributions. The fine mode (coarse mode) geometric mean radius is 0.06 (0.6) µm, the log-standard deviation is 0.7 (0.6) and the total mass content 5 (50) µg m−3.

Results

First we characterize the mean RH ($\overline{\mathrm{RH}}$) in the lower cloudy atmosphere (LCA) and its standard deviation values (σ_RH). These two moments provide a good approximation for the background value in these cloud fields. Figure 1 shows the results for the 13 globally distributed stations, during the months June-July-August, separately for layers extending 1 and 2 km above the surface. The 1 km layer results show a $\overline{\mathrm{RH}}_{1\,\mathrm{km}}$ value of 77% for the maritime stations, with σ_RH,1km around 9%, and a mean value of 70% (with a σ_RH,1km of 11%) for the continental stations. Full details about the mean and variance values for all stations are presented in table 2 in the supporting materials (available at stacks.iop.org/ERL/8/034025/mmedia). It can be seen that the variance per station (a specific geographic location over a limited period) is relatively narrow. Moreover, a clear negative correlation between $\overline{\mathrm{RH}}$ and σ_RH appears in the two graphs. The potentially cloudy layer thickness is used in this work as a measure of the clouds' vertical extent. It is well known that this layer depth does not represent the vertical extent of all the clouds in the field, since convective clouds tend to 'overshoot' the equilibrium level. Nevertheless, the potentially cloudy layer depth is strongly correlated with the mean vertical extent of the clouds in the field (North and Erukhimova 2009). The distributions of cloudy layer thickness (per station) can be approximated by a normal distribution. The mean values range between 1000 and 2400 m in the maritime stations, and between 1300 and 5500 m in the continental stations (see table 1 in the supporting materials).
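For orientation only, the LCL step described above can be roughly approximated with the common rule of thumb of about 125 m of LCL height per degree of dewpoint depression, a simplification of the Bolton (1980) formulation actually used in the paper. The sketch below also splits cloudy-layer thicknesses into the equal-size shallow/deep subsets used per station; all input values are hypothetical:

```python
import statistics

def lcl_height_approx(t_surface_c, t_dewpoint_c):
    """Approximate lifting condensation level height (m) above the surface,
    using the ~125 m per deg C dewpoint-depression rule of thumb.
    A simplification, not the paper's exact Bolton (1980) procedure."""
    return 125.0 * max(t_surface_c - t_dewpoint_c, 0.0)

def split_by_thickness(layer_thicknesses_m):
    """Divide cloudy-layer thicknesses into 'shallow' and 'deep' subsets of
    (roughly) equal size, as done per station in the paper."""
    median = statistics.median(layer_thicknesses_m)
    shallow = [h for h in layer_thicknesses_m if h <= median]
    deep = [h for h in layer_thicknesses_m if h > median]
    return shallow, deep

# Hypothetical example values
print(lcl_height_approx(28.0, 22.0))                      # ~750 m
print(split_by_thickness([800, 1200, 2100, 3400, 950, 2800]))
```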
Figure 2 shows the differences between the mean RH values of the deeper and shallower cloudy layer subsets ($\Delta\overline{\mathrm{RH}}$), as a function of $\overline{\mathrm{RH}}$, for all the stations, in layers of 1 and 2 km above the surface (see table 2 in the supporting materials). The results show that for a layer of 1 km (2 km) depth, the average $\Delta\overline{\mathrm{RH}}$ is 3% (6%) around a mean RH of 77% (75%) for the maritime stations, and $\Delta\overline{\mathrm{RH}}$ is 3% (3%) around a mean RH of 70% (74%) for the continental stations. Using the above results, we examined the partial contribution of aerosol humidification to the observed correlations between cloud vertical extent and AOD. Radiative transfer calculations reveal that the combination of $\Delta\overline{\mathrm{RH}}$ and $\overline{\mathrm{RH}}$ can lead to a maximal AOD increase of 6% and 11% for 1 km and 2 km, respectively, in the maritime stations, when the aerosol hygroscopicity is taken to be relatively low (κ = 0.3). A similar AOD increase range of 6% and 11% for 1 km and 2 km, respectively, is found when the aerosol is considered hygroscopic (κ = 0.7), which is more typical of the maritime stations. Calculations for the continental stations reveal a smaller effect on the AOD, with an increase of 4% and 5% for 1 km and 2 km layers, respectively, for κ = 0.3, and 5% and 4% for κ = 0.7. These results suggest a much smaller AOD effect from RH differences between the vicinity of thicker versus thinner clouds (figure 2) compared to the natural day-to-day variance (figure 1).

Discussion and summary

Hygroscopic growth of aerosols in a humid environment may change their physical and hence optical properties. Remote sensing studies that examine aerosol-cloud interactions use the AOD as a measure of CCN concentration. The hygroscopic growth, which is controlled by the environmental RH, influences the AOD (and hence the estimated CCN concentration). In this study, the RH in the lower maritime and continental cloudy atmosphere is characterized in order to evaluate the humidification impact on the AOD. The optical impact depends on two parameters: the differences around a given mean RH, and the mean RH itself. To estimate the possible range of the optical effect, we estimate the RH mean and variance for 13 stations (June-August) using 12 years of an atmospheric sounding dataset, which is to the best of our knowledge the most extensive and reliable in situ measurement source for RH vertical profiles. The aim of limiting the data to the boreal summer is the need to reduce the meteorological variability as much as possible. The 1 km layer results show a mean RH value of 77% for the maritime stations, with σ_RH around 9%, and a mean value of 70% (with σ_RH of 11%) for the continental stations. The results show that the realistic RH range per geographic location (per station) in a specific season is relatively narrow. Moreover, the negative correlation found between the stations' mean RH and the corresponding σ_RH acts to limit the effect of changes in RH on measured AOD, such that in regions with relatively high background RH values the σ_RH is low, and vice versa. The maximal expected effect on the AOD due to day-to-day variance is on the order of 28% (in the 2 km layer), with a mean of 20%. This provides an estimate for the additional source of variance in AOD values on top of the natural variance in aerosol concentration and properties. Variance is not a measure of a systematic bias; for large enough statistics, the results should converge to the mean value.
Therefore, we next aimed to estimate biases in the RH effect on AOD linked to the vertical development of clouds, by sub-setting the data into potentially shallow and thicker clouds. The estimated bias in AOD is shown to be around 11% for the maritime stations and 5% for the continental stations. This maximal effect of around 10% on the AOD is important and should be accounted for in both direct and indirect aerosol effect studies. However, in most studies of cloud invigoration by aerosols, the differences in AOD values between clean and polluted conditions are more than a few hundred per cent (Small et al 2011, Ten Hoeve et al 2012). This is an order of magnitude higher than the humidification effect on AOD shown here (see a discussion of this issue in Boucher and Quaas 2013, Koren et al 2013). Another source of error in examining the link between aerosol and cloud properties is the large meteorological variability that controls cloud properties. To minimize this meteorological variance, the data can be further divided (in addition to the seasonal division) according to key meteorological parameters that are best correlated with cloud properties. Such classification of the data is expected to significantly reduce the humidification effect on the AOD, since it creates subsets of similar meteorological conditions with similar mean RH values. In this study, for example, when dividing the dataset into two groups according to the potentially cloudy layer thickness, the estimated bias in AOD due to differences in RH values between thick and thin cloud environments is reduced by more than half (5% for the maritime and 3% for the continental stations). When atmospheric sounding data are available, such a method may offer a straightforward meteorological classification that can significantly improve future aerosol-cloud interaction studies, reducing biases due to both meteorological variance and aerosol humidification effects.
2016-04-23T08:45:58.166Z
2013-09-01T00:00:00.000
{ "year": 2013, "sha1": "6b3dfe10e93503d068587dc37df9912d77cb4191", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1748-9326/8/3/034025", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "7c443cb5c11fd4758f97dd2339514c97cd8cf9c7", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
59029471
pes2o/s2orc
v3-fos-license
Transdermal delivery of kojic acid from microemulgel

Article history: Received on: 02/01/2016; Revised on: 29/01/2016; Accepted on: 19/02/2016; Available online: 30/03/2016

The aim of this study was to develop a microemulgel for skin delivery of kojic acid. Microemulsions (ME) containing either oleic acid (OA) or caprylic/capric triglycerides as the basic oily components were developed after construction of pseudo-ternary phase diagrams. Tween 80 was used as surfactant for the oleic acid system, both in the presence and absence of ethanol or propylene glycol (PG) as cosurfactants. For the caprylic/capric systems, Tween 85 was the surfactant, in the presence or absence of ethanol as cosurfactant. Selected ME formulations were tested for transdermal delivery of kojic acid both in the fluid state and after transformation into gel. Incorporation of cosurfactants expanded the microemulsion zone. The cosurfactant free ME were more viscous. Incorporation of kojic acid in the ME systems increased the transdermal flux compared to a saturated aqueous solution. Caprylic/capric ME were more efficient than (OA) based ME. Transformation of the tested ME systems into gel still produced significant enhancement in transdermal drug delivery compared with the saturated aqueous drug solution. However, the data revealed superior efficacy for the fluid ME systems over the corresponding microemulgel. In conclusion, both (OA) and caprylic/capric ME were promising for dermal and transdermal delivery of kojic acid, even after gel formation.

INTRODUCTION

Kojic acid (5-hydroxy-2-hydroxymethyl-4H-pyran-4-one) is a depigmenting agent obtained from rice fermentation (Burdock et al., 2001). It is a natural antibiotic produced by various bacterial or fungal strains such as Aspergillus oryzae, Penicillium or Acetobacter spp. (Bentley, 2006; Brtko et al., 2004; Burdock et al., 2001). It exerts its effect by competitive inhibition of the tyrosinase enzyme, owing to its ability to chelate the copper ion at the active site. It is thus considered a slow binding inhibitor of the diphenolase activity of tyrosinase, resulting in an antimelanogenic action (Cabanes et al., 1994). Unfortunately, the hydrophilic nature of kojic acid limits its ability to penetrate through the stratum corneum of the skin, which is the first step for delivering the drug to the deeper skin strata. This hinders the delivery of the drug to the target sites, the melanocytes, which are localized at the dermal/epidermal border (Curto et al., 1999; Yamaguchi et al., 2007). Many strategies have been employed to enhance transdermal delivery of hydrophilic drugs. These include the use of permeation enhancers (Cornwell and Barry, 1994; Phillips and Michniak, 1995; Sinha and Kaur, 2000; Trommer and Neubert, 2006; Walker and Smith, 1996), employing vesicular drug delivery systems (El Maghraby et al., 2001; Ntimenou et al., 2012), delivering hydrophilic drugs using microneedles (Oh et al., 2008), or augmenting transdermal flux using laser systems (Gómez et al., 2008; Lee et al., 2001). Microemulsion provides another promising alternative for transdermal delivery of hydrophilic drugs (Cui et al., 2011; Hosmer et al., 2009; Kreilgaard et al., 2000; Zhang and Michniak-Kohn, 2011). A microemulsion is a transparent, thermodynamically stable, single optically isotropic liquid system of water, oil and surfactants (Danielson and Lindmann, 1981). Microemulsions can be considered ideal systems for delivery of both hydrophilic and hydrophobic drugs, with the advantage of being thermodynamically stable.
They have been shown to enhance skin penetration of drugs via different mechanisms (Heuschkel et al., 2008; Kogan and Garti, 2006). The benefit of a microemulsion can be even greater if the selected oily phase has skin penetration enhancement ability. Accordingly, the objective of this study was to investigate the efficacy of penetration enhancer containing microemulsions as skin drug delivery systems for kojic acid. Oleic acid (OA) or caprylic/capric triglycerides (CP) were selected as the oil phase in microemulsion preparation. The study was extended to incorporate the optimum microemulsion formulations into a gel based system.

Construction of pseudo-ternary phase diagrams

Oleic acid (OA) and caprylic/capric triglycerides (CP) were selected as the oily phase. Tween 80 was used as surfactant for (OA) and Tween 85 was selected as surfactant for (CP). These selections were based on the miscibility of the surfactant with the corresponding oil. Ethanol was used as a cosurfactant in both cases, with propylene glycol being employed as cosurfactant in the case of oleic acid only; propylene glycol was not employed as a cosurfactant for (CP) due to its poor miscibility with (CP). Cosurfactant was mixed with surfactant at a ratio of 1:1 (w/w). This ratio was selected after solubilizing the highest concentration of water on titration of a 1:1 mixture of the oil with the surfactant/cosurfactant system (Alany et al., 2001). Pseudo-ternary phase diagrams were constructed at ambient temperature using the water titration method (Chen et al., 2004; El Maghraby, 2008). For each phase diagram, mixtures of oil and surfactant or surfactant/cosurfactant were prepared at weight ratios of 0.5:9.5, 1:9, 1.5:8.5, 2:8, 2.5:7.5, 3:7, 4:6, 5:5, 6:4, 7:3, 8:2 and 9:1. These mixtures were titrated with water under magnetic stirring. The resultant mixtures were characterized visually after equilibration, with transparent fluid systems being considered microemulsions (El Maghraby, 2008; El Maghraby et al., 2014). Highly viscous non-fluid systems were considered gels (El Maghraby, 2008).

Preparation of microemulsions

The compositions of the tested microemulsion formulations are presented in Table 1. Formulations containing fixed concentrations of oil and water were employed in the current study. This allowed investigation of the effect of different variables, which included the type of oil, the presence or absence of cosurfactant, and the type of cosurfactant used. The selected microemulsion formulations were prepared by simply mixing the oil with the surfactant or surfactant/cosurfactant mixture with the aid of magnetic stirring. The required amount of water was then added while mixing. Excess drug was added to prepare saturated drug solutions, with excess crystals being included to maintain saturation. These systems were equilibrated by continuous mixing in a water bath maintained at 32 °C for 72 h before the skin permeation studies (El Maghraby, 2008). Addition of excess drug did not lead to any phase change.

Characterization of the selected microemulsion formulations

The viscosity of the selected formulations was determined using an RVDVE Brookfield viscometer, version 1.1, with spindle 92 at 50 rpm (Brookfield Engineering Laboratories Inc., Stoughton, MA, USA). To determine the saturation solubility of the drug in the different formulations, excess drug was added and the mixtures were equilibrated in a thermostated shaking water bath maintained at 32 °C for 72 hours.
Excess drug was separated by centrifugation and the supernatant was suitably diluted before HPLC analysis. The physical stability was assessed by subjecting the selected microemulsion formulations to different stress conditions. These included centrifugation of the formulations at 4000 rpm for 15 min followed by visual inspection for any phase separation (Chen et al., 2007). The tested formulations were also subjected to three freeze-thaw cycles. Each cycle included storage of the formulation at −20 °C for 24 hours followed by 24 hours of storage at 25 °C (Brime et al., 2002). The systems were then visually evaluated for any sign of phase change. The pH values of these formulations were measured using a digital pH meter (Jenway 3310, UK). The refractive index of the microemulsions was measured by a digital refractometer (Atago, Tokyo, Japan). All the tested microemulsion formulations were characterized using conductimetric measurements. Water was replaced with 0.8% w/v aqueous sodium chloride to allow conductivity measurement (El Maghraby et al., 2014). The electrical conductivity of these formulations was recorded by an electrical conductivity meter (HANNA HI 8733, Michigan, USA).

Preparation of microemulgel formulations

Various gelling agents, namely xanthan gum, sodium alginate, hydroxypropylmethyl cellulose (Methocel E5, E15 & K15), carrageenan and Aerosil 200 (colloidal silicon dioxide), were evaluated for their ability to gel the different microemulsion formulations. The gelling agent was dispersed slowly in the supersaturated microemulsion formulations with the aid of overhead stirring. Of these agents, Aerosil 200 was the most suitable gelling agent for the tested systems. The concentration of Aerosil 200 used to gel each ME formula is presented in Table 1.

In vitro drug release

The in vitro drug release was studied using vertical glass Franz diffusion cells. These cells have a diffusional surface area of 2.27 cm², with the volume of the receptor compartment being 14 ml. A semipermeable membrane (cellulose tubing, molecular weight cut-off 12,000 Da, Sigma Diagnostics, St. Louis, MO, USA) was mounted between the donor and receptor compartments of these cells. The release studies were conducted at 32 ± 1 °C to mimic the skin permeation studies. This was achieved by incubation of the diffusion cells in a thermostatically controlled water bath. The receptor compartments were filled with distilled water, which was used as the receptor fluid. The cells were left to equilibrate to the required temperature. The tested formulations (2 ml) were loaded into the donor compartments, which were occluded with Parafilm. Receptor samples were taken at predetermined time intervals and replaced with fresh receptor. The drug content in each sample was determined by HPLC analysis (see below). The cumulative amount of kojic acid released was calculated and plotted as a function of time to produce the release profiles. These profiles were used to determine the release rate.

Preparation of skin samples

The study utilized freshly excised full thickness skin obtained from the rabbit ear. This model has been successfully adopted to investigate the skin delivery of a variety of drugs, including hydrophilic and lipophilic drugs (El Maghraby, 2010; El Maghraby et al., 2014; Nicoli et al., 2008). Freshly excised ears of male rabbits weighing 2-3 kg were used. The full thickness skin was simply peeled from the underlying cartilage after cutting along the tips of the ears. The skin was cut into sections and used immediately.
Skin permeation studies

The study employed the same setup as the release experiments. The skin samples were mounted between the donor and receptor compartments with the stratum corneum side uppermost. The receptor compartments were filled with distilled water as the receptor fluid and incubated in a thermostated water bath adjusted to ensure that the skin surface was maintained at 32 ± 1 °C to mimic in vivo conditions. The whole assembly was left to equilibrate overnight. The tested formulations (2 ml) were loaded into the donor compartments, which were then occluded as before. Receptor samples were taken periodically and replaced with fresh receptor. The amount of drug permeated was determined by HPLC analysis of each sample. A saturated aqueous solution of the drug, containing excess drug crystals to maintain saturation, was used as the control. All the tested formulations contained the drug at saturation, with excess crystals being included as well. This allowed investigation of skin permeation at equal thermodynamic activity in all systems and provided a better chance of detecting any effect of the formulation variables on skin permeation (El Maghraby, 2008, 2010; El Maghraby et al., 2014).

Chromatography

The concentration of kojic acid in each sample was determined using a high pressure liquid chromatograph (Shimadzu LC20-20A, Japan) equipped with a dual wavelength UV/visible detector and an auto-sampler unit for injection. Shimadzu LC Solution software v 1.24 SP1 was employed for chromatographic data collection and handling. Separation was conducted on a reversed phase column, Thermo HYPERSIL C18 (150 × 4.6 mm, 5 µm). The mobile phase was a filtered, degassed phosphoric acid solution, prepared by diluting 0.7 ml of phosphoric acid to 1000 ml with distilled water. This was pumped at a flow rate of 0.8 ml/min, with the effluent being monitored at 270 nm. Samples were suitably diluted with distilled water before injecting 30 µl.

Data analysis

The skin permeation profiles were obtained by plotting the cumulative amounts of the drug recorded in the receptor as a function of time. The obtained profiles were typical steady-state plots, which are expected after occlusive application of saturated systems (Figure 1). The transdermal drug flux was obtained from the slope of the regression line fitted to the linear portion of the permeation profile; extrapolation of this line intercepts the x-axis at a time equal to the lag time (El Maghraby, 2008, 2010, 2012a; El Maghraby et al., 2014). The Student's t-test was used for statistical analysis.

Pseudo-ternary phase diagrams

Oleic acid (OA) and medium chain glycerides were separately utilized as the oily phase, as they have been successfully used to enhance the transdermal delivery of hydrophilic drugs like kojic acid (Hosmer et al., 2009; Okumura et al., 1991; Ongpipattanakul et al., 1991; Sinha and Kaur, 2000; Tanojo et al., 1997). For the (OA) based systems, Tween 80 was used as the surfactant in the presence or absence of propylene glycol or ethanol as cosurfactants. For the (CP) ME, Tween 85 was used as the surfactant in the presence or absence of ethanol as a cosurfactant. The selection of surfactant and cosurfactant was based on the ability of these materials to solubilize large amounts of water when mixed with the corresponding oil (Alany et al., 2001; El Maghraby, 2008). Figures 2 and 3 show the pseudo-ternary phase diagrams of the two systems in the presence and absence of different cosurfactants.
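As a minimal sketch of the flux and lag-time estimation described under Data analysis above (with made-up sample times and cumulative amounts, not the study's data), a linear fit to the steady-state portion of the profile gives the flux as the slope and the lag time as the x-intercept:

```python
import numpy as np

# Hypothetical cumulative permeation data: time (h) vs cumulative amount (ug/cm^2),
# restricted to the linear (steady-state) portion of the profile.
t = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
q = np.array([15.0, 48.0, 80.0, 113.0, 145.0, 178.0])

slope, intercept = np.polyfit(t, q, 1)   # least-squares straight line
flux = slope                              # transdermal flux, ug/cm^2/h
lag_time = -intercept / slope             # x-intercept of the regression line, h

print(f"flux = {flux:.1f} ug/cm2/h, lag time = {lag_time:.2f} h")
```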
For oleic acid based systems, Tween 80 was able to form a microemulsion in the absence of cosurfactants (Figure 2a). The microemulsion zone occupied about 16% of the total area of the phase diagram. The gel phase occupied about 15% of the total area, with the rest of the phase diagram being identified as coarse emulsion (Figure 2a). The maximum amount of water that could be solubilized in the microemulsion was 26%. This was achieved only at very low oil concentration and was reduced progressively with increasing oil concentration. With respect to the fluidity of the microemulsion, there was a reduction in fluidity with increasing water concentration in the system. Further increase in water concentration resulted in a phase change, with the system changing to a gel structure, especially at high surfactant concentration. Further dilution resulted in coarse emulsion formation. Similar phase behavior was recorded for the same oil (El Maghraby, 2012a). Incorporation of propylene glycol as cosurfactant increased the maximum amount of water that could be incorporated in the microemulsion to about 70%, compared to 26% in the cosurfactant free system at the lowest concentration of oil. The area occupied by the microemulsion zone was increased to 17%, with the gel phase occupying only 7% of the total area of the phase diagram (Figure 2b). Replacing propylene glycol with ethanol resulted in a further increase in the area occupied by the ME zone, to 34%. This effect was associated with complete absence of the gel phase from the phase diagram (Figure 2c). Incorporation of short chain alcohols as cosurfactants was previously shown to increase the ability of the system to accommodate water and to disrupt the gel structure (Alany et al., 2000; El Maghraby, 2008). For (CP) based systems, Tween 85 was able to form a microemulsion in the absence of cosurfactants (Figure 3a). The microemulsion zone occupied about 20% and the gel phase about 48% of the total area of the pseudo-ternary phase diagram, with the rest of the phase diagram being characterized as coarse emulsion. This finding is expected on the basis that a gel structure is more likely to dominate in medium chain glyceride microemulsions at high surfactant concentration (Prajapati et al., 2012). At very low oil concentration, the maximum amount of water that could be incorporated in a microemulsion system was 16%. Incorporation of ethanol as cosurfactant in this system resulted in a further increase in the area occupied by the ME zone, to 25% compared to 20% in the cosurfactant free system. In addition, the cosurfactant containing system did not form any gel structure (Figure 3b). Breaking of the liquid crystalline and gel structure of the ternary system was previously recorded after addition of short chain alcohols, including ethanol, as cosurfactants (Alany et al., 2001; El Maghraby, 2008). The area occupied by the microemulsion zone in the phase diagram depends on the physicochemical properties of the oil phase and the type of surfactant, with some essential conditions required for microemulsion formation. These conditions include the presence of a very low surface tension at the oil-water interface and the existence of a non-viscous surfactant film at the water-oil interface. Penetration and association of oil molecules with the interfacial surfactant film is also required (Schulman et al., 1959).
Based on this, the miscibility of the surfactant with the oil phase can be taken as an initial indication of the suitability of the surfactant for formulation of a microemulsion using this oil. The presence of a fluidizing group, such as a double bond in the lipophilic chain of the surfactant, can provide a chance for microemulsion formation using this surfactant as a single surfactant. This explains the ability of Tween 80 to form a microemulsion with oleic acid, due to its miscibility with the oil and the presence of an unsaturated acyl chain. The ability of the cosurfactant to increase the area occupied by the microemulsion zone can be explained by its ability to reduce the surface tension, together with a high capacity to increase the fluidity of the interfacial film. This explanation agrees with previously published reports on the use of short chain alcohols as cosurfactants (Aboofazeli and Lawrence, 1994; Stilbs et al., 1983).

Characterization of the selected microemulsion formulations

The tested microemulsion formulations were selected so that all formulations contained the same concentrations of oil and water (15% of each), with the rest of the formulation being the surfactant/cosurfactant system. This selection thus allowed investigation of the effect of oleic acid and caprylic/capric triglycerides as oils on the skin delivery of kojic acid from microemulsions. The selected formulations were characterized with respect to viscosity, drug solubility, electrical conductivity, pH and refractive index. These parameters are presented in Table 2. The viscosity of the tested formulations depended on their composition, with cosurfactant free systems being more viscous than the corresponding cosurfactant containing preparations. This trend was recorded both for the (OA) based systems and the (CP) based systems (Table 2). Similar behavior was recorded for the viscosity of microemulsion systems after incorporation of short chain alcohols of 3-4 carbon atoms (Alany et al., 2000; El Maghraby, 2008). Preparation of the microemulgel resulted in a significant increase in viscosity, which is expected after addition of the gelling agent (Table 2). With respect to the solubility of the drug in the microemulsion systems, there was a dependence on the composition of the formulation, with those containing ethanol or propylene glycol dissolving greater amounts of the drug compared with the corresponding cosurfactant free microemulsions. However, the overall solubility was lower than the drug solubility in water, as expected for such a hydrophilic drug (Table 2). Table 2 also presents the pH values of the different ME formulations; all formulations had pH values in the range of 5-5.9. The refractive index results, likewise presented in Table 2, were between 1.4219 and 1.4606 for all formulations. The electrical conductivity of all formulations was measured with the goal of determining the type of microemulsion. The tested cosurfactant free formulations exhibited relatively low electrical conductivity values, indicating that these systems are of the W/O microemulsion type (Table 2). This is expected, taking into consideration that the water content of the formulations was 15% w/w. Despite the fixed water content, the cosurfactant containing systems revealed higher conductivity values compared with the corresponding cosurfactant free formulations; the difference does not imply phase inversion, but suggests possible formation of bicontinuous systems.
The selected microemulsion formulations were tested for physical stability by subjecting them to different stress conditions. Upon centrifugation of the microemulsion formulations at 4000 rpm for 15 min, no phase separation was noticed by visual inspection in any formulation. Also, no sign of phase change was noticed visually after subjecting all the formulations to three successive freeze-thaw cycles. These findings reflect the physical stability of the formulations.

In vitro drug release

The in vitro release of kojic acid was monitored using a semipermeable membrane, with the experimental conditions adjusted to match the skin permeation conditions. This allowed correlation between the release data and the skin permeation data (El Maghraby, 2008, 2010). The in vitro release profiles of kojic acid obtained from the different ME formulations in the fluid and gel states are shown in Figure 4.

Fig. 4: The in vitro release profiles of kojic acid from microemulsions (a) and the corresponding microemulgel formulations (b). Formulation details are in Table 1.

Note: the saturation solubility of kojic acid in water was 55.8 (0.87) mg/ml at 32 ºC; values between brackets are SD (n = 3).

The apparent release kinetic model was determined for each microemulsion and gel formulation. This involved fitting the release data to zero order, first order and Higuchi equations before comparing the correlation coefficients obtained from the linear regression of each model. The drug release data were best fitted by Higuchi kinetics (Figure 4). This finding was expected for the gelled microemulsions but was unexpected for the fluid systems. The release kinetics of the drug from the fluid ME can be explained on the basis that the ME was of the w/o type, which entraps kojic acid in its internal aqueous phase. This means that the drug has to diffuse through the oily phase before being released from the whole system; drug diffusion through the oily phase is thus the rate-limiting step in release from the microemulsions, which may explain the recorded kinetic model for drug release from the fluid formulations. Similar release kinetics were recorded for other drugs from microemulsion systems (Chauhan et al., 2013; Panapisal et al.; Okur et al., 2014). With respect to the drug release rate, there was a dependence on the composition of the tested formulation and on the solubility of the drug in that formulation, which can affect the concentration gradient in the release study through artificial membranes (Table 3). Accordingly, the caprylic/capric ethanol ME system liberated the drug at the fastest rate, followed by the oleic acid ethanol microemulsion, the oleic acid PG ME, the caprylic/capric ME and the oleic acid ME. This release rate order correlates with the solubilizing power of the ME for kojic acid (Table 2). Formulation of the ME in gel form resulted in a significant reduction in the release rate compared with the corresponding fluid formulation (Table 3). This is expected, taking into consideration the increase in the viscosity of the formulation after gel formation. The kinetics of drug release followed matrix diffusion release kinetics, which is expected with a gel matrix. A similar release pattern was recorded for other drugs from a gel matrix after incorporation of microemulsion into the gel structure (Chudasama et al., 2011; Cojocaru et al., 2015).

Skin permeation of kojic acid from tested formulations

Full thickness skin obtained from the inner side of freshly excised rabbit ears was used in this study.
This skin has been successfully utilized to monitor the skin permeation of a variety of drugs from different delivery systems, including microemulsions (Corbo et al., 1990; El Maghraby, 2010; Touitou et al., 2000). This skin model was also shown to be a suitable barrier for skin permeation studies of both lipophilic and hydrophilic materials (Nicoli et al., 2008). The permeation profiles are shown in Figure 1 and the calculated transdermal permeation parameters are presented in Table 3. Application of the drug as an aqueous solution (control) resulted in a very low permeation rate, which is clear from the recorded transdermal flux value (Table 3). This is expected, taking into consideration the hydrophilic nature of kojic acid, which has a very low partition coefficient (log P = −2.45). Other investigators have recorded poor skin permeation for the same drug after application of an aqueous solution (Oliveira et al., 2010). Incorporation of kojic acid in the different microemulsion formulations significantly increased the transdermal drug flux compared with the saturated aqueous control (Table 3). The efficacy of the microemulsion formulations depended on their composition. For the (OA) based systems, the basic formulation comprised the oil with Tween 80 and water. This formulation was considered the prototype and was modified by incorporation of either propylene glycol or ethanol as cosurfactant. Incorporation of propylene glycol as cosurfactant in the microemulsion resulted in a significant increase in the transdermal flux of the drug (p < 0.05) compared with the prototype formulation (Table 3). Replacing propylene glycol with ethanol in this formulation also resulted in a significant alteration of the transdermal flux compared with the prototype formulation (p < 0.05). The recorded result with respect to the effect of ethanol is similar to that recorded for the same cosurfactant in a eucalyptus oil microemulsion (El Maghraby, 2008). The potentiating effect obtained after incorporation of propylene glycol can be explained on the basis of the synergism between oleic acid and propylene glycol: a synergistic transdermal penetration enhancing effect was recorded after mixing oleic acid with propylene glycol, and the same pattern was recorded for this combination in a microemulsion formulation (Barry, 1987; El Maghraby, 2012a; Tanojo et al., 1997). With respect to the (CP) based system, the basic formulation comprised the oil with Tween 85 and water. This formulation was considered the prototype and was modified by incorporation of ethanol as cosurfactant. This system was even more efficient than that containing oleic acid with respect to enhancing kojic acid transdermal delivery. Incorporation of ethanol as cosurfactant in the microemulsion resulted in a trend of increased transdermal flux of the drug compared with the prototype formulation (Table 3). Incorporation of the microemulsion formulations into the gel matrix resulted in a reduction in the transdermal flux of the drug compared with the corresponding fluid microemulsions, but the microemulgel systems still delivered the drug through the skin at a significantly higher rate than the aqueous control. The superiority of the fluid systems over the corresponding gel phase systems is expected, as the former can provide intimate contact between the colloidal structure and the microarchitecture of the skin surface. The efficiency of the tested microemulgel systems correlated with the efficacy of the corresponding fluid microemulsions with respect to rank order.
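To illustrate how such formulation comparisons are typically quantified (an enhancement ratio versus the saturated aqueous control, and a Student's t-test as used above), here is a minimal sketch with invented flux replicates; the numbers are not from this study:

```python
import numpy as np
from scipy import stats

# Hypothetical transdermal flux replicates (ug/cm^2/h), n = 3 each.
flux_control = np.array([4.8, 5.3, 5.1])        # saturated aqueous solution
flux_formula = np.array([21.0, 19.4, 22.7])     # e.g. a caprylic/capric ME

enhancement_ratio = flux_formula.mean() / flux_control.mean()
t_stat, p_value = stats.ttest_ind(flux_formula, flux_control)

print(f"ER = {enhancement_ratio:.1f}, t = {t_stat:.2f}, p = {p_value:.3g}")
```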
The mechanisms of enhanced transdermal drug delivery from microemulsions have been reviewed, with different possible mechanisms being suggested (El Maghraby, 2012b). The high drug loading capacity of microemulsions was considered the first possible mechanism (Kreilgaard et al., 2000). However, this mechanism applies to lipophilic drugs suffering from poor solubility and does not apply to the current study, which deals with a freely water soluble drug. The second hypothesis is the possibility of supersaturation, which results from phase transition of the microemulsion after application on the skin. This can modulate the thermodynamic activity and hence the driving force for transdermal drug transfer (Kemken et al., 1992). The third possible mechanism depends on the ability of the microemulsion droplets to come into intimate contact with the microenvironment of the skin surface, due to their very small droplet size and very low surface tension. This mechanism explains the superiority of the fluid formulations over the microemulgels (El Maghraby, 2008). The fourth possible explanation for enhanced transdermal delivery from microemulsions may be the penetration enhancing effect of the microemulsion components. The latter possibility has high probability, taking into consideration the nature of the main components of the tested microemulsion systems. For example, oleic acid was shown to exert a skin penetration enhancing effect for both hydrophilic and hydrophobic drugs from microemulsions (El Maghraby, 2012a; Malakar et al., 2011). It was also able to enhance skin penetration from a microemulgel (Sabale and Vora, 2012). As for oleic acid, medium chain glycerides have been used as penetration enhancers for drugs from microemulsions (Hosmer et al., 2009; Lopes et al., 2010; Zhang et al., 2010).

CONCLUSIONS

Microemulsions containing oleic acid or caprylic/capric triglycerides are promising for dermal and transdermal delivery of kojic acid. Incorporation of cosurfactants in the microemulsions augmented the transdermal delivery potential of kojic acid from the colloidal system. The prepared systems were successfully formulated in gel form, which retained a good fraction of the penetration enhancing ability of the corresponding fluid microemulsion.
2018-12-15T05:51:45.035Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "0034e04cecc7c19e90e85304a703ca0046247314", "oa_license": "CCBYNCSA", "oa_url": "http://www.japsonline.com/admin/php/uploads/1795_pdf.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0034e04cecc7c19e90e85304a703ca0046247314", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
257098396
pes2o/s2orc
v3-fos-license
Assessment of Uncarboxylated Osteocalcin Levels in Type 2 Diabetes Mellitus

Osteocalcin is one of the main organic components of the bone matrix and consists of 49 amino acids secreted from osteoblastic cells in carboxylated and uncarboxylated forms. Carboxylated osteocalcin belongs to the bone matrix, whereas uncarboxylated osteocalcin (ucOC) is the important active form of osteocalcin in the circulatory system. It is an essential protein for balancing the minerals in bones, binding with calcium, and regulating body glucose levels. In this review, we examine the assessment of ucOC levels in type 2 diabetes mellitus. The experimental results showing that ucOC controls glucose metabolism are significant because they bear on obesity, diabetes, and cardiovascular disease. To confirm that low serum levels of ucOC are a risk factor for poor glucose metabolism, further clinical studies are required.

Introduction and Background

An increased blood sugar level is a symptom of a range of metabolic illnesses known as diabetes mellitus (DM) that affect insulin secretion, insulin action, or both. Long-term harm and dysfunction of many organs, particularly the kidneys, heart, nerves, eyes, and blood vessels, are linked to DM. The development of DM involves a variety of disease-causing events, including autoimmune damage of pancreatic beta cells with consequent impairment of insulin secretion and action [1]. In Saudi Arabia, DM is one of the major health issues, and the country is ranked among the top 10 in the world regarding the prevalence of DM [2]. Type 2 DM is characterized by insulin resistance and dysfunction of beta cells. DM patients suffer an impairment in the action or secretion of insulin; the illness develops when the body does not release enough insulin or when the body's cells are unable to use insulin [3]. Osteocalcin (OCN) was the first molecule to be identified as a connection between bone metabolism and glucose [4]. OCN is one of the major organic components of the bone matrix [5]. In the bones, OCN binds to hydroxyapatite after γ-carboxylation of glutamyl residues at positions 17, 21, and 24 in the presence of vitamin K [6]. Carboxylated osteocalcin (COC) contributes to the bone matrix, whereas uncarboxylated osteocalcin (ucOC) is the circulatory system's active form of OCN. It is an essential protein for balancing bone minerals, binding with calcium, and regulating body glucose levels. The uncarboxylated form, which is secreted into the bloodstream, promotes insulin secretion and plays a role in glucose balance [7]. Conversely, insulin increases the expression of OCN in osteoblasts [8]. Bone resorption takes place at a pH acidic enough to decarboxylate proteins, and osteoclast activity thereby modulates the carboxylation status and activity of OCN. Accordingly, bone resorption-dependent glucose metabolism in mice and humans is promoted or inhibited by raising or lowering insulin signaling in osteoblasts [9]. According to recent publications, based primarily on rodent models and on in vitro research, the noncarboxylated form of OCN modulates physiological pathways in an endocrine manner [10].

Material and methods

The aim of this study was to investigate the relationship between ucOC levels and type 2 DM. We thoroughly searched the following databases: PubMed, EMBASE, and Cochrane. We examined English-language publications published between 1989 and 2018. Likewise, we looked through the reference lists of the retrieved papers to find more relevant material.
Keywords included "uncarboxylated osteocalcin," "osteocalcin," "carboxylated osteocalcin," "diabetes mellitus type II," and "glucose metabolism." The initial search yielded 125 papers. However, after the screening and quality assessment process based on abstracts and full-text documents, only 35 articles were included (Figure 1). This review was performed according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [11].

Osteocalcin

In general, the skeleton is responsible for an organism's support and movement. Beyond its mechanical capabilities, bone has come to be recognized as a regulator of a number of metabolic processes separate from mineral metabolism [7]. OCN expression is regulated in osteoblasts in response to physiological or pathological processes; it can be affected by growth factors, hormones, cytokines, and physical stimuli through signal transduction pathways, by binding to the BGLAP gene promoter, or by interaction with nuclear transcription factors. A recent study reported that ucOC enhances insulin sensitivity and glucose tolerance in mice, minimizing the onset of DM. Additionally, ucOC raised the expression and production of adiponectin in adipose tissue, which improved insulin sensitivity. Moreover, decreased circulating levels of adiponectin have been epidemiologically linked to insulin resistance and type 2 DM. Adipocytes' expression of adiponectin is raised by GluOC. However, results in human studies are conflicting and inconsistent; compared to healthy adults, people with type 2 DM had significantly lower serum levels of OCN. Also, in some investigations serum OCN is inversely correlated with the homeostasis model assessment of insulin resistance (HOMA-IR) index, fasting plasma glucose, and fasting insulin [12]. Numerous drugs that are utilized in both clinical and non-clinical contexts have an impact on OCN levels and may be useful in the treatment of type 2 DM. Thorough research should be conducted into the molecular processes that control OCN expression and its potential significance in the development and management of DM [13]. The active form of OCN, known as ucOC, is released by osteoblasts and circulates in the blood, whereas COC remains in the bones [14]. Several studies agree that the protein receptor GPRC6A is a putative receptor for ucOC [15-17]; it is known for its wide distribution in many tissues of the human body, such as the pancreas. Many studies have reported its association with insulin secretion and insulin resistance, which lead to the development of type 2 DM [18-23].

Osteocalcin and insulin receptors on osteoblasts

Recently, separate reports from the Karsenty lab [24,9] and the research group of Clemens [25,26] indicated that osteoblasts regulate glucose metabolism through insulin signaling via the ucOC pathway (Figure 2). Mice with osteoblast-specific insulin receptor deletion (InsR osb−/−) had high blood sugar, decreased insulin secretion, and low ucOC levels [24]. In addition, the experimental results from that study showed that OCN was decarboxylated in resorption lacunae, which increased the levels of circulating ucOC, and that insulin signaling in osteoblasts increased bone resorption by osteoclasts [24]. These results are backed up by a separate report showing that perturbation of insulin receptors in osteoblasts decreases total OCN and ucOC levels, leading to glucose intolerance and reduced insulin levels in mice [25].
Therefore, in these rodent models, insulin acts through insulin receptors present in osteoblasts and thus increases the decarboxylation of OCN. These results illustrate new mechanisms that participate in insulin signaling and the regulation of glucose metabolism, and they focus attention on the need to consider bone turnover as a determinant of circulating ucOC [27]. The detection of further variables influencing this pathway is anticipated, and further explanation is needed given the importance of this mechanism for human skeletal and glucose metabolism [28,29].

FIGURE 2: Osteocalcin and insulin receptors on osteoblasts. Gla-OCN, carboxylated osteocalcin; Glu-OCN, uncarboxylated osteocalcin.

The results showing that ucOC regulates glucose metabolism are significant because they relate to DM and heart disease. Obesity is a risk factor for DM and cardiovascular disease, as it leads to insulin resistance [30]. Therefore, new strategies to improve insulin resistance may help lower the prevalence of DM [31]. Scientific evidence from knockout mice and cell studies indicates that ucOC promotes insulin secretion and improves insulin sensitivity [8,31]. According to observational studies, reduced total OCN levels are linked to insulin resistance, elevated blood sugar, and type 2 DM in humans [32]. In a recent study, decreased total OCN (TOC) levels were associated with a higher prevalence of metabolic syndrome and with elevated blood glucose and triglyceride levels, and the associations remained significant after adjustment for glycemic control [33]. Nevertheless, studies in which both ucOC and TOC were assayed are fewer in number, although these would provide the strongest evidence of a function for ucOC specifically; extensive observational research is therefore needed to evaluate ucOC [34]. Cross-sectional studies, which only indicate associations at a single point in time, make it challenging to determine causality. Consequently, longitudinal research determining the role of TOC and ucOC as independent predictors of the onset of DM and of cardiovascular problems would be very important. Once the role of ucOC in human metabolism has been more precisely characterized, more research will be required to determine how to modify ucOC levels to affect clinical outcomes [35]. Osteoclast activity contributes to bone resorption and provides an acidic resorption gap where OCN is decarboxylated, offering an intriguing perspective on the interplay between fracture risk and DM [36].

Conclusions

We conclude that low serum ucOC is a risk factor for impaired glucose metabolism and the development of type 2 DM, but more studies in humans are needed. If these effects are confirmed, they will help in the prevention and treatment of DM.

Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
When Is an Interview an Inter View? The Historical and Recent Development of Methodologies Used to Investigate Children's Astronomy Knowledge

This paper provides a historical review of the interview research that has been used by science educators to investigate children's basic astronomy knowledge. A wide range of strategies has been developed over the last 120 years or so as successive teams of researchers have endeavoured to overcome the methodological difficulties that have arisen. Hence, it looks critically at the techniques that have been developed to tackle the problems associated with interviews, questionnaires and tests used to research cognitive development and knowledge acquisition. We examine those methodologies which seem to yield surer indications of how young people (at different ages) understand everyday astronomical phenomena, the field often referred to as children's cosmologies. Theoretical ideas from cognitive psychology, educational instruction and neuroscience are examined in depth and utilised to critique matters such as the importance of subject mastery and pedagogical content knowledge on the part of interviewers; the merits of multi-media techniques; the roles of open-ended vs. structured methods of interviewing; and the need always to recognise the dynamism of memory in interviewees. With illustrations and protocol excerpts drawn from recent studies, the paper points to what researchers might usefully tackle in the years ahead and the pitfalls to be avoided.

Kvale and Brinkmann, in their book InterViews, famously ask: If you want to know how people understand their world and their lives, why not talk to them? Conversation is a basic mode of human interaction. Human beings talk with each other; they interact, pose questions, and answer questions … (p. xvii). They stress that an interview is an inter view, where 'knowledge is constructed in the inter-action between the interviewer and the interviewee' (p. viii), an argument which we agree with and seek to develop in several ways in this paper. Historically, interview methodology has progressed significantly, especially in investigations into children's intellectual development: thus the work of Piaget and Vygotsky through to Donaldson, Bruner and beyond. A particularly fruitful avenue of this research has been that associated with young people's cosmological ideas, which have been the focus of many researchers; concepts like daytime and night-time, sunrise and sunset, the movement of the Earth, Sun and Moon (ESM), the seasons and so forth, pioneered by Piaget and continued by others, including Nussbaum and Novak et al., Vosniadou and Brewer et al., Nobes et al., Schoultz, Säljö et al. and the current authors. An important feature of this development has been the early trend away from replicable standardized tests (Binet et al.), through replicable clinical interviews (Piaget et al.) originally aimed at supporting 'ages and stages', to guided, semi-structured, open-ended interviews with Socratic dialogue; the latter by their very nature being non-replicable (see further below). This evolution is countered by a more recent trend utilising cultural artefacts such as globes, maps and pre-made models of the Earth in interviews featuring forced-choice rather than open-ended questions.
It is apparent from the literature examined below that the interview has evolved from objective, rather sterile, non-interactive origins requiring minimal subject/content knowledge (CK) or pedagogical content knowledge (PCK) on the part of the researcher, to a more subjective, constructive and interactive activity requiring maximal CK and PCK on the part of the interviewer. We believe that the review presented here is timely because researchers are now at the stage of being able to correlate neuroscience with psychology in response to interview questions. Preliminary findings indicate that memory (what young people seem to recall) is much more dynamic and creative than previously thought, and sensitive, open-ended questioning is an essential tool in probing this dynamism. Thus we seek to encourage debate on changing views of cognitive development in general and memory in particular as a result of these recent advances in neuroscience, developments which challenge the reproductive aspects of memory (traditionally reflected in evidence from Piagetian interviews with Socratic dialogue). In Bryce & Blown (2016), we gave a first account of how ideas are dynamic and multimodal, actively created at the point of recall (cf. Edelman, 2001), demonstrating the extent to which children switched between what they said, what they drew and what they modelled in play-dough during interviews. And in Bryce & Blown (2020), we illustrated the conceptual flexibility apparent in children's interview responses, showing the creation as well as the inhibition of concepts (see footnote 1 and further in due course). The studies of children's cosmologies and the frameworks used here were selected because they utilised Piagetian open-ended, or socio-cultural forced-choice, interviews to investigate children's developing concepts, the considered nature of which, in fact, reflects methodological issues. Some of the literature scrutinised below is in opposition to the authors' arguments and an effort has been made to maintain a balance across the review. In our analysis, the overarching question has been to identify what can now be claimed as productive features of in-depth interviewing and, inter alia, the characteristics of effective interviewers: the knowledge and skills required of them, whether as researchers or teachers, roles which often overlap in efforts to ascertain children's intuitive and scientific ideas. We will therefore focus critically on:

(a) The need for interviewers to be involved in any research design so that they are better prepared for opportunities to diverge from standard interview questions to explore concepts more deeply using Socratic dialogue;

(b) The significant part played by multimodal interview sessions and what can be interpreted from interviewees' responses to questioning utilising different modalities;

(c) How an understanding of the creative (as opposed to the reproductive) dimension of remembering, that is, what recent neuroscience research tells us about the dynamism of human memory, alters our interpretations of what may be revealed when people are questioned (see footnote 1);

(d) The consideration which should be given to the value of the spoken word as a reflection of conceptual ideas and the merits of open-ended over forced-choice questions.

On the interviewer as teacher or teacher as interviewer raised in (a), sensitive questioning requires robust CK/PCK and must be carried out carefully to avoid 'foreclosure' on judgements about subjects' understandings.
Allowing for the creative aspects of remembering, interviewers should always assume that an interviewee probably has more to say/to mean and the interview must open up the prospects for him/her doing so (cf. the exemplifications discussed in Bryce & Blown, 2021). Through a close examination of how (a) and (b) have figured in much of the science education literature, particularly in studies of children's cosmologies, together with the as-yet-few acknowledgments of (c) (dynamic memory) in most interview research, we will argue that researchers (and teachers) can misrepresent or pre-judge the understandings which young people have of basic concepts. Our discussion will juxtapose Socratic questioning and the PCK of the interviewer with the creative dimension to remembering in the interviewee. With respect to (d), although we place high value on the spoken word as a reflection of thought, we are aware of its limitations, particularly with younger children and those of other cultures. Because of this, in our own cross-age, cross-cultural work we used Socratic dialogue to clarify verbal responses to questions and triangulated verbal language with children's drawings and play-dough models, as detailed below.

A Historical Review of Investigations into Children's Cosmologies

The following is a brief outline of the many studies which have been conducted in the last 120+ years, listed chronologically, where children have been interviewed in attempts to reveal how they understand natural phenomena. [Footnote 2: Keywords guiding the review included Content Knowledge (CK) and Pedagogical Content Knowledge (PCK). A search of interview literature would include: interview methodology, Piagetian interviews, Socratic dialogue. And a search of cognitive development literature would feature keywords such as children, cognitive development, science concepts, Piaget, Vygotsky. In general, peer-reviewed papers were preferred, but given that the studies cover a century or more of work this was not always possible. Hence publication dates were left open-ended to ensure that the reviews were as inclusive as possible within the parameters defined by keywords. Implicitly, this was a review of empirical studies with the focus on the variety of interview techniques utilised within different research methodologies. These reflected an ongoing tension between two main theoretical groups: constructivism after Piaget and socio-culturalism after Vygotsky.] In the short discussion of each investigation, we discuss the key methodology employed and, where appropriate, compare and contrast the reasons indicated by the researchers concerned, being respectful and sensitive to the very different circumstances in which the early researchers operated. Contrasts become particularly evident in the studies carried out in the last two decades (e.g. between Vosniadou et al. and ourselves vs. Schoultz et al.) and we make these clear. Beginning just before the last century, Lange (1879) pioneered the use of questionnaires and interviews to investigate 800 young children's ideas about the natural world on entering school in Plauen, Germany. The studies seemed to indicate how little children knew about things and how much needed to be done in schools. The researchers found that kindergarten children knew a little more than non-kindergarten children, and there were some sex differences. City and country children were also found to be different. For example, Lange reported that 42% of country children had seen the Sun rise vs. only 18% of city children, and that 70% of country children had reported seeing and hearing a lark vs. only 20% of city children (see in Reese, 1982). A few years later, Hall (1883) conducted the first cross-cultural studies of children's cosmologies with 400 Anglo-American, Irish-American and local German children, aged 3-6 years in Boston, USA. Reese (1982) observes that Hall seemed to have been quite sensitive to questions of method and took much more care than the antecedent Germans had. Noting the rate of the German work, about one question every half-minute, Hall had suggested that the inquiry had been perfunctory and that the teachers doing the work were uninterested.
Hall took pains to engage four experienced kindergarten teachers, to see to it that they were reasonably motivated, and to allow for cross-questioning: perhaps an early example of concern about the PCK of the interviewers and allowing for a degree of Socratic dialogue, or at least probing beyond the standard interview questions. According to Cairns (1983), they held small group interviews (n = 3) to make children feel at ease with the interview environment. The 134 items in Hall's questionnaire were largely drawn from the same seven areas used in Germany, namely astronomy, mathematics, meteorology, animals, plants, local geography and miscellaneous (Glover & Ronning, 1987, p. 46). As in Lange's studies, Hall found 'a pronounced ignorance of the natural world', particularly in city children, and recommended that all children would benefit from time in the country (Hall, 1883, p. 255; in Young, 2016). In pivotal research published in two books, Piaget (1929 and 1930) reported his investigations into European children's views of the origin of the Sun and Moon, the movement of the clouds and heavenly bodies and the concept of Life, in children aged 2-13 years. He was against teachers presenting the Copernican (heliocentric) system to children as a hypothesis for the movement of the heavenly bodies because 'the heliocentric view is so far removed from children's conceptualisations of the earth-sun relation that it would be quite fruitless to teach young children this view' (1930, p. 85). He criticised 'tests' in his 1930 paper because the method 'does not allow a sufficient analysis of the results' and it 'falsifies the natural mental inclination of the subject or at least risks doing so' (p. 3). He went on to say that: 'The only way to avoid such difficulties is to vary the questions, to make counter-suggestions, in short, to give up all idea of a fixed questionnaire' (p. 4). This one-to-one open-ended questioning technique came to be known as the 'clinical method'. Piaget's questions, according to his 1929 text, were determined in manner and in form by the spontaneous questions actually asked by children of the same age or younger. Claparède, in the introduction to Piaget (1926), summed up the 'clinical method' thus: The clinical method is the art of questioning: it does not confine itself to superficial observation, but aims at capturing what is hidden behind the immediate appearance of things. It analyses down to its ultimate constituents the least little remark made by the child.
It does not give up the struggle when the child gives incomprehensible answers but only follows closer in chase of the ever-receding thought, drives it from cover, pursues and tracks it down, till it can seize it and lay bare the secret of its composition. (Claparède, cited in Brearley & Hitchfield, 1966, p. xiv). Bliss (1993) praised Piaget's interview methodology, his rapport with children, and his attention to detail with the following words: From a methodological point of view, Piaget established a tradition. He went into schools and talked informally to children; he usually devised some interesting activity or task for the child to do as the focus of the conversation; but above all he listened and valued what children said. This approach, known as the clinical method, has now been widely adopted, but unfortunately is sometimes not carried out with as much care as Piaget himself insisted on (p. 39). As will become apparent in due course, most of the research conducted since Piaget's early work, and reviewed here, has utilised Piagetian clinical interviews. Shortly after World War II, in the USA, Oakes (1947) interviewed 153 children, aged 4-13 years, about their cosmologies and compared his results with all known work, providing a valuable source. He found no evidence of gender mediation in his own studies of children's cosmologies, nor in his extensive review of the literature extant at that time. More recent studies have continued to provide support for the view that boys and girls have similar, holistic-rather-than-fragmented, cosmologies which have features in common across cultures and ethnic groups (see Bryce & Blown, 2007 in due course). Also in the USA, Haupt (1948, 1950) reported a discussion on the Moon between a teacher and her class of children, aged 6-7 years, in 1948 (part 1); interviews with the class individually in 1950 were reported in part 2. The first study involved free discussion prompted by 'Let's talk about the Moon', analysis of the children's responses yielding five categories. Each of these was subjected to an array of detailed questions which were used for the individual interviews in the second study. In his conclusion, Haupt stated that 'It seems, sometimes, that imagination is the most important factor in children's thinking' and he judged that there was a need for further research into the roles of imagination 'in various levels of children's thinking' (p. 233). This is something we have considered in detail from a quite different perspective, incorporating neuro-scientific findings, in Bryce and Blown (2021); see later. In the late 1970s, after a gap of over 40 years, Nussbaum and Novak (1976) used Piaget's techniques to re-investigate children's cosmological ideas, the individual interviews being supported by a structured questionnaire and various drawings and models as props. Working in the USA, they studied the Earth shape and gravity concepts of 26 American children, aged 8 years, using an Interview-Instruction-Interview research design, i.e. each child was given a structured interview patterned after Piaget's clinical interview technique. They developed an Earth Notions Classification Scheme which has been of seminal importance. The interview procedure was developed with the aim of revealing the child's version of the Earth concept, the final form of which was achieved through a developmental process consisting of several phases.
In each phase, the form of the interview test was improved over the previous one in terms of the number and variety of tasks, the quality of the drawings and the calibre of the questioning style. Each improvement revealed further differentiations in children's notions. This led to further improvements, by suggesting better or completely new tasks, drawings and probing questions; and so on, iteratively. The basic tasks common to nearly all the assessment items involved predicting directions of imaginary free fall occurring at different points on a model of the Earth and explaining these predictions. Nussbaum and Novak observed that visual props were apt to provide the child with some cues that would interfere with the spontaneity and authenticity of his/her natural thinking (thereby risking the validity of the interview interpretation). They decided, therefore, to start the interview with a set of questions including 'What is the shape of the Earth? How do you know that the Earth is round? …' in the absence of any visual Earth model. Only after presenting these questions were a globe and pictures introduced into the interview (pp. 536-537). The presence or absence of such stimuli has proved controversial, as we shall see in due course. In the same year, Za'rour (1976) interviewed 55 Christian and 55 Muslim children, aged 4-9 years, in the Lebanon, about their concepts of phenomena such as rain, Sun, wind, Moon and falling bodies. [Za'rour's text refers to Moslems, the normal term in use at that time.] He found little evidence of animism. Significantly more Christians perceived the Moon as having phases than Muslims (cf. later considerations of how children perceive the shape and size of the Moon). In a follow-up study, Nussbaum (1979) interviewed 240 children, aged 10-14 years, in Israel about their Earth shape and gravity concepts. His findings caused him to modify Nussbaum and Novak's (1976) Earth Notions Classification Scheme, merging two of the earlier notions and adding the notion of a two-hemisphere Earth: the lower synonymous with ground, the upper with sky. The open-ended interview questions from the previous study were modified to a multiple-choice format with each of the four alternative answers presented by a drawing. The alternatives were designed so that each of them would represent at least one of the five notions described above and were based on concrete answers offered by children in the previous study. The children were asked to explain their choices. At the start of each interview, the student was asked to draw a picture that would include the Earth, the sky, the Sun and the Moon, and some enlarged figures of people standing on the ground. This was done in order to obtain an evaluation of the child's conception of the Earth before they could be influenced by the multiple-choice format. The interviewer conversed with each child whilst he/she was constructing their own picture. Each half of the sample (cross-cut longitudinally) was questioned by a different trained interviewer and each individual interview lasted for 20-30 min. The child's responses, choices and explanations were recorded immediately and were analyzed on the same day by his/her interviewer. In the conclusion to this well-planned and carefully executed study, Nussbaum stated that this Piagetian method 'made possible a "penetration" into the children's cognitive structure' (p. 92).
Mali and Howe (1979, 1980) interviewed Nepalese children: 128, aged 8-12 years, from the Kathmandu valley, and 128, also aged 8-12 years, from the Pokhara valley, to ascertain their concepts of Earth. Using Nussbaum and Novak's (1976) classification scheme, they found that Nepalese children had notions similar to those of American children but with an age decrement of 4 years. Similar differences were found between the Nepalese groups, access to school being a critical factor in acquiring scientific knowledge. With respect to Piagetian tasks of conservation, seriation and classification, the researchers found that children from rural Pokhara equalled and at some points surpassed their urban counterparts from Kathmandu in conservation of area and weight at all ages. Mali and Howe argued that this was due in part to the rural children having experience in serving tea and other goods from their home stalls. Thus the comparative aspects of this study were unusually refined, the researchers focusing carefully on the important features of the cultural surroundings of the children being interviewed. The authors state in their 1980 paper that, contrary to their earlier findings on Nussbaum and Novak's (1976) Earth concepts, the Nepalese children 'were not retarded [sic]' with respect to European children in conservation of area, weight and volume. The 1980s saw several teams researching children's cosmologies. In Israel, Nussbaum and Sharoni-Dagan (1981) used the Interview-Instruction-Interview design with 41 children, aged 7-8 years, to investigate methods of teaching scientific concepts of Earth. They advocated that teachers should actively encourage cognitive disequilibrium or dissonance as a strategy to enhance cognitive accommodation of more scientific concepts. The first ten questions of the interview plus the drawing task probed the child's notion of the Earth as being a huge sphere 'surrounded' by cosmic space. Using three-dimensional props and drawn Earth models, the rest of the questions presented hypothetical situations on the Earth requiring the child to predict the direction of free fall of objects at different locations. The interview format and procedures were basically the same as those used previously (Nussbaum, 1979), with about half of the questions being open-ended and the other half being in a multiple-choice format. In the latter, each of the four alternatives was presented by a drawing. Children were asked to explain their choice and this added an open-ended component to each of the 14 multiple-choice items. Again, the question sequence was presented without any props at the beginning of the interview in order to obtain an evaluation of the child's genuine conception of the Earth before he/she could become influenced by props and multiple-choice drawings. The strategy employed shows convincing links between sound research and ways in which teachers can act in accord with Ausubel's (1968) dictum: 'The most important single factor influencing learning is what the learner already knows. Ascertain this and teach him (sic) accordingly' (p. vi). From a comparative perspective, Klein (1982) interviewed 12 Mexican-American and 12 Anglo-American children, aged 7-8 years, in the USA, and discovered a poor understanding of Earth concepts in both groups.
She emphasised the need to compare children of similar socio-economic background, since cross-cultural studies often used dissimilar groups, confounding linguistic, cultural and socio-economic factors, something better dealt with in some of the aforementioned pieces of comparative research. Sneider and Pulos (1983) interviewed 159 US children, aged 9-14 years; classified their data using Nussbaum and Novak's (1976) scheme; and compared the distribution of the notions held at each age level in their own and all previous studies. They developed Earth Shape and Gravity Scales which complemented the earlier scheme. Treagust and Smith (1986) investigated Earth shape and motion, gravity and the Solar System with 24 children in Australia, aged 15 years, interviewed in small groups (n = 4). The purpose of the study was to examine secondary students' understanding of the Solar System following a course of instruction and to identify misunderstandings and misconceptions. As part of their investigations into how students understood the motion of the planets, these authors used a series of interview cards representing fictional solar systems where the planets differed from our own, a strategy which has not found favour with others. Categories of misconceptions associated with gravity and the Sun's source of energy were identified. Discussions were conducted with groups of 3 or 4 students, in three different schools, from classes who had completed the astronomy topic. Each interview lasted approximately 30 min, was tape recorded and later transcribed. Recommendations were made to improve the teaching of Solar System astronomy in Australian schools. Brewer et al. (1987) searched for evidence of cultural mediation of Earth concepts among 26 children, aged 6-12 years, in Samoa, whom they interviewed and with whom they conducted modelling sessions with clay. They found cases of children making ring-shaped models of the Earth which were believed to be influenced by the layout of houses in a Samoan village. Although valuable from a methodological and cross-cultural perspective, this study was overtaken by the substantial papers of Vosniadou and Brewer (1990, 1992, 1994). Based on work with children in Greece and the USA, Sadler (1987) investigated concepts of day and night, the seasons and the phases of the Moon with 25 children, aged 15 years. His taped interviews were used in the award-winning documentary A Private Universe (Schneps & Sadler, 1989). The interviews partly informed Sadler's study of 1414 US high school students' astronomical misconceptions using multiple-choice tests (Sadler, 1992). Views that were incorrect but held by a large number of students were used as distractors. Jones et al. (1987) interviewed 32 children, aged 9 and 12 years, in Australia, concerning their concepts of the shape, size and motion of the Earth, Sun and Moon (ESM). The children were questioned using a clinical interview technique and stimulus materials. A number of alternative views were shown to be held. The authors stated that: 'use of a similar procedure with a group of children would seem to offer a powerful teaching methodology in which both teacher and students gain from the dialectic learning situation that is developed by this technique-apart from providing insights concerning children's understanding' (p. 43). In England, Baxter (1989) carried out interviews with 20 children, aged 9-16, in an attempt to discern the notions used by them to account for easily observed astronomical events.
Noting the development of these notions and historical parallels in astronomy, he found that early [learned] concepts 'are not exchanged for the accepted theory' (p. 506). The wider study of which the findings were part was intended to develop materials and approaches for teaching astronomy as part of the science curriculum for all pupils. The 1990s saw an increasing focus on methodological improvements to the interviewing of children to ascertain their ideas in the field of astronomy. The concepts of 60 European-American children, aged 6-11 years in the USA, and 90 children, aged 5-11 years in Greece, were explored by Vosniadou and Brewer (1990) using a questionnaire in interviews that lasted from 30 to 45 min. These researchers used factual questions (e.g. 'What is the shape of the earth?', p. 610) and 'generative' questions (e.g. 'If you were to walk for many days would you ever reach the edge of the earth?', p. 610), arguing that the latter would better reflect children's conceptual knowledge. They observed that children in both cultures had difficulty in understanding that the shape of the Earth was spherical but that they consistently used a limited number of alternative mental models to explain their cosmologies (each with elements of intuitive and scientific concepts). The same researchers interviewed 60 children in the USA, aged 6-11 years, and developed a classification taxonomy of concepts of Earth shape and habitation (Vosniadou & Brewer, 1992). Five mental models of the Earth were identified: rectangular, disc, dual earth, hollow sphere and flattened sphere. The methodology followed that described in the 1990 paper. Children were asked 15 questions about the shape of the Earth from a 48-item questionnaire. Follow-up questions were used to clarify those responses which the researchers could not understand. Children were asked to 'tell us more about it', or the last part of a child's response was repeated as a question. In a few cases, when the researchers could not understand what the children were telling them, they were forced to engage in more extensive questioning (pp. 543-545). For their following study, Vosniadou and Brewer (1994) interviewed 60 children aged 6-11 years, also in the USA, and developed a classification taxonomy of concepts of the causes of the day/night cycle. The methodology followed that in the 1990 paper described above. Thirteen questions about children's ideas about the disappearance of the Sun at night, the movement of the Moon, explanations of the day/night cycle and the disappearance of the stars during the day were selected from their questionnaire on astronomy concepts. Detailed notes were made of children's responses and each interview was recorded using a tape-recorder. The scoring was done later on the basis of both the transcribed data and the experimenter's notes. The results showed that the majority of children used a small number of relatively well-defined mental models of the ESM consistently to explain the day-night cycle. Comparative investigations continued to figure in the 1990s. Nakashima (1993) interviewed 80 Japanese children, aged 6-9 years, investigating their concepts of Earth shape; and 128 children, aged 6-9 years, investigating Earth shape and gravity. The study focused on how the students tried to inter-relate their own informal observational knowledge with conflicting 'knowledge from scientific information', in most cases with little success. According to the author, achievement required explicit instruction.
Samarapungavan et al. (1996) questioned 38 Indian children, aged 5-8 years, individually in interviews lasting approximately 45 min and compared their results with those of the earlier American studies (Vosniadou & Brewer, 1990, 1992, 1994). They found more disc-shaped-Earth cosmologies in Indian children vs. American children, which they took as evidence in support of their hypothesis that children's cosmologies are constrained universally and culturally: firstly, by universal first-order constraints, such as that the Earth is flat and supported; and secondly, by cultural constraints, such as notions of the Earth's shape and location relative to the Sun and Moon limiting explanations of the day-night cycle. Many Indian children borrowed from folk cosmology the idea that the Earth is supported by an ocean or a body of water. The questionnaire used was a modified version of that developed by Vosniadou and Brewer (1990, 1992, 1994). Some questions required only verbal responses; others required the children to explain their responses with clay models that they made or with pre-made Styrofoam models that they selected. Diakidoy et al. (1997) interviewed 26 US American-Indian children, of Lakota/Dakota ethnicity, aged 6-11 years, to ascertain their models of the shape of the Earth and the causes of the day/night cycle. The methodology followed that described in Vosniadou and Brewer (1990). A 45-item questionnaire was used, with 18 questions on Earth shape, 10 questions on the day/night cycle and 17 questions on the Earth's motion and the stars. The results indicated that the children used a small range of relatively well-defined models of the Earth and the day/night cycle similar to those found in previous studies. However, the children preferred a hollow sphere model, in keeping with Lakota/Dakota mythology. Younger children also used some novel animistic explanations of the day/night cycle. They concluded 'that while the process of knowledge acquisition in astronomy follows a similar path in all children regardless of cultural variables, cultural cosmology influences both the specific models constructed as well as the modes of explanation provided for astronomical phenomena' (p. 159). In Sharp (1999), the author interviewed twenty-five 7-year-olds, one-to-one, in England about the Earth in space and other areas of astronomy. Probes containing both verbal and non-verbal techniques were incorporated. These included open conversations, discussions about specific instances and events, the manipulation of physical props, drawings, word association and picture recognition (Osborne et al., 1985; White & Gunstone, 1992). Responses were collected in writing, recorded on tape and interpreted in terms of their content, accuracy, language, logic and reasoning, sources of information and consistency. Responses were also compared alongside those from previous studies (Vosniadou & Brewer, 1992). The researcher concluded, however, that focusing on Earth shape and gravity alone may have resulted in an underestimation of children's other abilities and learning potential in this field. In the early 2000s, children's cosmologies research, internationally, continued to see efforts to advance interview methodology. Vosniadou et al. (2001) interviewed a class of 5th and 6th grade students in Greece to ascertain children's concepts of mechanics (force and energy) as part of a broader study of astronomy.
In addition to pre- and post-tests, interviews were used to clarify some of the questions regarding children's understanding that could not be answered by the analysis of their responses in the written tests. The interview, which was really a discussion with the interviewer, who was also the teacher, was a situation where the students were prompted to express their opinions and were helped by the teacher, through hints, to re-evaluate their answers and to provide more information. Thus, the interviews tested the students more at the Zone of Proximal Development (ZPD; Vygotsky, 1978) than the post-tests did. In the same year, Schoultz et al. (2001) published their findings from interviews with 25 children, aged 6-11 years in Sweden, having set out to ascertain the effectiveness of lessons based on socio-cultural theory. In contrast to Vosniadou et al., they used cultural artefacts such as globes, and the concept of countries as a focus, to investigate children's concepts of the shape of the Earth and gravity from a 'situated and discursive perspective'. That is, Schoultz et al. considered it proper to 'tune' the subjects in to what the interviewers were interested in asking about, focusing their minds on what sorts of things were to be thought about in front of them. They concluded that the globe served as 'a discursive structure with clear boundaries' which enabled all of the participants to express scientific concepts rather than intuitive ones. As raised in point (d) of our introduction, their view of the value of verbal responses in interviews reflects their socio-cultural perspective, encapsulated in the statement: 'We shall not favor the assumption that what is said in an interview situation is a reflection of conceptual content in the mind of the individual' (p. 109). Following this, and again in Sweden, Ivarsson et al. (2002) investigated conceptions of the shape of the Earth and gravity through a study of maps and countries with 18 children, aged 7-9 years. They found no evidence of children having mental models (see Vosniadou & Brewer, 1992, 1994). Such 'constructs' were regarded as a product of the interview methodology, so that when cultural artefacts such as maps are introduced these intuitive notions effectively 'disappear'. This argument is contested, not least by Nussbaum and Novak (1976), Vosniadou et al. (2004 and 2005) and others, who consider that the early presentation of artefacts like globes leads to conflicts with incompatible prior/intuitive knowledge in many children; the resulting dialogue masks what that understanding actually is. Nobes et al. (2003) interviewed 167 children (82 Asian Gujarati and 85 Caucasian), aged 4-8 years, in East London to ascertain their concepts of the shape of the Earth. Children had to select from a set of plastic models and answer forced-choice questions without having to explain or justify their responses, and the three fieldworkers involved in the research 'had some knowledge of previous work in this area …'. The authors found no significant differences between cultures once allowance was made for variations in language skills. Their results indicated that children's knowledge was fragmented rather than coherent [aligning themselves with diSessa (1988) rather than Vosniadou and Brewer (1992) and Blown & Bryce (2010)]. Siegal et al. (2004) interviewed 59 children, aged 4-9 years, in Australia, and 71 children, aged 4-9 years, in the UK, and found the Earth concepts of Australian children more advanced than those of English children.
Due to early instruction in this domain in Australian schools, this did not surprise Siegal et al., and they concluded that coherence vs. fragmentation in children's concepts is a reflection of the timing of culturally transmitted information as well as the questioning methods used in research. Working in Athens, Vosniadou et al. (2004) investigated how methods of questioning affected children's responses regarding the shape of the Earth and the day/night cycle. Seventy-two children from Grade 1 and Grade 3 in a middle-class elementary school were tested individually at school by two experimenters, either by an open method or by a forced-choice method of questioning. The interviews lasted approximately 15-20 min for each child. Different results were obtained from the two methods of testing, suggesting that they tapped different forms of knowing and different ways of reasoning. The open questioning replicated Vosniadou and Brewer's (1992) findings, i.e. that the majority of the responses were consistent with a small number of internally consistent mental models. The forced-choice method of questioning resulted in more scientifically correct responses, but also fewer internally consistent ones. The authors concluded that the forced-choice method of questioning, together with the presentation of the spherical model of the Earth, could inhibit the generation of internal (mental) models. In a follow-up study, Vosniadou et al. (2005) interviewed 44 children, aged 6-7 and 8-9 years, again in Greece, to determine their concepts of Earth shape. They found that the use of globes as interview props inhibited the generation of mental models: 'It appears that in the absence of an external, cultural model, children can form internal representations which they can distort in ways that make them consistent with their prior knowledge. But, when the cultural artifact is present, such distortions are not possible with the result that children end up with internally inconsistent patterns of responses' (pp. 333-334). In longitudinal studies spanning several years between 1987 and 2000, the present authors investigated the cultural mediation of children's thinking about the Earth using a Piagetian interview technique designed to elicit responses from children from all 'levels' of their conceptual organisation (intuitive, cultural and scientific). An interview guide designed to cover the syllabus of astronomy and Earth science topics common to children in New Zealand (abbreviated NZ) and China at each level was used. The instrument was originally written in English and was translated into Hanyu (Mandarin) to assist trained interpreters in China. This was complemented by Socratic dialogue to clarify responses when appropriate. Close scrutiny of the research literature had revealed that some strategies used in the past to probe children's ideas had been influenced by the background of the interviewer, either in the design of their questions or in the use made of concrete props (e.g. of the Earth's shape). This tended to obscure the degree of cultural influence on those interviewed. Central to this research, therefore, was the development of an interview method ('instrument attunement') which was flexible, culturally adaptable and could be tuned to the response level of the child. The participants included 129 boys and 113 girls from China and 217 boys and 227 girls from NZ. The methodology, utilising the children's own observations of the Sun and the Moon, led into discussion of the motion and shape of the ESM.
The 2nd author (from NZ himself) spent considerable time in both the NZ and the Chinese school communities in order to become familiar to the subjects in their schools. Surprisingly, the development of children's concepts was found to be remarkably similar within the three main ethnic groups (China Han, NZ European and NZ Māori) in the two cultures (China and NZ). Cases of cultural mediation were detected but these could be assimilated into a common taxonomy of cosmological concepts for all participants. Further to the above (and published subsequently), children's cosmologies were investigated over a 13-year period, using multi-modal, in-depth interviews with 686 children (217 boys and 227 girls from NZ; 129 boys and 113 girls from China), aged 2-18. The procedure followed that of Bryce and Blown (2006), using questions from a comprehensive interview guide supported by Socratic dialogue. Children were interviewed whilst they observed the apparent motion of the Sun, the motion and phases of the Moon and features of the Earth; drew their ideas of the shape and motion of the ESM and the causes of daytime and night-time; then modelled them using play-dough, which led into discussion of related ideas. Models of Earth shape were not introduced to clarify children's responses, in contrast to Vosniadou and Brewer's (1990) approach, where children were asked to select an Earth shape from a range of models, the choice of which compared favourably with what they had said in generative dialogue. Although this work supports Vosniadou and Brewer's claim that children have coherent notions about the shape of the Earth, the introduction of models was considered to inhibit rather than assist the investigative process. The interviews revealed that children's cosmologies were far richer than previously thought and surprisingly similar in developmental trends across the two cultures. There was persuasive evidence of three types of conceptual change: a long-term process (over years) similar to weak restructuring; medium-term processes (over months) akin to radical restructuring; and a dynamic form of conceptual crystallisation (often in seconds) whereby previously unconnected/conflicting concepts gel to bring new meaning to previously isolated ideas. The interview technique enabled the researchers to ascertain children's concepts at intuitive, cultural and scientific levels, and supported the argument that children have coherent cosmologies which they actively create to make sense of the world rather than fragmented, incoherent 'knowledge-in-pieces'. Although the current paper has focused on interviews and verbal language as a primary source of knowledge, our own technique has been essentially multi-modal, including children's drawings and play-dough modelling. We have also reported the multiple sources of astronomy knowledge utilised by children (including peers), knowledge of which informed interview design and technique (see Blown & Bryce, 2020). As Espinoza (2005) suggests, referring to US students in an experimental study, teachers should make greater use of thought-experiments mediated by student peers to overcome resistance to accepting scientific interpretations of Newtonian concepts of force and gravity. Espinoza's design included opportunities for students to respond to questions about gravity and pendulum motion by drawing and by video.
The latter sessions were conducted by a fellow student who put the questions: 'In the laboratory activity, a student volunteer described the situation, demonstrated the motion, and then asked the same questions as those in the pencil-and-paper task' (pp. 276-277), thus offering a degree of collaborative learning (see Bryce & Blown, 2012). See the reference to the use of pendulums in our discussion of observational astronomy below. As reported in the first of two papers, Panagiotaki et al. (2006a) tested the influence of question type (open vs. forced-choice questions) and medium (drawings vs. 3-D models) on the scores of a sample of 59 6-year-olds in England. They found that the use of drawings and open questions increased the apparent incidence of naïve mental models, and the combination of physical 3-D models plus forced-choice questions elicited more scientifically correct responses (as well as higher proportions of scientific and inconsistent mental models than the combination of drawings and open questions). The researchers argued that '… children know more about the earth than the mental model theorists claim, and that naïve mental models of the earth are largely artifactual' (p. 353). In their second paper, Panagiotaki et al. (2006b) describe the use of one open and eight multiple-choice questions (with varied numbers of response alternatives, ranging from 2 to 4 for written items and 7 for the items involving 3-D models). They concluded that only 10% of the children showed any evidence of naïve mental models. However, as the writers acknowledge, drawing, model-making and answering open questions require free recall and imagination, as against the easier, recognition demands of forced-choice tasks. In the UK, Sharp and Sharp (2007) conducted a 'quasi-experiment' with 31 children, aged 9-11, learning about astronomical ideas in two vertically grouped classes in an English primary school located in a mixed socio-economic catchment area. (Children in the control group were taught the ideas later.) The class teacher of the experimental group was described as 'an enthusiastic, 36-year-old male practitioner with 15 years teaching experience and a positive attitude and constructivist orientation towards primary science' (p. 376), i.e. with good subject and pedagogic content knowledge. The children in the experimental group engaged in a wide variety of illustrative, investigative and problem-solving activities: reading about astronomy and conducting their own research; preparing their own encyclopaedia of space using a multi-media authoring package; working with concrete scientific models; observing the sky at night as parentally supervised homework; and, with parents and children together, participating in an evening trip to a local observatory. Interviews pre- and post-intervention, of an hour's duration, were recorded. Continuing their work in Greece, Skopeliti and Vosniadou (2007) interviewed 84 children, aged 6 and 8 years, individually, using a questionnaire similar to that used by Vosniadou and Brewer (1992) supported by a map and globe. In a pre-test, children were asked to make drawings and play-dough models of the Earth and to indicate where people live. The sample was then split, one half given a globe, the other a map, and both asked further questions. The researchers found that the presence of these artefacts in the post-test influenced what children then said, as children drew on their incompatible prior knowledge.
('… presentation of the globe caused a dramatic change in children's responses regarding the shape of the Earth with most children abandoning their previous representation of the earth and adopting the culturally accepted representation'.) The researchers concluded that 'the use of an external representation is not an act of "direct cultural transmission", but a constructive process during which the information that comes from the culture is interpreted and influenced by what is already known' (p. 244). As part of the comparisons between children's intuitive/informal scientific knowledge and their later achievements and attitudes in school science, Bryce & Blown (2007) examined the gender-related findings from in-depth interviews with 119 boys and 121 girls, ranging from 2 to 12 years, in China and NZ. The interview used questions from an extensive interview guide complemented by Socratic dialogue, as detailed in Bryce and Blown (2006). The questions and the interview framework were designed to maximise flexibility to permit children to share their cosmological concepts over a wide age range. By comparing boy/girl cosmological concept categories and by tracking their developmental trends by age, statistical evidence revealed the extent of the similarities within and across these diverse cultures. The findings reinforced those from the authors' previous studies (as above) and provide support for the view that boys and girls have similar, holistic-rather-than-fragmented, cosmologies which have features in common across cultures and ethnic groups. In a series of three studies, Wilhelm (2009a, b) investigated US children's concepts of the Moon and shadows. Topics included the Moon's changes in appearance; the Moon's size; the Moon's distance from Earth; and the source of illumination of the Moon. Through Piagetian interviews, children's ideas about the cause of lunar phases and the nature of shadows were ascertained and clarified. It was found that children gained astronomical knowledge from a variety of sources, including family, their own observations and experience. In her later study, Wilhelm investigated gender differences in astronomical learning, particularly lunar phases. With a pre-test/post-test design utilising a Lunar Phases Concept Inventory and a Geometric Spatial Assessment, she found that males scored significantly higher than females on the science domain of assessment and that females made significant gains in the mathematics domain. (In Bryce & Blown, 2007, studying New Zealand and Chinese children, we reported girls' superior ability to visually represent their cosmologies and boys' greater awareness of gravity; and in Blown & Bryce, 2020, we identified teachers, parents and librarians as major sources of astronomical knowledge, as well as children's 'own observations' as important sources.) In Blown and Bryce (2010), we describe the study of 345 young people over a 10-year period using a multi-media, multi-modal methodology in a research design where survey participants were interviewed three times and control subjects were interviewed twice. The interviews used standard questions supported by Socratic dialogue when opportune, as detailed above. Each interview session took between 40 and 120 min overall (with appropriate breaks between studies) depending on the age of the child. An interview guide designed to cover the syllabus of astronomy and Earth science topics common to children in NZ and China at each level was used.
Five hypotheses were confirmed, rejecting the knowledge-in-pieces argument in favour of conceptual coherence: (a) conceptual coherence shown as patterns of high correlation of concept representations between the media used to assess subjects' understanding within a survey, as well as (b) coherence revealed as consistency of those concepts across modalities; (c) enhanced conceptual understanding and skill through repeated interviews across (longitudinal) surveys, as young people develop their knowledge; (d) cultural similarity in subjects' representations of basic static concepts (e.g. the shape of the Earth); and (e) improved understanding of basic dynamic concepts (e.g. the motion of the Earth) and complex dynamic concepts (e.g. seasons and eclipses), interpreting a concept as a skill (cf. Barsalou, 2003). In the same year, Hannust and Kikas (2010) describe a longitudinal study carried out in Estonia, where the investigators followed the development of the Earth concepts of 143 children, aged 2-3 at the start, for 3 years. The children were interviewed at annual intervals utilising the same questions (based on Vosniadou & Brewer, 1992) on each occasion. Hannust and Kikas reported that 'in most cases young children's knowledge was fragmented and accurate knowledge was often expressed alongside inaccurate/synthetic ideas' (p. 164). They argued that children needed to know scientific facts before they start taking the global perspective when describing the world. 'When faced with ambiguous open questions, children often experience difficulties that can induce them to change the types of answers they provide' (p. 164). Most of the 'synthetic answers' found in the study were attributed to the nature of the tasks used in the study. In the following year, Plummer, Wasko and Slagle (2011) investigated elementary pupils' explanations for the daily patterns of the apparent motion of the Sun, Moon and stars, interviewing 24 US children, aged 8-9 years. At Grade level 3 in the USA, national standards indicate that such children 'should learn to use the Earth's rotation to explain daily celestial motion' (Abstract, lines 4/5). The research indicated that about half of the sample were working from naïve mental models; the other half were using more scientific explanations, but far less frequently. An instructional program using computer simulations and models suggested that pupils of this age could be helped to move between Earth-based and heliocentric frames of reference in their thinking. Frède, Nobes, Frappart, Panagiotaki, Troadec and Martin (2011) studied the influence of methods of questioning and analysis on the interpretation of children's conceptions of the Earth. They interviewed 178 French children, aged 5-11 years, comparing forced-choice questions with open-ended questioning to ascertain whether their knowledge was coherent or fragmented. Children were interviewed individually for about 30 min in a quiet area of their school, with rapport established and explanations given. All protocols were scored twice by two independent judges. Agreement reached 100% for the forced-choice conditions and 87% for the open conditions. All disagreements were resolved through discussion (p. 437).
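Before turning to the study's findings, it is worth noting how such inter-judge figures are conventionally derived. The sketch below is our own illustration, not part of Frède et al.'s reporting, and the protocol codings and category labels in it are hypothetical; it computes simple percentage agreement of the kind quoted above and, as the chance-corrected complement often reported alongside it, Cohen's kappa:

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Proportion of protocols that the two judges coded identically."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for chance, for nominal coding categories."""
    n = len(codes_a)
    p_observed = percent_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    # Chance agreement: probability that two judges assigning categories at
    # their observed base rates would coincide on a randomly chosen protocol.
    categories = set(codes_a) | set(codes_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codings of ten interview protocols into mental-model categories.
judge_1 = ["sphere", "disc", "sphere", "hollow", "sphere",
           "disc", "sphere", "dual", "sphere", "hollow"]
judge_2 = ["sphere", "disc", "sphere", "hollow", "disc",
           "disc", "sphere", "dual", "sphere", "sphere"]

print(f"percentage agreement: {percent_agreement(judge_1, judge_2):.0%}")  # 80%
print(f"Cohen's kappa:        {cohens_kappa(judge_1, judge_2):.2f}")       # 0.70
```

Percentage agreement is easy to interpret but can be inflated when one coding category dominates a sample; kappa discounts the agreement that two judges would reach by chance alone.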
The study found that forced-choice questions resulted in higher proportions of scientific answers than open questions, and children appeared to have naïve mental models of the Earth only when the mental model coding scheme was used (thus supporting the fragments-of-knowledge argument and the claim that naïve mental models of the Earth are methodological artefacts).

Knowledge in astronomy across a broad spectrum of ages and experience in China and NZ was investigated in the Bryce and Blown (2012) article on 'novices' vs. 'experts'. There were 960 participants in all, aged 3-80 years, including 68 junior school pupils; 68 primary school pupils; 111 middle school students; 109 high school students; 79 physics undergraduates; 60 parents; 103 pre-service primary teachers; 131 pre-service secondary teachers; 72 primary teachers; 78 secondary teachers; 50 amateur astronomers and astronomy educators; and 30 astronomers and physicists; with approximately equal numbers of each group in both cultures, and of boys and girls in the case of children. The methodology utilised Piagetian interviews with three media (verbal language, drawing, and play-dough modelling), as described in Bryce and Blown (2006), in the case of children; and a written questionnaire for adults. A combination of closed and open questions was used to afford different forms of reasoning at all levels of experience. Closed How? questions investigated scientific knowledge, anticipating simple statements about phenomena (e.g. how the Earth moves), whereas open-ended Why? questions invited more complex explanations of the cause of phenomena (e.g. why the Earth moves). The results showed that expertise (as scientific knowledge and conceptual skill) is a process of gradual acquisition from childhood to adulthood and from novice to expert.

Venville et al. (2012) carried out a detailed interview study with a small group of children, aged 3-8 years, eight in the USA, two in Australia. To seek information about possible social and cultural influences on their knowledge, parents were also interviewed. The authors detected evidence of both the framework theory perspective and the knowledge-in-pieces perspective in student knowledge: the children '… provided [us] with a kaleidoscope of ideas about the Moon and the social and cultural experiences that influenced their ideas…' (p. 745). The writers stated their wish for further similar case studies with children of different ages and cultures.

Two papers by Tao et al. (2012, 2013) describe studies of 54 children, aged 8 years, in China, and 54 children, aged 8 years, in Australia, who were interviewed about their concepts of Earth shape, gravity, day/night and the seasons following the questions used by Vosniadou and Brewer (1992, 1994). The children were drawn from schools in districts of high, medium and low socio-economic status in both countries. A science quiz was used to assess scientific understanding and in-depth interviews were used to further explore conceptual understanding. The researchers reported that: 'Most children were not sure about the rotation of the Earth, the Sun and the Moon. Some thought the Sun and Moon stayed in the sky, and the Earth stayed in the middle and rotated; others explained that the Earth rotated around the Moon, and the Moon rotated around the Sun' (2012, p. 892). Note: the use of 'rotated' as synonymous with 'revolved' is not uncommon in the literature (see the discussion on p. 651).
In a further study reported in the 2013 paper, the same authors interviewed 18 children, aged 8 years, and 20 children, aged 12 years, in China; and 18 children, aged 8 years, and 18 children, aged 11 years 6 months, in Australia about their concepts of Earth shape, gravity, day/night and the seasons. They found that regardless of culture, children of similar age held similar concepts about the Earth, with Year 3 pupils more likely than Year 6 pupils to demonstrate intuitive concepts of a round and flat Earth, whilst Year 6 pupils were more likely to demonstrate consistent understandings of a spherical Earth. The authors state that 'The findings supported the universality of entrenched presuppositions hypothesis. Cultural mediation was found to have a subtle impact on children's understanding of the Earth' (2013, p. 253).

An article by Blown and Bryce (2013) examined the continuity of thought-experiments about gravity throughout the ages and was used to contextualise a set of interviews with 247 children in NZ and China designed to ascertain their ideas about falling objects. The sample included 68 pre-school pupils, 68 primary school pupils, 56 middle school students and 55 high school students, with approximately equal numbers in each group and of boys and girls in each group in each culture. The methodology was as described in Bryce and Blown (2006). This included a series of three multimodal thought-experiments to probe concepts of gravity. The first involved drawing and modelling 'Self' and 'a Friend on the other side of the world' throwing and dropping balls, describing the path of the balls, and explaining why the balls moved as they did. The second scenario had 'Self' and 'Friend' placing drink bottles (part full, with the tops off) on the surface of the Earth, explaining in each case what would happen to the water and why it would happen. And the third thought-experiment entailed 'Self' dropping a ball into a deep hole through the Earth, describing what would happen to the ball, and why it would happen. From a cross-age perspective, it was found that young children displayed an intuitive sense of gravity which developed with age as a result of learning and experience in close association with concepts of Earth Shape and Earth Motion. From a cross-cultural viewpoint, the development of these concepts was found to be similar in China and NZ (cultures where teachers generally hold a scientific world view). Overall, taking into account both developmental and cultural strands, the results supported the argument that children's concepts of the Earth and gravity are coherent, not fragmented 'knowledge-in-pieces'.

In Bryce and Blown (2013), 248 children aged 3-18 years (119 from China and 129 from NZ) were interviewed using elements of both constructivist and socio-cultural methodologies to research children's concepts of the shape and size of the ESM. The study was based on an ethnological, cross-cultural, longitudinal design utilising one-to-one Piagetian clinical interviews incorporating Socratic dialogue (as described in Bryce and Blown, 2006). The interviews were held in a setting with the interviewer as an accepted member of the culture and usually in a social setting, in that other children and adults were not excluded from visiting and observing experiments. The interviews investigated children's concepts of the Motion of the Earth through observation of changes in the shadow of a vertical shadow stick, followed by them being asked to draw the motion of the Earth.
The children then drew and modelled the shape of the ESM and compared their sizes. The understanding which young people display during interactions intended to solicit their knowledge is very much a reflection of the sensitivity of the questioning they encounter, the ways in which they are allowed to show their understanding and, of course, the nature and extent of the relevant experiences they have had in their education to that point. 'Testing' is considered to be very much second best to 'interviewing', but the latter requires time, familiarisation and acceptance of the researcher by the person whose ideas we seek to understand (particularly relevant in cross-cultural research).

Interest in children's cosmologies continued internationally, as shown by Saçkes, Smith and Trundle's (2016) case study involving 56 children aged 4-5 years (27 in the USA, 29 in Turkey) using semi-structured, individual interviews. The authors concluded that the preschoolers were able to make comparable observations of the sky, consistent with the framework theory of developing knowledge. The better representation of science concepts and skills in US early education programs did not confer advantage to the performance of US over Turkish children.

The article by Bryce and Blown (2016) notes the convergence of recent thinking in neuroscience and grounded cognition regarding the way we understand mental representation and recollection: ideas are dynamic and multi-modal, actively created at the point of recall. Also, neurophysiologically, re-entrant signalling among cortical circuits (Dresp-Langley, 2012; Edelman, 2005) allows non-conscious processing to support our deliberative thoughts and actions. The qualitative research described in this paper examined the exchanges occurring during semi-structured interviews with 360 children aged 3-13, including 294 from NZ (158 boys, 136 girls) and 66 from China (34 boys, 32 girls), concerning their understanding of the shape and motion of the ESM. The standard questions from the interview guide were supported by questions seeking clarification of ideas through Socratic dialogue, as in Bryce and Blown (2006). In particular, the research focus was on the switching taking place between what is said, what is drawn and what is modelled. The evidence was supportive of Edelman's view that memory is non-representational and that concepts are the outcome of perceptual mappings, a view which is also in accord with Barsalou's (2003, 2008) notion that concepts are simulators or skills which operate consistently across several modalities. Quantitative data indicated that the dynamic structure of memory/concept creation was similar in both genders and common to the cultures/ethnicities compared (NZ European and NZ Māori; China Han). Also, it was evident that repeated interviews in this longitudinal research led to more advanced modelling skills and/or more advanced shape and motion concepts.

The research reported in our 2018 paper investigated the everyday and scientific repertoires of children involved in semi-structured, Piagetian interviews carried out to check their understanding of dynamic astronomical concepts like daytime and night-time. The methodology followed that of the earlier studies, utilising a comprehensive interview guide of standard questions on basic astronomy complemented by Socratic dialogue.
The research focused on the switching taking place between embedded and disembedded thinking (see Donaldson, 1978); on the imagery which subjects referred to in their verbal dialogue and their descriptions of drawings and play-dough models of the ESM; and it examined the prevalence and character of animism and figurative speech in children's thinking. Modified ordinal scales for the relevant concept categories were used to classify children's responses and data from each age group (with numbers balanced as closely as practicable by culture and gender). Although in general there was consistency of dynamic concepts within and across media and their associated modalities, in keeping with the theory of conceptual coherence (see Blown & Bryce, 2010; Bryce & Blown, 2021), there were several cases of inter-modal and intra-modal switching in both cultures. Qualitative data from the interview protocols revealed how children switch between everyday and scientific language (in both directions) and use imagery in response to questioning. The research indicated that children's grasp of scientific ideas in this field may ordinarily be under-estimated if one only goes by formal scientific expression and vocabulary.

Greek students' misconceptions about the day/night cycle after reading a science text were investigated in the study by Vosniadou and Skopeliti (2017). Ninety-nine children (50 aged 8, 49 aged 10) attending the same school in a middle-class Athens suburb sat a written pre-test where they had to (a) draw a picture of a person living on the Earth when it is daytime and when it is night-time and (b) write an explanation of day and night. The researchers then interviewed ten pupils aged 8 and ten aged 10 (testing and interviews took place in a small interview room in the children's school). The results convinced the authors that initial explanations 'continued to exist in the conceptual repertoire of the reader, and it is not difficult even for young children to retrieve it from memory and use it to create a situation model of the initial text' (pp. 20-21). They also found that children had cultural explanations for the day/night cycle long before learning the scientific view, and that these explanations appeared to 'co-exist with scientific ones even after conceptual change had been achieved' (p. 21), bringing into question what is meant by 'conceptual change'.

In the study by Blown and Bryce (2020), the semi-structured, multimedia interviews described previously (Bryce & Blown, 2006) were used with 538 children (125 boys and 145 girls in NZ, 144 boys and 124 girls in China). These were augmented using questionnaires with 80 parents, 65 teachers and 5 local librarians. Together, these were used to investigate the sources of children's astronomy knowledge, focused on their understanding of daytime and night-time and the roles played by the Sun and Moon in creating familiar events. The analysis (a) considered how teachers, parents and librarians (libraries, books) continue to be major sources of scientific knowledge despite the rise of electronic media (the Internet); (b) identified the extent to which folklore, both local and imported by migration, was an important source of that knowledge; and (c) showed how metacognitive bootstrapping and a growing awareness of co-existing everyday and scientific repertoires of knowledge resulted from divergent sources. Children frequently revealed their sources of information during the interview without prompting.
In other cases, they were asked where they had learned about the ESM.

Finally, Bryce and Blown (2021) followed the same interview methodology as in the 2018 paper described above, allowing about 1 hour per interview, with 141 children aged 3-12: 73 from NZ (36 boys and 37 girls) and 68 from China (34 boys and 34 girls). At the end of their interview, children were asked the following questions about imagery: (1) When I asked you about the Earth, Sun and Moon did you think of any stories you have been told about them? (2) Did you see any words (in your imagination)? (3) Did you see any pictures (in your imagination)? (4) Did you see anything moving (in your imagination)? (5) Where did your ideas come from? (6) What were you thinking of? The paper explored the dynamic nature of memory now accepted by neuroscientists, who:

• emphasise its creative (in contrast to its reproductive) character; and therefore
• challenge the representational connotation often implicit in cognitive analyses of what children say when remembering; and
• cast serious doubt on the common-place presumption that recall is akin to the extraction of ideas from a mental data-base.

The study re-affirms the merits of sensitive clinical interviewing. When used by an experienced researcher in conjunction with Socratic dialogue and triangulated with children's drawings and models, it can yield valid data about children's knowledge (as researchers like Vosniadou and her colleagues have productively demonstrated). The investigations into children's cosmologies described in the preceding historical review are listed by researcher, year of publication and topics in Table 1.

Discussion of Issues Arising from These Investigations

As mentioned in our introduction, from the experience of researchers in the field, including our own investigations reported in the literature review, several methodological issues have arisen during the evolution of interview design and technique, as follows:

(a) The need for interviewers to be involved in any research design so that they are able to diverge from standard questions to explore concepts using Socratic dialogue. This demands in-depth content knowledge of the field of observational astronomy (CK), as well as skill in interview technique exemplified by Piaget's clinical method and proficiency in identifying opportunities for teaching (PCK) within Vygotsky's ZPD. In harmony with Kvale and Brinkmann (2014), we believe that the interview as builder of knowledge through dialogue has been undervalued due to a number of factors such as lack of appropriate interviewing knowledge and skills, shortness of time and theoretical bias aimed at refuting the findings of others rather than making genuine contributions to human knowledge. The 'proficiency' of the interviewer (vis-à-vis the necessary CK and/or PCK) is, however, often unstated, many writers considering it unnecessary for that to be made clear. Whilst this is not to imply carelessness, it can be surmised from those few studies which do explicitly spell out the links between the interests of researchers and relevant school teaching that it is a matter of concern. The relationships between interviewer CK/PCK and his/her skill in handling Socratic dialogue to best advantage merit close scrutiny (discussed further below). Implicit in general terms, however, behind all of the investigations is the need to determine what children think in order to guide further teaching.
Also implicit, and of equal importance, is the desire on the part of educators to replace everyday ideas with scientific concepts, i.e. to bring about 'conceptual change' for the young people concerned. Sometimes the outcomes of research into children's ideas are global in character, with the design of a study seeking to inform curriculum change in a country or culture. At other times, the outcome is more local, with reform taking place within a single school or class as part of a trial. Whatever the level of educational endeavour, design and application depend on teachers having high CK/PCK and awareness of opportunities afforded within the ZPD.

(b) The significant part played by multimodal methods and what can be interpreted from interviewees' responses to in-depth questioning utilising different modalities. Since their introduction by Nussbaum and Novak (1976), children's drawings have complemented the spoken word in many interviews. Similarly, the innovative use of clay modelling by Brewer, Herdrich and Vosniadou (1987) added a third modality from which to triangulate children's ideas. Working independently, the current authors adopted a multi-modal methodology involving verbal language, children's drawings and children's play-dough modelling which has proved to be most successful (see text and protocols below).

(c) How an understanding of the creative (as opposed to the reproductive) dimension of remembering, that is, what recent neuroscience research tells us about the dynamism of human memory, alters our interpretations of what may be revealed when people are questioned. The main objective of interviews in science education research is to determine what children think in order to inform curriculum development and teaching strategies. A resultant aim is to replace children's everyday ideas with scientific concepts, the aforementioned process of conceptual change. Recent neuroscientific research has shown this aim to be somewhat mistaken, since human memory does not forget old ideas but suppresses them in favour of more plausible ones to fit the immediate situation, everyday and scientific ideas co-existing in a creative, dynamic memory. What is required is to teach children how to discriminate between the two repertoires of ideas so as to make appropriate selections to fit specific situations. Interviews utilising open-ended questions with Socratic dialogue reveal these co-existing concepts and provide the opportunity to guide children towards more scientific ways of thinking through scaffolding (see Footnote 4), with the interviewer in the role of teacher rather than impartial researcher (see Footnote 5).

(d) The consideration which should be given to the value of the spoken word as a reflection of conceptual ideas and the merits of open-ended over forced-choice questions. Whereas the child's verbal responses to questions are of paramount importance to the Piagetian clinical method (see Piaget, 1926, 1929, 1930), and equate to thought in Vygotsky (1962), for some they do not enjoy such high status in radical socio-cultural methodology (see Schoultz et al., 2001). This contrast is exemplified by the different outcomes evident from the open-ended questions generated from dialogue between interviewer and child utilised by Vosniadou et al. versus the closed or forced-choice interviews preferred by Schoultz et al. (the latter featuring the use of cultural artefacts such as globes or maps as props).
Addressing points (a)-(d) is important to science education because, notwithstanding great progress in neuroscience, the Piagetian interview remains the gold-standard way of ascertaining what children think. Researchers need to be aware of the methodological difficulties encountered in interviewing, and of the guidance suggested by others active in the field in the past and present, to ensure a successful outcome.

Footnote 4. Scaffolding: Foley (1994) traces the origin of the term scaffolding to the work of Vygotsky. It was popularised by Bruner (1985), who understood that, for learning to take place, suitable social interactional frameworks had to be provided by adults/peers. Scaffolding is the assistance given to a child within the ZPD to bridge the gap between what the child already knows and what the child could know with adult help. 'What the child is able to do in collaboration today he will be able to do independently tomorrow.' (Vygotsky, 1987, p. 211).

Footnote 5. Traditionally, interviewers (particularly teacher/researchers) avoided teaching even when conditions were ideal to do so. As reported in our earlier studies, care has to be taken in longitudinal studies not to impart knowledge or change children's ideas deliberately, since that could confound the outcome of repeated measures (the use of survey and control groups might enable the influence of the interview as a source of knowledge to be assessed). Where possible, in response to children's questions demanding scientific information, the researcher referred children to their class teacher or librarian. However, in cases where children were unlikely to be interviewed a second time (such as the control groups) the researcher might share information. Evidence from experience and the literature suggests that children see researchers as teachers, whatever reservations teacher/researchers may have about teaching whilst interviewing. We found that despite avoiding deliberate scaffolding, our interviews did influence the outcomes in longitudinal studies, the survey groups who were interviewed twice having more advanced concepts than the control groups who were interviewed only once.

In the sections which follow, we will now focus on these and other key issues which arise from the historical record. In doing so, we will illustrate the points with extracts of several protocols from interviews we have conducted. The text of each extract is left-justified, but the parts of the protocols which illustrate Socratic dialogue are indented (and marked 'Start of …' and 'End of …'). 'Main Discussion' points relate to the main text of this article.

Protocol Discussion

The researcher's knowledge of astronomy, local knowledge of geography and information from the child's teachers guide the Socratic dialogue.

Main Discussion

The concept of teacher knowledge and skill has been of concern to teacher educators for over a century (see Bullough, 2001, for a review). Since its introduction by Shulman (1986, 1987), Pedagogical Content Knowledge (PCK) has been the subject of debate between teachers and educational researchers. Whilst 'what to teach' has been defined and supported by science curricula, broadly described as Subject or Content Knowledge (CK), 'how to teach' (PCK) has been relatively overlooked for a variety of reasons, not least because teacher knowledge and teaching skill are hard to define (see Abell, 2008; Abell et al., 2009; Barnett, 2003; Fernandez, 2014; and Neumann et al., 2019).
In his criticisms of PCK from a historical perspective, Settlage (2013) wished that the concept had taken on a greater role operationally as subject matter knowledge for teaching, arguing that teachers need greater contextualised understandings of how students may be helped to learn. Addressing these concerns from the perspective of teacher/researchers, we believe that (whatever the constructs are called, 'pedagogical' being enigmatic) there is a need for teachers to not only know the essential knowledge of their teaching domain but also, a much more difficult task, have the ability to sense moments of readiness for learning and opportunities for scaffolding through Socratic dialogue. Subject knowledge is of limited value unless the teacher knows when it is relevant to the context and how to teach it in a way that captures the imagination of students (see van Driel et al., 1998; Nilsson & Vikström, 2015; Neumann et al., 2019).

Focusing on the ideal prerequisites of researchers seeking to investigate children's observational astronomy of the ESM, we are looking for teachers with a working knowledge of basic astronomy and an equally sound understanding of developmental psychology fit for educational purposes. The former is essential for the interviewer (as researcher) to devise knowledge-probing questions for incorporation into Piagetian clinical interviews; the latter is needed to enable the interviewer (as teacher) to take advantage of children's responses to teach by scaffolding within Vygotsky's ZPD, either directly (breaking Socratic tradition) or indirectly by referring the child to his/her class teacher or librarian. This requires sensitive interviewing. Socratic dialogue can clarify children's ideas for the interviewer, with the method enabling the child's own construction of knowledge during the child-interviewer interaction. This principle is not easy to put into practice, and it underlines the opening words of our argument that 'an interview is an inter view'.

Considering the CK aspect either separately or as an integral part of PCK, the literature on research into children's cosmologies considered above emphasises the need for teachers to give children 'direct experience with phenomena' (Nussbaum & Novak, 1976, p. 549). Whilst the constraints of time and timetable structure limit the opportunities for children to carry out direct observational astronomy, the literature (including our own studies) does give some examples. For instance:

(i) studying the divergence of the shadow of a shadow stick due to the apparent motion of the Sun as a result of the rotation of the Earth;
(ii) observing the Moon in daytime against a fixed object such as a power pole or lamp post at the same time daily over a period of time to gain an impression of the motion of the Moon;
(iii) recording the shape of the Moon over time, such as between interviews where these extend over more than a few days, followed by classroom or library research on phases;
(iv) viewing sunrise and sunset at home and (with parental co-operation) thinking in terms of the Earth moving with respect to the Sun rather than the Sun moving with respect to the Earth;
(v) noting the change in position of a constellation of stars over time (again with parental collaboration) and explaining the changes in terms of the rotation of the Earth; and
(vi) observing Foucault pendulums: interesting phenomena demonstrating the rotation of the Earth which may be seen in some museums.
Note: Drawing Ground horizontally below Earth, and Sky horizontally above or below Earth, are thought to be indicators of a flat-Earth cosmology (Nussbaum & Novak, 1976).

Earth Shape Modelling

R. Could you make the shape of the Earth with the green play-dough?
C. (Models Earth ball-shaped) (see Fig. 2a).

Main Discussion

Through dialogue, drawing and modelling, the child develops their Earth concept with elements of physical shape, Ground and Sky, Habitation of Earth and Identity with Earth. By such multi-modal activities, any cognitive conflicts between concepts can be resolved to create a concept of the Earth for the current context. The retention of these multi-modal concepts is thought to be dependent on the neural pathways that created them through a process of re-entry (see Edelman, 2005, 2006). Access to the linguistic elements of the original pathways may depend on recognition of linguistic patterns in the questions (Cromer, 1987).

Protocol Discussion

Zhang modelled the Earth as a ball, and then flattened it to a disc to match his drawing. Finally, through Socratic dialogue he decided that it was ball-shaped. This process suggests that he was comparing different concepts of Earth as he modelled. Further questioning on Identity with Earth confirmed that he knows the Earth is spherical, by his placing 'Self' and 'Friend' on opposite sides of the Earth and indicating that gravity acts from the Earth's centre.

Main Discussion

Reasoning in multiple modalities involving comparison between concepts suggests rapid mental simulations (see Barsalou, 2003). Recalling concepts from a year before hints at processes such as Cromer's (1987) linguistic pathways or Edelman's (1987) re-entry theory to re-generate the concept. As described in the historical literature review, multi-modal methods have been used to investigate children's cosmologies since the pioneering work of Nussbaum and Novak (1976), who utilised a globe, pre-made models, pictures and drawings of the Earth as props and asked children to draw the path of falling rocks and water from a spherical Earth. Following experience, they cautioned against the use of cultural artefacts, which they found to inhibit children's intuitive responses. Notwithstanding these reservations, a similar procedure was adopted by Nussbaum (1979), Nussbaum and Sharoni-Dagan (1981) and Sharp (1999). Influenced by Nussbaum and Novak's (1976) work, two groups of researchers independently developed multi-modal methods (incorporating verbal language, children's drawings and children's clay and play-dough modelling) for investigating the emerging field of children's cosmologies. These were Vosniadou and her colleagues working in Samoa, Greece, India and the USA (Brewer, Herdrich & Vosniadou, 1987; Vosniadou & Brewer, 1990, 1992, 1994; Samarapungavan et al., 1996; Diakidoy, Vosniadou & Hawkes, 1997); and the current authors working in NZ and China. These media were also utilised by Skopeliti and Vosniadou (2007). Cultural artefacts in the form of globes and maps were central to the socio-cultural methodology used by Schoultz et al. (2001) and by Ivarsson, Schoultz and Säljö (2002) to critically question the methodology of Vosniadou et al. The results of using globes and maps as props appeared to confirm Nussbaum and Novak's (1976) reservation that rather than illuminating children's concepts, cultural artefacts suppress intuitive concepts.
Following their recommendations as far as possible, we avoided using cultural artefacts such as tennis balls as props unless children had already indicated that they believed the Earth to be spherical; and we avoided globes. The multimodal methodology developed by the authors (exemplified by the protocols of verbal interactions of Katherine and Zhang Zhe, together with drawings (Fig. 1) and play-dough models (Fig. 2)) has proved to be particularly effective in probing children's concepts of Earth shape and gravity, as in Bryce and Blown (2013).

Protocol Discussion

Although some of these questions in Socratic dialogue could be interpreted as 'leading', this must be weighed against their role in reminding the child of what they already know, as in the case of the rotation of the Earth which the child had 'forgotten'.

Main Discussion

In our introduction, we posed the question of how understanding the creative (as opposed to the reproductive) dimension of remembering can alter our interpretation of interview responses (be they verbal language, drawings or play-dough modelling). In Bryce and Blown (2021), we drew attention to what recent neuroscience research tells us about the dynamism of human memory (see Footnote 1). We concluded that open-ended Piagetian clinical interviews conjoined with Socratic dialogue can yield insights into how children think, including the processes of imagery, memory and metacognition. These observations revealed that memory is in a constant state of flux, with new ideas being compared with old in a competition for relevance to the interview situation and the question context. The evidence suggests that multiple repertoires of knowledge coexist in memory in two major domains: everyday, intuitive, cultural ideas and scientific concepts. The former, earlier-learned notions are not replaced by the new ideas taught at school; rather, they are inhibited as not appropriate (or not the best match) to the context. We also found evidence of memory being non-representational, as argued by Edelman, and of concepts being akin to simulators or skills which operate consistently across several modalities (see Barsalou, 2003; Edelman, 1989, 2001, 2005). The dynamic nature of memory is evidenced by examples from children's cosmologies where everyday concepts such as sunrise and sunset coexist in harmony with scientific concepts such as the rotation of the Earth: both describe the same phenomena but do so utilising different perspectives and different language modes or repertoires. The classic case of radical knowledge restructuring put by Vosniadou and Brewer (1987) was 'the change from a geocentric schema in which the earth is conceptualized as flat and motionless to a heliocentric schema in which the earth is conceptualized as spherical and rotating' (p. 60). We found evidence of this and similar cases of restructuring (see the protocols above).

Protocol Discussion

During the interview, the opportunity arose to probe the child's knowledge of the Earth-Moon relationship through Socratic dialogue, i.e. to explore whether the child might understand the concept of tidal locking of the Earth and Moon due to mutual gravity and the Moon's period of rotation equalling its period of revolution (NASA, 2017).

Protocol Discussion

Chen mentioned radiant energy but is not familiar with nuclear fusion, although she knows that the Sun's energy comes from hydrogen and helium.

Protocol Discussion

Like NZ children, most Chinese children were more familiar with the Sun than they were with the Earth.
Chen demonstrated considerable scientific vocabulary in response to standard interview questions and Socratic dialogue.

Main Discussion

In point (d) of our introduction, we reported that Schoultz et al. (2001) questioned the value of oral language. They did so from a radical socio-cultural perspective, one that seems to be out of kilter with Vygotsky, who placed great value on both inner speech as thought and external speech as essential for the social construction of knowledge within the ZPD (Vygotsky, 1962). Traditionally, verbal language has been the mainstay of Piagetian one-to-one clinical interviews; and, together with children's drawings and models, it has been the standard method of sharing ideas about the world as described in children's cosmologies. The spoken word has proved particularly helpful in clarifying complex ideas, explaining cultural interpretations of phenomena, or capturing switching between cultural and scientific repertoires, particularly when responding to Socratic dialogue. The two aforementioned cases where verbal language needs support are when interviewing young children aged 3-6 at kindergarten or pre-school, or when interviewing children in another culture using another language through interpreters. In these situations, a multi-modal methodology with drawing and modelling affords triangulation with verbal language (limited due to age, or subject to interpreter mediation, with technical terms having to be simplified in real time during three-way Researcher-Interpreter-Child dialogue; see Footnote 7). A fourth modality employed with success by the current authors to complement the spoken word was video-recording of gesture, particularly when modelling the shape and motion of the ESM. Historically, gesture and language are strongly associated, and there is evidence that drawing and clay were used to share ideas, as depicted in cave art and figurines. The evolution of hand and mind together is also manifest in tool making and was summed up by the philosopher A. N. Whitehead: 'It is a moot point whether the human hand created the human brain, or the brain created the hand. Certainly the connection is intimate and reciprocal' (cited by Donaldson, 1978, p. 83). Language is a central component of this development. Using everyday and scientific language precisely yet economically is one of the prerequisite skills demanded of researchers who design interviews. And this is accentuated when working in another culture with another language involving interpreters. Knowing which everyday term translates to which scientific word and vice versa requires the type of ability that Barsalou (2003) refers to when he associates concepts with skills. Sometimes words are inadequate, as when (moving to the physical sciences) Heisenberg wrote to Bohr on the enigma of quantum theory: 'we must realise that our words don't fit'; and Bohr replied 'words are all we have' (Baggott, 2011, p. 101). Fortunately, the spoken word is usually able to meet the challenge of interpretation.

In Conclusion

The historical record of investigations into children's cosmologies explored in this paper has revealed a rewarding vein of activity by researchers in science education.
The work has contributed to our understanding of how young people think and develop intellectually as they wrestle with important scientific ideas that impinge on their daily lives (however 'ordinary' the concepts might be thought to be at a cursory glance). Also, and particularly through the more recent studies, it has revealed how complex and demanding interview strategies actually are. The methodological findings are widely applicable in our view and signal warnings to researchers and teachers about what we may take to be 'understood' when we listen to young people expressing their thoughts and answering our questions, whether they are speaking these thoughts, or drawing pictures to show how they see them, or making representative models of what is in their minds. The literature has certainly emphasised the importance of cross-referring between modalities as we search for ways to support children's scientific learning. 'Caution' should be the watchword when we are tempted to conclude that we now know what a young person thinks on the basis of any short interchange we have shared, however focused we feel that exchange has been. Crucially, it is imperative that the interviewer be very well versed in the subject area concerned; capable of teaching it to children; skilled in distinguishing between when and when not to teach; and in particular skilled in using Socratic dialogue to give the interviewee opportunities to freely articulate his/her thinking, and therefore proficient in manner and style so as to be seen as completely non-threatening in a setting conducive to friendly exchange. The everyday repertoire of children's ideas often encountered in interview situations stems from parents and grandparents in their own cultural contexts, as well as from class teachers, librarians and much of the media. Teacher/researchers, however, have a responsibility to contribute to the development of children's scientific repertoires to appropriate levels, either by Socratic dialogue within the ZPD or by referring children to teachers and librarians for further knowledge. Teacher/researchers are also well placed to complement others in teaching children to distinguish between the two coexisting ways of interpreting and making sense of the world. Those (few) educational policies intent on replacing everyday ideas with scientific ones seem somewhat misguided in that they fail to recognise that active remembering and thinking accommodate both interpretations of nature.

Footnote 7. Significantly, although there have been several cross-cultural studies, and some longitudinal studies, only those by the current authors have employed both an ethnographical approach, involving relatively long periods being spent by one of the authors as a teacher and researcher in both cultures, and longitudinal repeated measures requiring tracing of participants and follow-up interviews. Although intended to illuminate and measure conceptual change over time purely as a result of teaching and development, with the interview and interviewer of neutral influence (as objective researcher rather than interactive teacher), as mentioned above we found that in fact repeated measures resulted in enhanced conceptual knowledge in survey groups (interviewed twice) over control groups (interviewed once). This result highlights an area of particular interest to us, namely changes taking place within individuals as a result of interview questions or Socratic dialogue which are part-and-parcel of qualitative research investigations.
The challenge is to design interviews that afford children the opportunity to respond in either repertoire and for interviewers to encourage children to think multi-modally (see Footnote 8). If we are to respond to the challenge of Bruner (1960) that 'any subject can be taught in an intellectually honest form to any child at any stage of development' (p. 33), then researchers must be finely attuned to opportunities to scaffold scientific knowledge either directly or by referral to other scientific sources of learning (teachers and librarians). From a discursive position (see Van Langenhove & Harré, 1999; Jones, 2012), one could ask: if interviews are inter views (or inter-views), what do children gain from the interview? Our own response is: 'Clarifying their ideas through Socratic dialogue, children's drawings and play-dough modelling'. Astronomy is a science that captures the imagination of children in a unique way, leading in some to a lifelong passion as professional and amateur astronomers. For them, pursuing the frontiers of knowledge such as the origin of the universe and space exploration (see Salimpour et al., 2021), the interview as inter view is a unique opportunity to kindle lifelong enthusiasm for science.

Footnote 8. Although, in general, we designed and conducted our interviews in scientific mode, we made use of cultural stories, legends and folklore to encourage children to share ideas (such as the story of Alice in Wonderland by Lewis Carroll to probe gravity concepts; the Māori legend of Maui capturing the Sun to illuminate concepts of the Sun; and folklore on Chinese Festivals to explain phases of the Moon).

Notwithstanding the reservations of some researchers about putting aside their objectivity, the reality is that young children see researchers as teachers (as many are). Fortunately, techniques such as Socratic dialogue permit the researcher some flexibility to teach without compromising tradition. From the perspective of interviewer as researcher, the impact of the interview on learning is hoped to be minimal, to enable children's concepts to be ascertained in their purest form; whereas from the view of interviewer as teacher, the interview is designed to elicit and clarify knowledge and, in some cases, become a teaching instrument utilising Socratic dialogue. Avoiding imparting knowledge is particularly relevant in longitudinal studies where conceptual change as a result of development, experience, teaching and learning is being ascertained by repeated measures, asking the same questions up to 5 years apart with survey and control groups. In the event, in our studies, we found that several participants found the interview and its associated activities to be a source of astronomy knowledge (see Blown & Bryce, 2020). And, despite efforts at impartiality, survey groups who were interviewed three times had more advanced concepts than controls who were interviewed twice, the enhancement being attributed to survey children being familiar with the interview context and terminology (see Blown & Bryce, 2010). Thus, although unintended, the interview as inter view proved to be a powerful teaching instrument supplementing cultural and classroom sources of knowledge with lasting effects (see Footnotes 5 and 7 on conceptual enhancement as a result of repeated measures).

Finally, given the quantity and quality of work in the field to date, it might be reasonable to ask those science education researchers who are interested in children's cosmologies if there is more to be done.
Assuredly, yes, and for a variety of reasons. With respect to astronomical knowledge, new ideas and information continue to arise and surface in the mass media, many of which school pupils take notice of and which 'intrude' on what might be considered conventional science topics. Recent highlights would include planetary moons, exoplanets, asteroids, black holes, gravitational waves, the accelerating, expanding universe, dark energy, dark matter and so on. That is to say, whilst they may strictly go beyond the syllabus, new phenomena and events bring overlapping material into debate, raise confusions and stimulate questions. Furthermore, we tend to think conventionally about education: teachers individually managing lessons with classes of young people, with ever more sophisticated resources, simulation materials and computers at their disposal. However, on-line learning increasingly drives instruction, with phone technology, 'smart' television, school intranet facilities and the Internet (home-accessed) changing the ways in which much science reaches young people, and shaping its very content. What we now refer to as social media also influences what young people learn as science (and everything else besides). The shift is not simply dispositional in character, affecting interests and bias. What children are encouraged to understand and believe through sources other than their teachers is more than simply incidental or 'other' background material. The richness and vitality of what is encountered on a daily basis brings considerable challenges to teachers (for example, see Bryce & Gray, 2004, with regard to biotechnological progress; and Bryce & Day, 2014a, b, with regard to climate change). Interviewers face new challenges in respect of what science is actually experienced by young people. Questions put to children need to be contextualised rather differently; the CK and PCK of future researchers will have to be very different from those of their predecessors. Additionally, and unexpectedly, the effects of the Covid-19 pandemic in 2020 and 2021 have massively interrupted traditional schooling. In the UK and elsewhere, teachers have had to create on-line teaching and work with young people through digital materials and email on an unprecedented scale. Part-time education has invaded lives dramatically, both at school and university, so-called blended learning being put in place in advanced countries like the UK. (Blended learning is the euphemism for the sub-standard arrangements which
Quasi-simultaneous Spectroscopic and Multi-band Photometric Observations of Blazar S5 0716+714 during 2018-2019

In order to study the short-timescale optical variability of the $\gamma$-ray blazar S5 0716+714, quasi-simultaneous spectroscopic and multi-band photometric observations were performed from 2018 November to 2019 March with the 2.4 m optical telescope located at Lijiang Observatory of Yunnan Observatories. The observed spectra are well fitted with a power-law $F_{\lambda}=A\lambda^{-\alpha}$ (spectral index $\alpha>0$). Correlations found between $\dot{\alpha}$, $\dot{A}$, $\dot{A}/A$, $\dot{F_{\lambda}}$, and $\dot{F_{\lambda}}/F_{\lambda}$ are consistent with the trend of bluer-when-brighter (BWB). The same holds for the colors, magnitudes, color variation rates, and magnitude variation rates of the photometric observations. The variations of $\alpha$ lead those of $F_{\lambda}$. Also, the color variations lead the magnitude variations. The observational data are mostly distributed in the I(+,+) and III(-,-) quadrants of the coordinate system. Both the spectroscopic and photometric observations show BWB behaviors in S5 0716+714. The observed BWB may be explained by the shock-in-jet model, and its appearance may depend on the relative position of the observational frequency ranges with respect to the synchrotron peak frequencies, e.g., at the left of the peak frequencies. Fractional variability amplitudes are $F_{\rm var}\sim 40\%$ for both the spectroscopic and photometric observations. Variations of $\alpha$ indicate variations of the relativistic electron distribution producing the optical spectra.

Introduction

Blazars are a subclass of active galactic nuclei (AGNs) and usually exhibit extreme variability across the whole electromagnetic spectrum (e.g., Ulrich et al. 1997). Depending on the rest-frame equivalent widths (EWs), blazars can be divided into BL Lacertae objects (BL Lacs) and flat-spectrum radio quasars (FSRQs). The EWs of BL Lacs and FSRQs are < 5Å and > 5Å, respectively (e.g., Ghisellini et al. 2011; Ghisellini & Tavecchio 2015). Generally, the continuum radiation of BL Lacs is believed to be relativistically boosted along the line of sight by relativistic jets with small viewing angles (e.g., Urry & Padovani 1995; Ulrich et al. 1997) and shows observational characteristics such as featureless optical spectra, strong non-thermal emission, and high polarization. There are two peaks in the broadband spectral energy distributions (SEDs) of blazars (e.g., Ghisellini et al. 1998; Ulrich et al. 1997). Their low and high energy peaks are located from the infrared-optical-ultraviolet (UV) to X-ray bands and around the MeV-GeV-TeV γ-ray bands, respectively. The low energy peak is the synchrotron radiation from relativistic electrons in the relativistic jets, and the high energy peak, the γ-ray emission, is generally interpreted as the inverse-Compton (IC) scattering of the synchrotron soft photons for BL Lacs and of the external soft photons for FSRQs by the same electron distribution that radiates the synchrotron photons (e.g., Ulrich et al. 1997; Ghisellini et al. 1998; Celotti & Ghisellini 2008; Tavecchio et al. 2010; Neronov et al. 2012; Zhang et al. 2012; Madejski & Sikora 2016; Zheng et al. 2017). Various variability timescales, e.g., from minutes to decades, have been found in most BL Lacs, and these timescales can help us to investigate the properties of the radiation region (e.g., Xie et al. 1999, 2002, 2005; Covino et al. 2015; Liu et al.
2015; Wierzcholska et al. 2015; Feng et al. 2017; Liu et al. 2019). The variability timescales are usually divided into three classes: timescales of less than one night are regarded as intra-day variability (IDV) or micro-variability (e.g., Wagner & Witzel 1995; Falomo et al. 2014); timescales from days to a few months are short-term variability (STV) (e.g., Li et al. 2017); and timescales larger than several months are known as long-term variability (LTV) (e.g., Dai et al. 2015). Different variability timescales may originate from different emission regions. Thus, we can study different radiation mechanisms via variability on different timescales. Furthermore, the flux variability is often accompanied by different spectral behaviors, and the correlation between the variability of flux and spectral index (or magnitude and color) will shed light on the physical processes of radiation in BL Lacs. A common phenomenon has been found in most BL Lacs: the spectrum is usually bluer at brighter phases (e.g., Villata et al. 2004; Bonning et al. 2012), i.e., bluer-when-brighter (BWB). The BWB trend is often regarded as evidence of the shock-in-jet model (e.g., Marscher & Gear 1985; Gupta et al. 2008; Bonning et al. 2012). However, many observations do not show any correlation between colors and magnitudes (e.g., Agarwal et al. 2016; Hong et al. 2017), or show only weak correlations (e.g., Wierzcholska et al. 2015). The discrepancy of the color-magnitude correlations is a crucial issue that can help us to understand more detailed radiation properties in jets.

S5 0716+714 was first discovered by Kuhr et al. (1981) and has been widely studied over the whole electromagnetic spectrum (e.g., Ostorero et al. 2006; Abdo et al. 2010; Hu et al. 2014; Dai et al. 2015; Gaur et al. 2015; Feng et al. 2017; Hong et al. 2017; Sandrinelli et al. 2017; Liu et al. 2019). It is one of the most active and bright BL Lacs in the optical band and shows a completely featureless spectrum (e.g., Biermann et al. 1981; Danforth et al. 2013). A number of groups have focused on the broadband photometric study of S5 0716+714 in the optical regime. Almost all of them have found variability with timescales from minutes to years (e.g., Nesci et al. 2002; Hu et al. 2014; Dai et al. 2015; Agarwal et al. 2016; Hong et al. 2017; Li et al. 2017). Many studies reported high IDV duty cycles (DCs; Wagner & Witzel 1995), i.e., DCs ≥ 70%, for S5 0716+714. Variation amplitudes are larger than 0.05 mag for 80% of 52 nights (Nesci et al. 2002). Hu et al. (2014) gave a DC of 83.9% on 42 nights. Agarwal et al. (2016) obtained a DC of ∼90% from 23 nights of observations. Thus, S5 0716+714 shows variability on a nearly daily basis. Various (strong, weak, or no) BWB trends have also been reported. Dai et al. (2013) found that the source exhibited strong BWB chromatism in LTV, STV, and IDV. Hu et al. (2014) showed strong and mild BWB trends on IDV and STV timescales, respectively. Agarwal et al. (2016) did not find any correlations between colors and magnitudes. Recently, Hong et al. (2017) reported an outburst state during 2012, and they found both BWB chromatism and a weak BWB trend in most nights; however, on a few nights, the data did not show any correlation between colors and magnitudes. The observational characteristics mentioned above indicate that S5 0716+714 is a natural laboratory for studying the radiation properties of BL Lacs. Almost all of the previous studies used only a few broadband photometric bands.
Bandwidths of broadband filters are usually larger than 1000Å, and different filters have different bandwidths. Therefore, the relationship between brightness and spectral behavior can only be studied roughly. The broad bandwidths might also influence the relationship during some phases (e.g., they might decrease the correlation coefficient during weak phases). Moreover, adjacent bands partly overlap each other, which will further influence the correlation between brightness and spectral behavior. In order to investigate the relationships of index-flux, index variability-flux variability, color-magnitude, and color variability-magnitude variability, and to shed some light on the radiation processes of BL Lacs, we simultaneously monitored S5 0716+714 with spectroscopic observations and broadband photometry. The spectral data can provide light curves (LCs) at narrow enough wavelength coverage, which allows us to study the above relationships in detail. Besides, comparing photometric LCs to spectral integral LCs will help us probe the effect of bandwidth. The correlations of variability among different bands and different wavelength ranges could also help us to constrain the relative location of the radiation. In Section 2, we describe the observations and data reduction. The results and our analyses are presented in Section 3. Finally, the discussion and conclusion are presented in Section 4.

Observations and Data Reduction

All the spectroscopic and photometric observations of S5 0716+714 were carried out with the 2.4 m alt-azimuth telescope located at Lijiang Observatory of Yunnan Observatories, Chinese Academy of Sciences. The longitude, latitude, and altitude of the observatory are 100°01′48″, 26°42′42″, and 3193 m, respectively. From mid-September to May, the observatory is dry and most nights are clear. The average seeing of the telescope, obtained from the full width at half maximum (FWHM) of stars, is ∼1″.5 (e.g., Du et al. 2014). For the 2.4 m telescope, the pointing accuracy is about 2″, and the closed-loop tracking accuracy is better than 0″.5 hr⁻¹. In 2010, the telescope was mounted with the Yunnan Faint Object Spectrograph and Camera (YFOSC) at the Cassegrain focus. This is an all-purpose instrument for low/medium dispersion spectroscopy and photometry. The CCD can keep a low readout noise at high readout speed, which benefits from all-digital hypersampling technology. During our observations, the readout noise and gain were 9.4 electrons and 0.35 electrons/ADU, respectively. The CCD chip covers a field of view (FOV) of 9′.6 × 9′.6 with 2048 × 4096 pixels, and the pixel scale is 0″.283 pixel⁻¹. YFOSC can quickly switch from photometry to spectroscopy (≤ 1 s), and we can also choose the binning mode to reduce the photometric readout time. The detailed parameters of the telescope and YFOSC were described in Wang et al. (2019). The monitoring campaign started in 2018 November and spanned ∼106 days. On most clear dark or grey nights, we performed the photometric and spectroscopic observations of S5 0716+714 within 10 minutes of each other; thus, the photometry and spectroscopy can be considered to be quasi-simultaneous. During our observations, we successfully obtained photometric data on 42 nights and spectral data on 47 nights. The cadence of spectroscopy is ∼2.08 days. The complete observation information is listed in Table 1.
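As a minimal illustration of the quasi-simultaneity criterion above, the following Python sketch pairs each spectroscopic epoch with the nearest photometric epoch taken within a 10-minute tolerance. The function name pair_epochs and the Julian-date arrays are hypothetical and not part of the actual observing pipeline.

import numpy as np

def pair_epochs(jd_phot, jd_spec, tol_days=10.0 / 1440.0):
    """Pair each spectroscopic epoch with the nearest photometric
    epoch taken within tol_days (10 minutes by default)."""
    jd_phot = np.asarray(jd_phot, dtype=float)
    jd_spec = np.asarray(jd_spec, dtype=float)
    pairs = []
    for j, t in enumerate(jd_spec):
        i = int(np.argmin(np.abs(jd_phot - t)))   # nearest photometric epoch
        if abs(jd_phot[i] - t) <= tol_days:
            pairs.append((i, j))                  # (photometric, spectroscopic) indices
    return pairs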
In order to obtain an accurate magnitude calibration of the target, we always placed several comparison stars in the observed FOV. The comparison stars were presented in Villata et al. (1998), who calibrated their magnitudes in the BVR bands. We found that star2, star3, star5, and star6 are closest to the target (see Figure 3 in Villata et al. 1998). In addition, these four stars were also used in Ghisellini et al. (1997), who provided I-band data. Thus, these stars were selected as comparison stars in our observations. The magnitude of S5 0716+714 is calibrated as follows:

$M = M_o + \frac{1}{N}\sum_{i=1}^{N}\left(M^{i}_{\rm std} - M^{i}\right)$,  (1)

where N is the number of comparison stars, $M^{i}_{\rm std}$ is the standard magnitude of the ith comparison star, and $M_o$ and $M^{i}$ are the instrumental magnitudes of the target and the ith comparison star, respectively. Figure 1 shows the calibrated LCs of S5 0716+714. The calibration errors include two components. The first comprises the Poisson errors of the target and comparison stars, which propagate through Equation (1). The second comes from systematic uncertainties that might be caused by the phase of the moon, weather conditions, etc. We calibrated one of the comparison stars (star3) using Equation (1), and the variability of this star can be regarded as the systematic error. The calibrated magnitudes of S5 0716+714 and star3 in the different bands are listed in Tables 3-6. The systematic error is calculated as the scatter of the star3 light curve,

$\sigma_{\rm sys} = \sqrt{\frac{1}{N_{\rm obs}-1}\sum_{j=1}^{N_{\rm obs}}\left({\rm Mag}_3^{\,j} - \langle {\rm Mag}_3 \rangle\right)^2}$,  (2)

where Mag₃ is the calibrated magnitude of star3. Finally, the errors are ≤1% on most nights; they are also listed in Tables 3-6. All the photometric data were reduced using the standard Image Reduction and Analysis Facility (IRAF) software. After the bias and flat-field corrections, we extracted the instrumental magnitudes of the target and comparison stars with different apertures. To avoid the contamination by the host galaxy mentioned in Feng et al. (2017), we tested two different aperture schemes: dynamic apertures (several times the FWHM) and fixed apertures. For each scheme, we chose 10 different apertures; the aperture radii were 1.5″-8.0″ for the fixed apertures and 1.3-3.5 × FWHM for the dynamic apertures. The results are almost the same for the different apertures. However, the best signal-to-noise ratio (S/N) was obtained with an aperture radius of 6.0″, and we adopted the photometry with this aperture as the final result.
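As a concrete illustration of Equations (1) and (2), a minimal Python sketch of the differential calibration and the check-star scatter follows; the function names and example magnitudes are ours and purely illustrative, not part of the actual reduction pipeline.

```python
import numpy as np

def calibrate_magnitude(m_target_inst, m_comp_inst, m_comp_std):
    """Equation (1): M = M_o + (1/N) * sum_i (M_std^i - M^i)."""
    offsets = np.asarray(m_comp_std, float) - np.asarray(m_comp_inst, float)
    return m_target_inst + offsets.mean()

def systematic_error(mag_check_star):
    """Equation (2): scatter of the calibrated check star (star3),
    i.e., the standard deviation of its calibrated light curve."""
    return np.std(np.asarray(mag_check_star, float), ddof=1)

# illustrative numbers only (instrumental and standard magnitudes)
m_std  = [11.46, 12.43, 13.55, 13.26]   # hypothetical standard magnitudes
m_inst = [14.02, 14.99, 16.12, 15.81]   # hypothetical instrumental magnitudes
print(calibrate_magnitude(16.35, m_inst, m_std))   # calibrated target magnitude
```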
Spectroscopy

Considering the featureless spectra of BL Lacs, the spectroscopic observations were carried out with Grism 3, which provides a relatively low dispersion (2.93 Å pixel⁻¹) and a wide wavelength coverage (3400-9100 Å). We found that the spectrum of Grism 3 can be slightly contaminated by the 2nd-order spectrum at wavelengths longer than ∼7000 Å, where the 2nd-order spectrum has ∼5% of the intensity of the 1st-order spectrum. To avoid this effect, we used a UV-blocking filter that cuts off at ∼4150 Å, so the 2nd-order contamination is rejected shortward of ∼8300 Å. The final spectra cover the observed-frame range 4250-8050 Å. To improve the flux calibration, we simultaneously put the target and star3 in the long slit. This method has been widely used (e.g., Kaspi et al. 2000; Du et al. 2014; Lu et al. 2016) and yields relatively high-quality spectra even in poor weather. To minimize the effects of seeing, we used a wide slit with a projected width of 5.05″. On each night, we also observed a spectrophotometric standard star to calibrate the absolute fluxes of the target and the comparison star.

The raw spectral data were also reduced with IRAF. After correcting the bias and flat field, we calibrated the wavelengths of the two-dimensional spectral images using standard helium and neon lamps. We extracted the spectra of the target and star3 after removing cosmic rays. The extraction aperture radius was 21 pixels (∼5.943″), nearly the same as in the photometry. We calibrated the absolute fluxes of the target and star3 using the spectrophotometric standard star. Note that miscentering of the object in the slit causes a wavelength shift and thus affects the flux calibration; we corrected the shift using the absorption lines in the 6400-7100 Å range. In the end, we re-calibrated the spectra using the template spectrum of the comparison star, obtained by averaging the spectra of star3 taken on nights with good weather conditions. The atmospheric absorption lines were also corrected using the comparison star. Figure 2 shows the mean spectrum and an individual spectrum. We binned the individual spectra to obtain the spectroscopic LCs, with a bin width of 50 Å. The flux and error of each bin were obtained as the mean and standard deviation of the fluxes in the corresponding bin, respectively. We found that the LCs of the individual bins are nearly identical, so only six bins, centered at 4425, 5125, 5825, 6525, 7225, and 7925 Å, are used in the analysis. The six bins are marked in the top panel of Figure 2, and the corresponding LCs are shown in Figure 3.

Fractional Variability Amplitude and Spectral Index

The variability amplitude of each light curve is quantified by the root-mean-square (RMS) fractional variability amplitude F_var (e.g., Rodriguez-Pascual et al. 1997; Edelson et al. 2002; Vaughan et al. 2003), defined as

$F_{\rm var} = \sqrt{\frac{S^2 - \langle\sigma^2_{\rm err}\rangle}{\langle F\rangle^2}}$,  (3)

where $S^2$ denotes the total variance of the N data points in a light curve, $\langle F\rangle$ is the mean flux of the light curve, and $\langle\sigma^2_{\rm err}\rangle$ denotes the mean square measurement error of the N data points:

$\langle\sigma^2_{\rm err}\rangle = \frac{1}{N}\sum_{i=1}^{N}\sigma^2_{{\rm err},i}$.  (4)

Edelson et al. (2002) gave the error $\sigma_{F_{\rm var}}$ on $F_{\rm var}$:

$\sigma_{F_{\rm var}} = \frac{1}{F_{\rm var}}\sqrt{\frac{1}{2N}}\,\frac{S^2}{\langle F\rangle^2}$.  (5)

First, we converted all the photometric data to fluxes; then we measured both the spectral and the photometric variability amplitudes. The variability amplitudes of the different LCs are listed in Table 2. The spectral indices and amplitudes of S5 0716+714 were obtained by fitting the spectra with a power law ($f_\lambda = A\lambda^{-\alpha}$). Figure 2 shows the best fits to the mean spectrum and an individual spectrum. The variability of the spectral index is shown in the top-left panel of Figure 3.

Results and Analysis

During our observations, the amplitudes of variability are ∼40%, as calculated from Equation (3). The photometric and spectroscopic values of F_var are consistent with each other and show that the variability amplitudes of S5 0716+714 on the blue side are consistent with those on the red side within the relevant uncertainties (see Table 2). The bandwidths of the filters are hundreds to thousands of angstroms, so the photometric variability amplitudes are averages over broad bands, whereas the spectral bin width is much narrower than the filter bandwidths. Although the photometric bandwidths and the spectroscopic bins differ, their very similar wavelength coverage should result in consistent F_var values for the photometric and spectroscopic observations.
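The F_var measurement of Equations (3)-(5) is straightforward to implement; the following sketch (our own, using standard NumPy) returns F_var and the Edelson et al. (2002) uncertainty, and returns zero when the excess variance is negative (i.e., no variability beyond the measurement noise).

```python
import numpy as np

def fractional_variability(flux, flux_err):
    """RMS fractional variability amplitude, Equations (3)-(5):
    F_var = sqrt((S^2 - <sigma_err^2>) / <F>^2),
    sigma_Fvar = (1/F_var) * sqrt(1/(2N)) * S^2 / <F>^2."""
    f = np.asarray(flux, float)
    e = np.asarray(flux_err, float)
    n = f.size
    mean_f = f.mean()
    s2 = f.var(ddof=1)        # total variance S^2
    mse = np.mean(e ** 2)     # mean square error <sigma_err^2>
    if s2 <= mse:
        return 0.0, np.nan    # no excess variance detected
    fvar = np.sqrt(s2 - mse) / mean_f
    sigma = (1.0 / fvar) * np.sqrt(1.0 / (2.0 * n)) * s2 / mean_f ** 2
    return fvar, sigma
```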
To compare the variability of different bands, we shifted each photometric LC to the same level based on the magnitude at JD ∼2458545.12 (the median magnitude of each LC). Figure 4 shows the shifted results. Apart from differences in the variability amplitudes at the valleys, the LCs in the different filters are nearly identical. We measured the time delays among the different photometric LCs but did not find any reliable lags. The result of the interpolated cross-correlation function (ICCF; White & Peterson 1994; Wang et al. 2016) between the I and B bands is shown in the bottom-right panel of Figure 5. We also tested the time delays between the photometric and spectroscopic LCs (see Figure 3), and the LCs are consistent with each other. Therefore, the variability in the different wavelength ranges should originate from the same region, and the brightness variations may drive the changes of color and spectral index.
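The lag test just described uses the ICCF of White & Peterson (1994); a minimal sketch of that estimator, assuming plain linear interpolation, equal weighting of the two interpolation directions, and time-sorted input arrays, is given below.

```python
import numpy as np

def iccf(t1, f1, t2, f2, lags):
    """Interpolated cross-correlation function (White & Peterson 1994).
    For each trial lag, LC2 is interpolated onto t1 + lag and LC1 onto
    t2 - lag; the two Pearson coefficients are averaged."""
    t1, f1, t2, f2 = map(lambda a: np.asarray(a, float), (t1, f1, t2, f2))
    lags = np.asarray(lags, float)
    ccf = []
    for lag in lags:
        # keep only points that fall inside the other series' time range
        m1 = (t1 + lag >= t2[0]) & (t1 + lag <= t2[-1])
        m2 = (t2 - lag >= t1[0]) & (t2 - lag <= t1[-1])
        r1 = (np.corrcoef(f1[m1], np.interp(t1[m1] + lag, t2, f2))[0, 1]
              if m1.sum() > 2 else np.nan)
        r2 = (np.corrcoef(np.interp(t2[m2] - lag, t1, f1), f2[m2])[0, 1]
              if m2.sum() > 2 else np.nan)
        ccf.append(np.nanmean([r1, r2]))
    ccf = np.asarray(ccf)
    return ccf, lags[int(np.nanargmax(ccf))]   # the CCF peak gives the lag estimate
```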
We find that the variability of the different colors is similar to that of the photometric LCs (see Figure 5), and the spectral-index variability is similar to that of the LC of each bin (see Figure 3). We tested the correlations between the different colors and magnitudes; Figures 5 and 6 show the results. They indicate that bluer spectra usually occur at brighter phases, i.e., BWB. The Spearman rank correlation between B−I and B is significant, and the other colors are also correlated with B. The BWB trend has often been found in S5 0716+714 (see Section 1) and can be explained with a shock-in-jet model. Larger variability amplitudes tend to occur at shorter wavelengths; thus, the BWB trend is more significant when the separation between the effective wavelengths of the two bands is larger. As mentioned in Section 1, some groups did not find any correlations between colors and magnitudes. The discrepancy might be caused by the following:

1. For some extended sources, contamination by the host galaxy might lead to fake variability because of changes of seeing (e.g., Feng et al. 2017, 2018); as a result, the observed color-magnitude correlation may not be related to the radiation processes. For point sources, a strong host galaxy may dilute the variability amplitude of the AGN and then influence the correlation between flux and spectral index, especially during weak states. S5 0716+714 is a point source, and its host galaxy is more than four times fainter than the target itself (Nilsson et al. 2008); thus, the discrepancy should not be caused by the host galaxy.

2. The accuracy of the photometry may also influence the variability of the colors. Most photometric studies are based on small telescopes (≤1 m). For most BL Lacs, the typical variability amplitudes of the colors are ∼0.05 mag (e.g., Stalin et al. 2006; Hu et al. 2014; Agarwal & Gupta 2015), and the amplitudes might be less than 0.02 mag for some adjacent bands. When the photometric accuracy is worse than 0.01 mag, the color-magnitude correlations will be seriously affected. The accuracy of our photometry is better than 1% on most nights, so even adjacent bands show mild BWB trends (see Figure 6).

During our observations, the entire data set shows that the BWB trend exists in S5 0716+714. The S/N and sampling frequency of the data are high enough; therefore, the BWB trend may be an intrinsic phenomenon of the source. The color-magnitude data roughly obey the BWB trend, but scatter is visible as well (see Figure 6). The variability of the flux density and that of the spectral index are similar to each other (see Figures 3 and 5). Thus, the variability rate of the flux might be linked to the variability of the spectral index. Another possibility is that the variability of the flux density and the spectral index both result from changes in the distribution of the relativistic electrons emitting the observed photons, in which case the relevant variability rates should be correlated. We therefore tested whether a correlation exists between the variability rates of the flux density and the spectral index. The sampling of the observational data is nearly homogeneous, and the variability rates of the flux density $F_\lambda$, the spectral index $\alpha$, and the spectral amplitude $A$ are defined as

$\dot{F}_\lambda = (F_{\lambda,i+1} - F_{\lambda,i})/(T_{i+1} - T_i)$,  (6a)

$\dot{\alpha} = (\alpha_{i+1} - \alpha_i)/(T_{i+1} - T_i)$,  (6b)

$\dot{A} = (A_{i+1} - A_i)/(T_{i+1} - T_i)$,  (6c)

where $F_{\lambda,i}$, $\alpha_i$, and $A_i$ are the flux density, spectral index, and spectral amplitude observed at time $T_i$. Figure 7 shows a positive correlation between $\dot{\alpha}$ and $\dot{F}_\lambda$ for Bin1, and most of the BWB data are distributed in quadrants I and III of the coordinate plane (see Figure 7). At the same time, there are strong positive correlations of $\dot{\alpha}$-$\dot{A}$ and $\dot{F}_\lambda$-$\dot{A}$ for Bin1 (see Table 8). There is also a correlation between the variability rates of B and B−I, and nearly all of the BWB data are distributed in quadrants I and III (see Figure 8). Hereafter, the spectral index-flux density and color-magnitude relations are referred to collectively as the "color-brightness" relation. These correlations indicate that the variability rates of color and brightness are likely dominated by the cooling and acceleration of the relativistic electrons that generate the observed photons and the relevant variability. In Equations (6a)-(6c), the variability rates are calculated from the differences of adjacent data points, which may be considered to originate from the same flare. In order to compare the color-magnitude variability rate correlations with the spectral index-flux density variability rate correlation, the relative variability rate of the flux density, $\dot{F}_\lambda/F_\lambda$, is used. If the flux variability is mainly caused by the variability of the spectrum $F_\lambda = A\lambda^{-\alpha}$, then $\dot{F}_\lambda/F_\lambda$ is a function of $\dot{A}/A$ and $\dot{\alpha}$,

$\dot{F}_\lambda/F_\lambda = \dot{A}/A - \dot{\alpha}\,\ln\lambda$,  (7)

where $\dot{A}/A$ is the relative variability rate of the spectral amplitude. The observational data of $\dot{F}_\lambda/F_\lambda$ and $\dot{\alpha}$ can be fitted linearly with $\dot{F}_\lambda/F_\lambda = B + C\dot{\alpha}$. The Spearman rank correlation test shows a strong positive correlation between $\dot{F}_\lambda/F_\lambda$ and $\dot{\alpha}$ (see Table 8), and the BWB data of S5 0716+714 are mostly distributed in quadrants I and III (see Figure 9); B is close to zero, and C = 3.29 ± 0.23. The Spearman rank correlation analyses also show strong positive correlations of $\dot{\alpha}$-$\dot{A}/A$ and $\dot{A}/A$-$\dot{F}_\lambda/F_\lambda$ (see Table 8). Since three pairwise correlations exist among $\dot{\alpha}$, $\dot{F}_\lambda/F_\lambda$, and $\dot{A}/A$, there should be a joint relation $\dot{F}_\lambda/F_\lambda(\dot{A}/A, \dot{\alpha})$ (see Figure 10); in fact, there is a correlation among $\dot{\alpha}$, $\dot{A}/A$, and $\dot{F}_\lambda/F_\lambda$ at a confidence level of >99.99%: $\dot{F}_\lambda/F_\lambda = 0.001 + 0.012\,\dot{A}/A + 1.839\,\dot{\alpha}$. Since quadrants I and III in Figures 7-9 correspond to BWB behavior, quadrants II and IV should correspond to redder-when-brighter (RWB) behavior, which would likely follow $F_\lambda = D\lambda^{\alpha}$ in the optical band. The spectroscopic and photometric observations show consistent BWB trends in the color-brightness diagrams (Figures 7-9).
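The finite-difference rates of Equations (6a)-(6c) and the linear fit $\dot{F}_\lambda/F_\lambda = B + C\dot{\alpha}$ can be computed as follows; this is a sketch, and we use scipy.stats.spearmanr in place of the original SPEAR routine.

```python
import numpy as np
from scipy.stats import spearmanr

def rate(t, x):
    """Equations (6a)-(6c): x_dot_i = (x_{i+1} - x_i) / (T_{i+1} - T_i)."""
    t, x = np.asarray(t, float), np.asarray(x, float)
    return np.diff(x) / np.diff(t)

def rate_correlation(t, flux, alpha):
    """Correlate alpha_dot with the relative flux rate F_dot/F and fit
    F_dot/F = B + C * alpha_dot; the flux is evaluated at the left point
    of each pair, consistent with the adjacent-difference rates."""
    alpha_dot = rate(t, alpha)
    rel_flux_dot = rate(t, flux) / np.asarray(flux, float)[:-1]
    r_s, p_s = spearmanr(alpha_dot, rel_flux_dot)
    C, B = np.polyfit(alpha_dot, rel_flux_dot, 1)   # slope C, intercept B
    return r_s, p_s, B, C
```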
In order to confirm the Spearman rank test results listed in Table 8, a Monte Carlo (MC) simulation is used to reproduce the parameters presented there. For each pair of parameters, each data array generated by the MC simulation is analyzed with the SPEAR routine (Press et al. 1992), which gives the relevant r_s and P_s, the Spearman rank correlation coefficient and the p-value of the hypothesis test. Considering the errors of X and Y and assuming Gaussian error distributions for both, distributions of r_s and P_s are generated by applying SPEAR to the X and Y data from 10⁴ realizations of the MC simulation. The averages, r_s(MC) and P_s(MC), are calculated from the r_s and P_s distributions, and the standard deviations of the two distributions are taken as the corresponding uncertainties of r_s(MC) and P_s(MC) (see Table 8). These MC results confirm the ordinary Spearman rank test results listed in Table 8; thus, these correlations are reliable.
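The MC confirmation can be sketched as below; as above, scipy.stats.spearmanr stands in for SPEAR, and Gaussian error distributions are assumed as in the text.

```python
import numpy as np
from scipy.stats import spearmanr

def mc_spearman(x, xerr, y, yerr, n_real=10_000, seed=0):
    """Perturb each (X, Y) point with Gaussian noise of its quoted error,
    recompute (r_s, P_s) for every realization, and report the means and
    standard deviations of the two distributions."""
    rng = np.random.default_rng(seed)
    rs, ps = np.empty(n_real), np.empty(n_real)
    for k in range(n_real):
        rs[k], ps[k] = spearmanr(rng.normal(x, xerr), rng.normal(y, yerr))
    return (rs.mean(), rs.std(ddof=1)), (ps.mean(), ps.std(ddof=1))
```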
Discussion and Conclusion

We also tested the BWB trend using the bin fluxes and spectral indices (see Figure 11). This BWB trend is slightly different from the color-magnitude one. The data are fitted with a fifth-order polynomial, and a monotonically increasing trend appears in Figure 11, which shows that the BWB trend might depend on the brightness. Thus, the radiation relevant to the BWB comprises at least two components: one caused by the propagation of shocks in the jet, and an underlying component that is not related to the shock process. If the particle distribution in the jet were homogeneous, the variability of BL Lacs should be caused by disturbances of the magnetic field (e.g., Chandra et al. 2015), precession of the jet (e.g., Camenzind & Krockenberger 1992), inhomogeneous regions in the jet, etc. Variations of the underlying jet radiation may not change the spectral index; however, the BWB trend caused by the shock will be more significant during a weaker phase, while during a brighter phase the underlying radiation might dilute the BWB trend. This possibility needs more observational evidence to be tested. There is a possibly discrepant point, the one in the lower-left quadrant of Figure 11, that might affect the fitting result; we excluded this point and re-fitted the remaining data, and the result is very similar to the previous one. The reason that one flux may correspond to several α values is that the spectral fitting involves two parameters, A and α, and different combinations of A and α may give the same flux. This results in scatter of the BWB data for both the spectroscopic and photometric observations. Although a dispersion of α exists, the BWB trend roughly holds (see the best fits in Figure 11).

The BWB behavior is observed throughout our monitoring epoch with the 2.4 m optical telescope at Lijiang Observatory of Yunnan Observatories and can be explained by the shock-in-jet model. A relativistic shock propagating down a jet will accelerate electrons to higher energies where the shock interacts with a nonuniform region of high magnetic field and/or electron density, likely observed as knots in jets. The shock acceleration causes radiation at different frequencies to be produced at different distances. The synchrotron peak frequency depends on the relativistic electron distribution and the magnetic field, i.e., on the distance behind the shock front, and radiative cooling makes the synchrotron peak decrease in both intensity and frequency. Thus, the frequency dependence of the duration of a flare corresponds to an energy-dependent cooling length behind the shock front, which causes color variations in blazars. Papadakis et al. (2007) proposed that observations during the early rising phase of a flare give a bluer color, while those taken during later phases of the same flare show relatively enhanced redder fluxes.

The synchrotron peak of the SED of S5 0716+714 lies very close to optical wavelengths, and the corresponding broadband SED can be well explained by the synchrotron self-Compton (SSC) and external-radiation Compton (ERC) models, in which the SSC soft photons are the synchrotron photons and the ERC soft photons in the IC scattering are emission from a broad-line region (BLR) and/or infrared (IR) emission from a dust torus (e.g., Liao et al. 2014). No emission lines were detected in the IR, optical, and UV spectra of S5 0716+714 (Chen & Shan 2011; Shaw et al. 2009; Danforth et al. 2013), and this may stem from the fact that thermal emission from the accretion disk is not found in the multiwavelength SED of S5 0716+714 (e.g., Liao et al. 2014). The ionizing radiation from the accretion disk is so weak that broad emission lines would not be observable even if a BLR exists in S5 0716+714; likewise, dust emission would not be observable because of the very weak accretion-disk emission, even if a dust torus exists. The observational frequency band lies on the left of the synchrotron radiation peak, because the observed spectra follow $F_\lambda = A\lambda^{-\alpha}$ with α > 0; this corresponds to the BWB data in quadrants I(+,+) and III(−,−) of the coordinate plane. If the observational frequency band were on the right of the synchrotron peak, we would have $F_\lambda = D\lambda^{\alpha}$ (α > 0), which would correspond to RWB behavior in quadrants II(−,+) and IV(+,−). The first case is seen in our observations; the second is not. BWB trends arise in most BL Lacs (e.g., Villata et al. 2004; Bonning et al. 2012), probably because the synchrotron peaks of most BL Lacs lie in the optical-UV-X-ray bands, so that optical observations usually fall on the left of the synchrotron peak. No or only weak BWB trends are seen in many data sets (e.g., Wierzcholska et al. 2015; Agarwal et al. 2016; Hong et al. 2017), and this may result from the observational frequency ranges spanning the synchrotron peak frequencies. Also, for BL Lacs without color-brightness correlations, the optical variability may be produced by a superposition of variability from different regions in the jet. BL Lacs with BWB trends may instead have a single emitting region of optical variability; the relativistic electrons in a single emitting region can produce the broadband SED containing the synchrotron and IC components (e.g., Liao et al. 2014), and a single emitting region avoids the superposition of variability from different regions and the consequent weakening of the color-brightness correlations. Thus, the variability of brightness, color, and spectral index is likely caused by changes of the underlying relativistic electron distribution that generates the observed radiation in S5 0716+714 as a shock passes through a high-density region in the jet. The passage of the shock produces variability of the SED, including its shape, peak frequency, and peak intensity.

In order to study the short-timescale optical variability of the γ-ray blazar S5 0716+714, quasi-simultaneous spectroscopic and multi-band photometric observations were performed from 2018 November to 2019 March with the 2.4 m optical telescope at Lijiang Observatory of Yunnan Observatories. Given that BWB trends are detected in the photometric observations, what do the optical spectra show, and how do they vary? First, the observed spectra can be well fitted with a power-law $F_\lambda = A\lambda^{-\alpha}$.
Then we studied $\dot{\alpha}$, $\dot{A}$, $\dot{A}/A$, $\dot{F}_\lambda$, and $\dot{F}_\lambda/F_\lambda$ for the spectroscopic observations. We find correlations between these quantities, which are consistent with the BWB trends. Interestingly, α is correlated with $F_\lambda$, and the variations of α lead those of $F_\lambda$; the variations of α indicate variations of the relativistic electron distribution producing these optical spectra. A correlation among $\dot{\alpha}$, $\dot{A}/A$, and $\dot{F}_\lambda/F_\lambda$ is found as well. Colors, magnitudes, color variation rates, and magnitude variation rates were studied for the photometric observations. We also find correlations between these quantities, consistent with the BWB trends; moreover, the color variations lead the magnitude variations. The data of the spectroscopic and photometric observations are mostly distributed in quadrants I(+,+) and III(−,−) of the coordinate plane (see Figures 7-9). The observed BWB behavior may be explained by the shock-in-jet model. Whether BWB trends appear may depend on the location of the observational frequency range relative to the synchrotron peak frequency, e.g., on the left of the synchrotron peak. Both the spectroscopic and photometric observations give F_var ∼ 40%, showing violent variations in S5 0716+714. Moreover, the range of α is similar to that of the colors computed from the magnitudes, and this similarity supports the reliability of the BWB behavior in our observations.

[Table note: X and Y are the relevant quantities of the spectra fitted in Section 2 and those presented in Figures 6 and 8.]
IL-4-Stat6 Signaling Induces Tristetraprolin Expression and Inhibits TNF-α Production in Mast Cells

Increasing evidence has revealed that mast cell-derived tumor necrosis factor α (TNF-α) plays a critical role in a number of inflammatory responses by recruiting inflammatory leukocytes. In this paper, we investigated the regulatory role of interleukin 4 (IL-4) in TNF-α production in mast cells. IL-4 inhibited immunoglobulin E-induced TNF-α production and neutrophil recruitment in the peritoneal cavity in wild-type mice but not in signal transducers and activators of transcription 6 (Stat6)-deficient mice. IL-4 also inhibited TNF-α production in cultured mast cells by a Stat6-dependent mechanism. IL-4-Stat6 signaling induced TNF-α mRNA destabilization in an AU-rich element (ARE)-dependent manner but did not affect TNF-α promoter activity. Furthermore, IL-4 induced the expression of tristetraprolin (TTP), an RNA-binding protein that promotes decay of ARE-containing mRNA, in mast cells by a Stat6-dependent mechanism, and the depletion of TTP expression by RNA interference prevented IL-4-induced down-regulation of TNF-α production in mast cells. These results suggest that IL-4-Stat6 signaling induces TTP expression and, thus, destabilizes TNF-α mRNA in an ARE-dependent manner.

Recent works have revealed that mast cell-derived TNF-α also plays important roles in other inflammatory processes of both innate and acquired immune responses. It has been shown that mast cell-derived TNF-α is involved in the protection against gram-negative bacteria in experimental peritonitis (5, 6), in immune complex-mediated peritonitis (7), in the T cell-mediated delayed-type hypersensitivity reaction (8), and in autoantibody-induced arthritis (9, 10). Although TNF-α is beneficial in some situations, such as bacterial infection (5, 6), an excess of TNF-α seems harmful in other situations (4, 7-10). Therefore, the production of TNF-α should be tightly controlled in mast cells.

IL-4 is a multifunctional cytokine that plays a central role in causing allergic Th2-type immune responses (12, 13). Binding of IL-4 to IL-4R results in the activation of signal transducers and activators of transcription 6 (Stat6) and induces the expression of IL-4-inducible genes, including class II major histocompatibility molecules, the low-affinity IgE receptor (CD23), and the IL-4R α chain (12, 13). IL-4-Stat6 signaling plays a central role in the commitment of CD4+ T cells to the Th2 phenotype and in IgE isotype switching in B cells (12, 13). On the other hand, it has been shown that IL-4-Stat6 signaling enhances IL-10-induced apoptosis of IL-3-dependent mast cells (14) and decreases the expression of IgE receptors on mast cells (15). However, the regulatory role of IL-4 in TNF-α production in mast cells is still largely unknown.

In this paper, we investigated the molecular basis for IL-4-induced regulation of TNF-α production in mast cells. We found that IL-4-Stat6 signaling down-regulated TNF-α production in mast cells by destabilizing the mRNA in an AU-rich element (ARE)-dependent manner. We also found that IL-4 induced the expression of tristetraprolin (TTP), an RNA-binding zinc-finger protein that promotes decay of ARE-containing mRNAs (16-18), by a Stat6-dependent mechanism, and that the depletion of TTP by RNA interference (RNAi) prevented IL-4-induced down-regulation of TNF-α production.
Our results indicate that IL-4-Stat6 signaling induces TTP expression and subsequent ARE-dependent mRNA destabilization, resulting in the down-regulation of TNF-α production in mast cells.

Materials and Methods

Mice. Stat6-deficient (Stat6−/−) mice (19) were backcrossed for more than eight generations onto C57BL/6 mice (Japan SLC) or BALB/c mice (Charles River Laboratories), and littermate WT mice were used as controls. C57BL/6-background mice were used except for the experiments on IgE-dependent late-phase reactions. Stat6−/− mice were obtained from S. Akira and K. Takeda (both from the Research Institute for Microbial Diseases, Osaka University, Osaka, Japan). Mice were housed in microisolator cages under pathogen-free conditions. All experiments were performed according to the guidelines of Chiba University.

Constructs. The COOH-terminally truncated Stat6 mutant at amino acid 673 (673 Stat6) was described previously (20). A constitutively active form of Stat6 (Stat6VT; reference 21), which has two alanine substitutions at amino acids 547 and 548, was generated using a PCR-based site-directed mutagenesis kit according to the manufacturer's instructions (Stratagene). All mutations were confirmed by DNA sequencing.

Cell Culture. BMMCs were prepared and maintained as described previously (20). More than 98% of the cells obtained after 4 wk of culture were morphologically mast cells and positive for c-kit expression. CFTL-15 cells (obtained from M.A. Brown, Emory University School of Medicine, Atlanta, GA), a murine mast cell line, were cultured in RPMI 1640 medium containing 10% heat-inactivated FCS, 50 μM β-mercaptoethanol, 2 mM L-glutamine, antibiotics, and 10% (vol/vol) murine IL-3 transfectant X63 cell-conditioned medium as a source of IL-3 (complete RPMI 1640 medium). X63-IL-3 cells were obtained from H. Karasuyama (Tokyo Metropolitan Organization of Medical Science, Tokyo, Japan).

IgE-dependent Late-phase Reactions. IgE-dependent late-phase reactions in the mouse peritoneal cavity were induced as described previously (22). In brief, mouse anti-DNP IgE (100 μg per mouse) or PBS (as a control) was injected intravenously into BALB/c mice or Stat6−/− mice. Murine IL-4 (1 μg per mouse) or PBS (as a control) was injected intraperitoneally 24 h after the anti-DNP IgE or PBS injection. 1 h later, DNP-HSA (6 μg in 0.2 ml of saline) or saline (as a control) was injected intraperitoneally. Peritoneal lavage was performed with 1 ml of ice-cold PBS 8 h after the DNP-HSA injection. The number of total cells in the lavage fluid was counted with a hemocytometer, and differential cell counts were determined on cytospin cell preparations stained with Wright-Giemsa solution. The amount of TNF-α in the peritoneal lavage fluid was determined by ELISA as described in the next paragraph.

Measurement of TNF-α by ELISA. The amount of TNF-α in the culture supernatant or in the peritoneal lavage fluid was measured using a murine TNF-α ELISA kit (BD Biosciences). The assay was performed in duplicate according to the manufacturer's instructions. The detection limit was 15 pg/ml.

Intracellular Staining for TNF-α. BMMCs were stimulated with IgE engagement or A23187 in the presence or absence of 10 ng/ml IL-4 at 37°C for 6 h, with 2 μM monensin (Sigma-Aldrich) added for the final 4 h to prevent cytokine release. Cells were harvested, washed with PBS, and stained with anti-c-kit PE (2B8; BD Biosciences) for 30 min at 4°C.
Cells were washed with PBS, fixed with IC Fix (Biosource International), permeabilized with IC Perm (Biosource International), and stained with anti-TNF-α allophycocyanin (MP6-XT22; BD Biosciences) for 30 min at 4°C. After washing, cells were analyzed on a FACSCalibur™ using CELLQuest™ software.

TNF-α Promoter Assay. The reporter construct of the TNF-α promoter (pGL3TNF; reference 25), in which the full-length murine TNF-α promoter (26) drives the luciferase gene, was a gift from E.W. Gelfand (National Jewish Medical and Research Center, Denver, CO). CFTL-15 cells were transfected with pGL3TNF in the presence of pRL-TK (Promega) in 800 μl of serum-free RPMI 1640 medium at 960 μF/300 V. Where indicated, the WT Stat6 expression vector (pcDNA3 Stat6), the Stat6VT expression vector (pcDNA3 Stat6VT), or pcDNA3 (as a control) was cotransfected. After the cells were cultured in complete RPMI 1640 medium at 37°C for 12 h, aliquoted cells were treated or left untreated with 10 ng/ml IL-4 for another 12 h. The luciferase activity was measured with the dual luciferase assay system (Promega) according to the manufacturer's instructions. The firefly luciferase activity of pGL3TNF was normalized to the Renilla luciferase activity of pRL-TK. All values were obtained from experiments performed in triplicate and repeated at least three times.

Analysis of mRNA Decay. The pTet-BBB vector, in which a tet-responsive element drives rabbit β-globin transcription, and pTet-BBB ARE TNF, in which the TNF-α ARE is inserted into the pTet-BBB vector downstream of the rabbit β-globin gene, were gifts from A.B. Shyu (The University of Texas Houston Medical School, Houston, TX; reference 28). To measure the rate of mRNA decay, we modified the experimental system developed by Loflin et al. (28). In brief, CFTL-15 cells were first infected with MSCV-Stat6VT-IRES-Thy1.1 retrovirus or MSCV-IRES-Thy1.1 retrovirus (as a control). Infected cells (Thy1.1+ cells) were purified by magnetic cell sorting and transfected with pTet-BBB ARE TNF or pTet-BBB in the presence of the pTet-Off vector, which expresses the tet-responsive transcriptional activator (BD Biosciences and CLONTECH Laboratories, Inc.). G418-resistant clones were selected by limiting dilution, and the presence of pTet-Off as well as the pTet-BBB ARE TNF or pTet-BBB vector in each clone was confirmed by PCR. These clones were cultured in the presence of 100 ng/ml doxycycline (DOX) for 16 h, and DOX was then removed from the culture for 4 h to resume transcription from pTet-BBB ARE TNF or pTet-BBB. DOX was then added back to block further transcription. At the indicated times after the addition of DOX, total RNA was isolated and the amount of rabbit β-globin mRNA was determined by Taqman PCR analysis using an ABI PRISM 7000 (Applied Biosystems). The following primers and fluorogenic probe were used: sense primer, 5′-TCGCTGCAAATGCTGTTATGAAC-3′; antisense primer, 5′-GAATTCTTTGCCAAAATGATGAGA-3′; and probe, 5′-FAM-CTGGACAACCTCAAG-MGB-3′. The levels of rabbit β-globin mRNA were normalized to the levels of glyceraldehyde-3-phosphate dehydrogenase mRNA (Applied Biosystems).

RT-PCR Assay. Total RNA was prepared and RT-PCR was performed as described previously (29). The following PCR primers for TTP cDNA were used: 5′-TCTCTGCCATCTACGAGAGCCTC-3′ and 5′-GCTGATGCTTTGTCGCAGCACATG-3′. RT-PCR for β-actin was performed as a control. All PCR amplifications were performed at least three times with multiple sets of experimental RNAs.
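The Tet-off decay data described above (normalized β-globin/GAPDH signal versus time after DOX addition) can be reduced to a half-life by fitting a single exponential; the sketch below uses SciPy, and the commented numbers are illustrative, not measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def mrna_half_life(t_min, rel_level):
    """Fit N(t) = N0 * exp(-k t) to the normalized mRNA level remaining
    after transcriptional shutoff and return t_1/2 = ln(2) / k."""
    decay = lambda t, n0, k: n0 * np.exp(-k * t)
    (n0, k), _ = curve_fit(decay, np.asarray(t_min, float),
                           np.asarray(rel_level, float), p0=(1.0, 0.01))
    return np.log(2.0) / k

# illustrative values only:
# t = [0, 30, 60, 120, 240]                  # minutes after DOX addition
# level = [1.00, 0.82, 0.65, 0.44, 0.20]     # beta-globin/GAPDH, normalized
# print(f"t1/2 = {mrna_half_life(t, level):.0f} min")
```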
[Figure 1. IL-4 inhibits IgE-induced TNF-α production and neutrophil recruitment through a Stat6-dependent mechanism. (A) IgE engagement induces neutrophil recruitment into the peritoneal cavity. Anti-DNP IgE or PBS (as a control) was injected intravenously into BALB/c mice; DNP-HSA or saline (as a control) was injected intraperitoneally 25 h after anti-DNP IgE sensitization, and 8 h later the numbers of total cells, neutrophils, and mast cells in the peritoneal lavage fluid were determined. Data are mean ± SD from five mice in each group. (B and C) IL-4 inhibits IgE-induced neutrophil recruitment through a Stat6-dependent mechanism. Anti-DNP IgE or PBS was injected intravenously into Stat6-deficient (Stat6−/−) mice or littermate wild-type (WT) mice; 1 μg recombinant IL-4 or PBS (as a control) was injected intraperitoneally 24 h after sensitization, followed 1 h later by intraperitoneal DNP-HSA or saline. 8 h after the DNP-HSA injection, the numbers of total cells, neutrophils, and mast cells (B) as well as the amount of TNF-α (C) in the peritoneal lavage fluid were determined. Data are mean ± SD from four mice in each group. ND, not detectable. *, significantly different from the mean value of the control group; *, P < 0.05; **, P < 0.01.]

Immunoblotting. Whole cell extracts were prepared and immunoblotting was performed as described previously (20). Antiserum to TTP (H-120) was purchased from Santa Cruz Biotechnology, Inc.

TTP Promoter Assay. The murine TTP promoter, either −691 to +59 or −524 to +59, was amplified by PCR using a 2.1-kb fragment of the murine TTP promoter (a gift from P.J. Blackshear, National Institute of Environmental Health Sciences, Research Triangle Park, NC; reference 30) as a template and inserted into the KpnI-XhoI site of the pGL3-basic vector (Promega) to generate TTP-691Luc or TTP-524Luc. TTP-691mtLuc, in which the Stat6 binding site (TTCctaaGAA, from −576 to −567) was mutated to TTTctaaGAA, was generated using a PCR-based site-directed mutagenesis kit (Stratagene). CFTL-15 cells were infected with MSCV-Stat6VT-IRES-Thy1.1 retrovirus or MSCV-IRES-Thy1.1 retrovirus (as a control), and infected cells (Thy1.1+ cells) were purified by magnetic cell sorting. The purified cells were transfected with TTP-691Luc, TTP-524Luc, or TTP-691mtLuc in the presence of pRL-TK at 960 μF/300 V. After the cells were cultured in complete RPMI 1640 medium at 37°C for 24 h, the firefly luciferase activity of TTP-691Luc, TTP-524Luc, or TTP-691mtLuc was measured and normalized to the Renilla luciferase activity of pRL-TK.

Data Analysis. Data are summarized as mean ± SD. Statistical analysis was performed with the unpaired Student's t test; p-values <0.05 were considered significant.
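The unpaired Student's t test used throughout can be reproduced with standard tools; a minimal sketch with illustrative (not measured) cell counts:

```python
from scipy import stats

# illustrative neutrophil counts (arbitrary units), one value per mouse
control = [1.1, 0.9, 1.3, 1.0, 1.2]
sensitized = [4.8, 5.6, 4.1, 5.2, 4.9]

# equal-variance (classic Student's) unpaired t test
t_stat, p_value = stats.ttest_ind(sensitized, control)
print(f"t = {t_stat:.2f}, P = {p_value:.4g}")   # P < 0.05 -> significant
```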
IL-4 Inhibits IgE-induced TNF-α Production and Neutrophil Recruitment through a Stat6-dependent Mechanism. First, we investigated the role of IL-4-Stat6 signaling in IgE-mediated inflammatory responses in vivo. We used a murine model of the IgE-dependent late-phase reaction, in which neutrophil recruitment into the peritoneal cavity is induced upon IgE engagement through the activation of mast cells (22). Mice were passively sensitized with anti-DNP IgE, and IgE was subsequently engaged by an intraperitoneal injection of DNP-HSA. As shown in Fig. 1 A, at 8 h after the DNP-HSA injection, the number of leukocytes in the peritoneal cavity was increased in the mice that had been sensitized with anti-DNP IgE. The number of neutrophils recovered from the peritoneal cavity was significantly increased by the DNP-HSA injection in sensitized mice (Fig. 1 A), whereas the number of mast cells was not affected (Fig. 1 A). IL-4 significantly inhibited IgE-induced neutrophil recruitment in the peritoneal cavity, by 67%, in WT mice without affecting the number of mast cells (n = 4; P < 0.01) (Fig. 1 B). By contrast, IL-4 did not inhibit IgE-induced neutrophil recruitment in Stat6−/− mice (Fig. 1 B). IL-4 also inhibited IgE-induced TNF-α production in the peritoneal cavity in WT mice but not in Stat6−/− mice (Fig. 1 C). These results suggest that IL-4-Stat6 signaling inhibits TNF-α production and neutrophil recruitment during IgE- and mast cell-dependent late-phase reactions.

Next, we examined whether Stat6 activation is sufficient to down-regulate TNF-α production in mast cells, using a constitutively active form of Stat6 (Stat6VT) (21). As shown in Fig. 2, the expression of Stat6VT down-regulated TNF-α production in mast cells. In noninfected populations (GFP− cells), IL-4 did not inhibit IgE-induced TNF-α production even when WT Stat6-expressing cells coexisted in the culture (Fig. 3, f vs. h). These results indicate that the transcriptional activity of Stat6 is required for IL-4-induced down-regulation of TNF-α production.

IL-4 Does Not Inhibit Transcription from the TNF-α Promoter. To further address the molecular mechanisms of IL-4-induced down-regulation of TNF-α production in mast cells, we next examined the effect of IL-4 on TNF-α promoter activity. In this experiment, pGL3TNF (25), which contains the full-length murine TNF-α promoter (26), was used as the reporter construct. CFTL-15 cells were transfected with pGL3TNF, and the effect of IL-4 on A23187-induced transcription of pGL3TNF was examined. Contrary to our expectation, IL-4 did not inhibit A23187-induced transcription of pGL3TNF (Fig. 4), even when WT Stat6 was coexpressed in CFTL-15 cells (Fig. 4). In addition, the expression of Stat6VT did not inhibit A23187-induced transcription of pGL3TNF (Fig. 4), although Stat6VT did inhibit A23187-induced TNF-α production in CFTL-15 cells (Fig. 2 D). These results suggest that IL-4-Stat6 signaling does not inhibit TNF-α promoter activity.

IL-4-Stat6 Signaling Down-regulates TNF-α mRNA Stability by an ARE-dependent Mechanism. The amount of an mRNA is controlled not only by de novo transcription but also by the stability of the mRNA (18, 31). Given that the ARE residing in the 3′UTR of TNF-α mRNA has been shown to be important for the regulation of gene expression (27, 32, 33), we next examined whether the ARE is involved in IL-4-induced down-regulation of TNF-α production in mast cells. We prepared two reporter constructs: pGL3 TNF ARE(+), in which the 3′UTR of TNF-α mRNA was inserted just after the luciferase gene of the pGL3-promoter vector, and pGL3 TNF ARE(−), in which the 69 bp of the ARE were deleted from pGL3 TNF ARE(+) (Fig. 5 A). CFTL-15 cells were transfected with pGL3 TNF ARE(+) or pGL3 TNF ARE(−) in the presence of pcDNA3 Stat6, pcDNA3 Stat6VT, or pcDNA3 (as a control) and stimulated with or without IL-4. As shown in Fig. 5 B, IL-4 inhibited the luciferase activity of pGL3 TNF ARE(+); this down-regulation was more profound when WT Stat6 was coexpressed in CFTL-15 cells (Fig. 5 B). The expression of Stat6VT also inhibited the activity of pGL3 TNF ARE(+), even in the absence of IL-4 stimulation (Fig. 5 B).
In contrast, IL-4 stimulation or the expression of Stat6VT did not inhibit the activity of pGL3 TNF ARE(−) (Fig. 5 B). These results suggest that the ARE residing in the 3′UTR of TNF-α mRNA is involved in IL-4-induced down-regulation of TNF-α production in mast cells.

[Figure 4 legend (fragment): CFTL-15 cells were transfected with pGL3TNF and pRL-TK in the presence of pcDNA3 Stat6, pcDNA3 Stat6VT, or pcDNA3 (as a control). 12 h later, cells were stimulated with or without 500 ng/ml A23187 in the presence or absence of 10 ng/ml IL-4. The luciferase activity was measured with the dual luciferase reporter system another 12 h later. Data are mean ± SD from five independent experiments.]

[Figure 5 legend (fragment): IL-4 inhibits the luciferase activity of pGL3 TNF ARE(+) but not of pGL3 TNF ARE(−) by a Stat6-dependent mechanism. CFTL-15 cells were transfected with pRL-TK and either pGL3 TNF ARE(+) or pGL3 TNF ARE(−) in the presence of pcDNA3 Stat6, pcDNA3 Stat6VT, or pcDNA3 (as a control). 12 h later, cells were stimulated with or without 10 ng/ml IL-4. The luciferase activity was measured with the dual luciferase reporter system another 12 h later. Data are mean ± SD from five independent experiments. *, P < 0.05. **, P < 0.01.]

Because it has been demonstrated that the ARE in the 3′UTR of TNF-α mRNA controls both mRNA stability and translation (27), we next examined the effect of Stat6 activation on the regulation of ARE-dependent mRNA stability using a more direct system established by Loflin et al. (28). CFTL-15 cells infected with the Stat6VT retrovirus or the control retrovirus were transfected with pTet-BBB ARE TNF or pTet-BBB (Fig. 6 A) in the presence of the pTet-Off vector. The pTet-BBB ARE TNF- or pTet-BBB-expressing cells were cultured in the absence of DOX for 4 h to resume transcription of rabbit β-globin from pTet-BBB ARE TNF or pTet-BBB. After further transcription was blocked by adding DOX, the amount of rabbit β-globin mRNA was examined by Taqman PCR analysis (Fig. 6 B). Even in the absence of Stat6VT expression, the decay of rabbit β-globin mRNA was more rapid in pTet-BBB ARE TNF-expressing cells than in pTet-BBB-expressing cells (Fig. 6 B). Furthermore, the decay of rabbit β-globin mRNA was significantly enhanced by the expression of Stat6VT in pTet-BBB ARE TNF-expressing cells, whereas the expression of Stat6VT did not affect the decay of rabbit β-globin mRNA in pTet-BBB-expressing cells (Fig. 6 B). These results indicate that Stat6 activation induces TNF-α mRNA destabilization in an ARE-dependent manner.

Stat6 Activation Induces the Expression of TTP in Mast Cells. Recently, it has been shown that TTP, the prototype member of a zinc-finger family of RNA-binding proteins (16-18), regulates the expression of certain cytokines, including TNF-α, by destabilizing the mRNA in an ARE-dependent manner (16, 27, 33). To determine whether TTP is involved in IL-4-induced down-regulation of TNF-α production, we first examined whether IL-4 induces the expression of TTP in mast cells. Interestingly, the expression of TTP mRNA was induced in WT BMMCs within 1 h after IL-4 stimulation (Fig. 7 A). However, the induction of TTP mRNA was absent in IL-4-stimulated Stat6−/− BMMCs (Fig. 7 B), indicating that IL-4-induced TTP expression requires the presence of Stat6. Enforced expression of Stat6VT also induced TTP mRNA, even in the absence of IL-4 stimulation (Fig. 7 C). IL-4-induced TTP expression was also detected at the protein level in WT BMMCs but not in Stat6−/− BMMCs (Fig. 7 D). We further examined whether the Stat6-mediated TTP expression resulted from direct activation of the TTP promoter by Stat6.
As shown in Fig. 7 E, TTP-691Luc, a reporter construct in which the murine TTP promoter (−691 to +59) drives the luciferase gene, was significantly activated in CFTL-15 cells that expressed Stat6VT but not in control CFTL-15 cells (P < 0.01). In contrast, when the Stat6-binding site was mutated (TTP-691mtLuc), the expression of Stat6VT did not activate the reporter construct (Fig. 7 E). In addition, the expression of Stat6VT did not activate TTP-524Luc, a construct in which the Stat6-binding site was deleted (Fig. 7 E), although TTP-524Luc exhibited a baseline activity equivalent to that of TTP-691Luc (Fig. 7 E). These results suggest that activated Stat6 directly induces transcription from the TTP promoter in mast cells.

[Figure 6 legend (fragment): The clones were cultured in the absence of DOX for 4 h to resume transcription from pTet-BBB ARE TNF or pTet-BBB, followed by the addition of 100 ng/ml DOX to block further transcription. At the indicated times after the addition of DOX, total RNA was isolated and Taqman PCR analysis for rabbit β-globin and glyceraldehyde-3-phosphate dehydrogenase (as a control) was performed. Representative data from five independent experiments are shown.]

TTP Is Required for IL-4-induced Down-regulation of TNF-α Production in Mast Cells. Finally, we examined the effect of TTP depletion on IL-4-induced down-regulation of TNF-α production in mast cells. We prepared several shRNA RNAi constructs and tested their depletion efficiency. As shown in Fig. 8 A, TTP shRNA A significantly inhibited the expression of TTP mRNA in IL-4-stimulated CFTL-15 cells, whereas TTP shRNA B and TTP shRNA C did not inhibit the expression of TTP mRNA at all (Fig. 8 A). We selected several clones that stably expressed TTP shRNA A and found that IL-4-induced TTP expression was severely decreased in A1 and A2 cells (Fig. 8 B). In contrast, Ctrl1 cells, which were stably transfected with a control construct (pSuppressor Neo), expressed a significant amount of TTP mRNA upon IL-4 stimulation (Fig. 8 B). We then compared the effect of IL-4 on A23187-induced TNF-α production in these clones. Interestingly, A1 and A2 cells, but not Ctrl1 cells, were resistant to IL-4-induced down-regulation of TNF-α production (Fig. 8 C). These results suggest that TTP is required for IL-4-induced down-regulation of TNF-α production from activated mast cells.

Discussion

In this paper, we show that IL-4 inhibits TNF-α production from activated mast cells through Stat6-dependent TTP expression. First, we found that IL-4 inhibited TNF-α production in mast cells in vitro as well as in vivo by a Stat6-dependent mechanism (Figs. 1-3). Second, we found that IL-4-Stat6 signaling down-regulated TNF-α mRNA stability in an ARE-dependent manner (Figs. 5 and 6). Third, we found that IL-4 induced the expression of TTP, which promotes ARE-dependent mRNA destabilization (16-18), in mast cells by a Stat6-dependent mechanism (Fig. 7). Finally, depletion of TTP expression by RNAi blocked IL-4-induced down-regulation of TNF-α production in mast cells (Fig. 8). These results indicate that Stat6-induced TTP expression and the subsequent ARE-dependent mRNA destabilization are responsible for IL-4-induced down-regulation of TNF-α production in mast cells. We show that IL-4-Stat6 signaling inhibits TNF-α production from mast cells stimulated not only with IgE engagement (Figs. 1-3) but also with LPS (not depicted).
The antiinflammatory properties of IL-4 as a negative regulator of proinflammatory gene expression are well recognized, especially in monocytes and macrophages (34). Thus, our results indicate that mast cells are also targets through which IL-4 functions as an antiinflammatory cytokine. Our findings are consistent with a previous finding by Matsukawa et al. (35) that TNF-α production in the peritoneal cavity in experimental peritonitis, in which mast cell-derived TNF-α plays a critical role in the protection against bacterial infection (5, 6), is enhanced in Stat6−/− mice.

We found that the transcriptional activity of Stat6 is required for IL-4-induced down-regulation of TNF-α production in mast cells (Fig. 3). In addition, we found that the expression of D685A Stat6, which exhibits a stronger transcriptional activity than WT Stat6 in mast cells (20), enhances IL-4-induced down-regulation of TNF-α production in mast cells (unpublished data). We also found that the expression of the constitutively active Stat6VT down-regulates TNF-α production in mast cells (Fig. 2). Although, in addition to Stat6, IL-4R mediates its responses through the activation of other pathways, including insulin receptor substrate 1/2 (12), our results indicate that Stat6 is essential for IL-4-induced down-regulation of TNF-α production.

[Figure 8 legend (fragment): Cells were stimulated with or without 10 ng/ml IL-4 for 60 min, and the amount of TTP mRNA was evaluated by RT-PCR. Representative data from three independent experiments are shown. (C) IL-4 does not inhibit A23187-induced TNF-α production in A1 and A2 cells. A1 cells, A2 cells, and Ctrl1 cells were stimulated with or without 500 ng/ml A23187 for 24 h in the presence or absence of 10 ng/ml IL-4. The amounts of TNF-α in the supernatant were measured by ELISA. Data are mean ± SD from five independent experiments. ND, not detectable. *, P < 0.01.]

We demonstrate that IL-4-induced down-regulation of TNF-α production results from ARE-dependent mRNA destabilization (Figs. 5 and 6) and not from inhibition of TNF-α promoter activity (Fig. 4). Increasing evidence has shown that the presence of an ARE in the 3′UTR of a transcript is associated with the regulation of mRNA stability (16-18). Indeed, in the case of TNF-α, the importance of ARE-dependent mRNA destabilization has been demonstrated in vitro as well as in vivo (27, 33), although transcription (36), splicing (37), and protein processing (38) are also involved in TNF-α production. Because AREs are found in a number of genes (17, 18), it is plausible that IL-4 may inhibit the expression of some other genes through the destabilization of their mRNAs. This possibility is under investigation in our laboratory.

We show that IL-4-Stat6 signaling induces the expression of TTP in mast cells through Stat6-mediated activation of the TTP promoter (Fig. 7). Together with the finding of IL-4-induced, TTP-dependent down-regulation of TNF-α production in mast cells (Fig. 8), our results therefore indicate that Stat6-induced TTP expression mediates the ARE-dependent destabilization of TNF-α mRNA in mast cells. The importance of TTP in the regulation of TNF-α production has been clearly demonstrated using TTP-deficient (TTP−/−) mice (33, 39, 40). The phenotype of TTP−/− mice, including cachexia, dermatitis, conjunctivitis, and destructive arthritis, can be largely prevented by the neutralization of TNF-α (39), implicating an excess of circulating TNF-α in the pathogenesis of TTP−/− mice.
In addition, it has been demonstrated that macrophages derived from TTP−/− mice produce more TNF-α mRNA than macrophages from WT mice (40). Moreover, TNF-α mRNA has been shown to be markedly stabilized in TTP−/− cells (33), implicating TTP as an important stimulator of TNF-α mRNA decay. It has also been shown recently that TTP recruits the exosome to ARE-containing mRNA and thereby promotes the rapid decay of the mRNA (41). Thus, our finding that IL-4-Stat6 signaling induces the expression of TTP provides a novel insight into ARE-dependent gene regulation in IL-4-rich environments, such as in allergic diseases or parasitic infections.

As mentioned above, our results indicate that IL-4 prevents TNF-α production from mast cells through Stat6-induced TTP expression (Figs. 7 and 8). IL-10, another antiinflammatory cytokine, also inhibits the production of TNF-α through an ARE-dependent mechanism (42). Interestingly, however, the molecular basis for the IL-10-induced inhibition is different from that of IL-4. It has been shown that IL-10-induced down-regulation of TNF-α production does not require the presence of TTP and does not alter mRNA stability (42); instead, it is exerted through the inhibition of p38 mitogen-activated protein (MAP) kinase-mediated translation of TNF-α (42). It has also been demonstrated that p38 MAP kinase phosphorylates TTP protein (43, 44) and that the phosphorylated TTP loses its activity (44). Because it has been shown that IL-10 inhibits p38 MAP kinase (42), IL-10 may also inhibit TNF-α production by preventing the p38 MAP kinase-mediated inactivation of TTP.

In conclusion, we have shown that IL-4-Stat6 signaling induces the expression of TTP in mast cells and thus down-regulates TNF-α production by destabilizing the mRNA in an ARE-dependent manner. Because an excess of TNF-α is involved in many inflammatory diseases, including rheumatoid arthritis (45) and idiopathic inflammatory bowel diseases (46), the modulation of IL-4-Stat6 signaling may be useful as a therapeutic tool for rheumatoid arthritis or inflammatory bowel disease through the inhibition of TNF-α production.
Subwavelength line imaging using plasmonic waveguides

We investigate the subwavelength imaging capacity of a two-dimensional fanned-out plasmonic waveguide array, formed by air channels surrounded by gold metal layers for operation at near-infrared wavelengths, via finite element simulations. High resolution is achieved on one side of the device by tapering down the channel width while simultaneously maintaining propagation losses of a few dB. On the other, low-resolution side, output couplers are designed to optimize coupling to free space and to minimize channel cross talk via surface plasmons. Point sources separated by λ/15 can still be clearly distinguished. Moreover, up to 90% of the power of a point dipole is coupled to the device. Applications are high-resolution linear detector arrays and, by operating the device in reverse, high-resolution optical writing.

I. INTRODUCTION

Linear detector arrays are popular in many applications ranging from spectrometers and particle counters to position encoders and autofocusing systems. The achievable resolution can be limited by a variety of factors, such as the detector pixel size or the magnification of the imaging system, but ultimately for free-space optical methods it is determined by the diffraction limit of light. Vice versa, optical encoding of linear systems like long nanoparticles [1-4] and the detection of fluorescent markers along linear biological systems such as DNA have the same resolution limitations [5]. The diffraction-limited resolution can be improved by using high-index liquids for immersion microscopy, solid immersion lenses [6], high-index scattering media [7], and by superresolution techniques such as STED, STORM, or PALM [8]. An alternative method for achieving high resolution makes use of plasmons in metals. The most recent developments exploit metamaterials with a negative refractive index to form "superlenses" and "hyperlenses" [9]. A more classic method exploits an array of subwavelength metal wires operating in the "canalization" regime [10-13]. In this method, plasmons propagating along metallic wire arrays carry and transform a subwavelength-scale field and can thus be used to obtain high-resolution images, e.g., in near-field microscopy. The price to pay for systems with metals is either high losses, due to the below-cutoff operation of subwavelength waveguides, or unwanted cross coupling between closely spaced nanowires for metal guiding, an effect already studied for 3D plasmonic wire arrays [13]. Another limitation of such a metallic wire array is that the length of the wires should obey the Fabry-Pérot resonance condition, which also limits the operation bandwidth [13]. We show here that the 2D geometry profits from the best of both worlds: by making a stack of thin parallel dielectric waveguides in a metal, the resolution along one dimension can be made subwavelength by the confining effect of the metal, while simultaneously the losses for plasmons polarized perpendicular to these 'ribbons' are small.

In this paper we design and optimize in detail a subwavelength plasmonic line image transformer for near-field imaging. The device consists of a two-dimensional fanned-out plasmonic waveguide array that transmits and magnifies a near field on a pixel-by-pixel basis, creating a discrete image. Because of this we avoid calling the device a "lens" (or "hyperlens") and use the term "image transformer" or "imaging system" instead. We investigate propagation losses, and thus the corresponding maximum device lengths, depending on the waveguide dimensions, and study how short, tapered sections at the input can increase resolution while still maintaining low overall losses and small channel cross coupling. At the device output we introduce a properly tailored output coupler that enhances optical coupling to free space. This is shown to be an effective method to reduce backreflection, thus minimizing Fabry-Pérot effects, and also to suppress channel-to-channel coupling at the output facet via surface plasmon waves. Finally, we show that two point dipole sources separated by λ/15 can be effectively distinguished by such a device.

II. DESIGN OF SUBWAVELENGTH PLASMONIC WAVEGUIDES

Utilizing plasmonic waveguides is an established method for confining light to a subwavelength scale. There are two principal types of plasmonic waveguides, known as metal slab waveguides, consisting of a thin metal stripe within a dielectric or air cladding, and metal-clad waveguides, consisting of a guiding dielectric channel surrounded by metal [14-16]. Although the metal slab waveguide has been shown to support a low-attenuation propagating mode in the form of long-range surface plasmon polaritons, the low attenuation is achieved at the cost of reduced confinement [14,15]. Thus, to avoid cross coupling, two parallel waveguides of this type must be separated by a relatively large distance, of the order of one wavelength. On the other hand, in metal-clad waveguides the light is mainly concentrated in the dielectric channel and extends only a few tens of nanometers into the
We investigate propagation losses and thus corresponding maximum device lengths depending on waveguide dimensions and study how short, tapered sections at the input can increase resolution while still maintaining low overall losses and small channel cross coupling. At the device output we introduce a properly tailored output coupler that enhances optical coupling to free space. This is shown to be an effective method to reduce backreflection, thus minimizing Fabry-Pérot effects, and also to suppress channel to channel coupling at the output facet via surface plasmon waves. Finally we show that two point dipole sources separated by λ/15 can be effectively distinguished by such a device. II. DESIGN OF SUBWAVELENGTH PLASMONIC WAVEGUIDES Utilizing plasmonic waveguides is an established method for confining light to a subwavelength scale. There are two principal types of plasmonic waveguides known as metal slab waveguides, consisting of a thin metal stripe within a dielectric or air cladding, and metal-clad waveguides, consisting of a guiding dielectric channel surrounded by metal [14][15][16]. Although the metal slab waveguide was shown to support a low attenuation propagating mode in the form of long-range surface plasmon polaritons, low attenuation is achieved at the cost of reduced confinement [14,15]. Thus, to avoid cross coupling between two parallel waveguides of this type they must be separated by a relatively large distance, of the order of one wavelength. On the other hand, in metal-clad waveguides, the light is mainly concentrated in the dielectric channel and extends only a few tens of nanometers into the L metal allowing micron-scale propagation with nanometerscale confinement [15,16]. For example, the plasmon penetration depth into gold on a gold-air interface at λ=1550 nm can be estimated as 21.5 2 nm [17], where the dielectric constant of gold at this wavelength is 132 m     [18]. Therefore a relatively thin subwavelength layer of gold (around 50 nm) will be sufficient to optically isolate two air waveguides. For this reason the metal clad type of waveguides is our choice for constructing an imaging device with subwavelength image resolution. Fig. 1 shows a schematic of the plasmonic line imaging system. One could consider it as a subwavelength plasmonic version of a pitch conversion device [19]. A similar planar fanned array of waveguides was proposed for a dark field microscope [20]. Our device consists of a row of tailored air guiding plasmonic waveguides in a gold slab. In the main part of the device (part A in Fig. 1), the waveguides have constant widths but their separation increases gradually from the input towards the output. Curved, S-shaped waveguides are used to ensure that at input and output all waveguides are orthogonal to the device edge and thus exhibit the same input and output coupling efficiencies. At the input (part B), the waveguide widths are tapered down to achieve closer packing and thus higher resolution. At the output (part C), the waveguides are terminated by a set of funnel-like couplers allowing effective field radiation into free space. Below we will discuss the optimization of parts A, B and C of the imaging system to minimize losses and to ensure that there is no cross coupling between the waveguides. By suppressing cross coupling between the waveguides we ensure that each waveguide transmits optical fields independently and acts as an individual "pixel". 
The device thus effectively maps a near field at the high-resolution input to the low-resolution output, thereby allowing image magnification. However, the device can also work in the opposite way, mapping the optical field of a large object onto the subwavelength scale. Note that, as the image is transferred by propagating plasmons, the device operates at TM incident polarization, i.e., at polarization in the image plane of Fig. 1(b). In principle, broadband operation in both infrared and visible light is possible, as the operation principle does not rely on a resonance. Gold is known to be a good metal for operation in the near infrared, while silver exhibits lower absorption in the visible part of the spectrum. In the following we focus on device design for operation in the near infrared, specifically at a wavelength of 1550 nm. Plasmon propagation is usually associated with large losses compared to, e.g., dielectric waveguides. We thus start our discussion by analyzing the optical losses of the fundamental plasmonic mode supported by a subwavelength rectangular air aperture in gold [21] as a function of the aperture height and width, shown in Fig. 2(a). Numerical simulations were performed using a fully vectorial finite-element method (Comsol Multiphysics) for a working wavelength of 1550 nm, assuming a gold refractive index of 0.55 + 11.5i [18]. The calculated mode profile and the propagation length, defined as the distance at which the optical field amplitude decreases by a factor of e, are shown in Fig. 2 as functions of the waveguide dimensions. For a given targeted propagation length (i.e. a contour line in Fig. 2(b)), a range of waveguide sizes can be chosen. Choosing a waveguide with a high aspect ratio (ratio between height and width) allows the transverse dimension to take subwavelength values; in other words, the cost of increasing the resolution in one direction is a reduction of the resolution in the other direction. For example, a propagation length of 50 μm can be achieved with waveguide cross sections of 400 nm × 1 μm or 200 nm × 3 μm. If we target a linear imaging array with high resolution in one direction and low transmission losses, a suitable waveguide geometry for the waveguides in part A (Fig. 1) of the device is thus a thin air slit of thickness 200-300 nm and height >3 μm. With such waveguide widths, and with individual waveguides separated by metal layers sufficiently thick to avoid optical cross coupling, the periodicity of the structure at the input of part A is limited to about 400 nm. A further increase in resolution can only be achieved with thinner waveguides, albeit at the cost of increased losses, as shown in Fig. 2. However, this may be acceptable for the device if the length of such narrow channels is kept as short as possible. We therefore consider a short section of the device at the input (part B in Fig. 1) where the waveguide thickness is tapered down. Specifically, we modelled light propagation in the central channel of a 2 μm long tapered part B, the geometry of which is sketched in the inset of Fig. 3(a), and estimated the losses in the channel depending on the taper dimensions at the input (air channel width d0 and gold layer thickness g0). The channel width at the output of the taper (the boundary between parts B and A) was fixed to d = 250 nm, and the gold layer thickness between channels was g = 250 nm (a total periodicity of 500 nm). Note that the channel width could be significantly reduced by using a high-index dielectric instead of air.
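Since the propagation length L_p is defined above through an e-fold decay of the field amplitude, the power loss accumulated over a distance L is 20·log10(e)·L/L_p ≈ 8.69·L/L_p dB. The helper below is ours; the second L_p value is inferred from the 0.27 dB over 2 μm quoted in the following, so treat the numbers as illustrative:

```python
import math

def loss_db(length_um: float, prop_length_um: float) -> float:
    """Power loss in dB after propagating 'length_um', given the propagation
    length (e-fold decay distance of the *field amplitude*)."""
    return 20 * math.log10(math.e) * length_um / prop_length_um

# 200 nm x 3 um cross section, L_p ~ 50 um (a contour of Fig. 2(b)):
print(f"10 um of part A: {loss_db(10, 50):.2f} dB")       # ~1.74 dB
# d0 = 250 nm channel, L_p ~ 64 um inferred from 0.27 dB over 2 um:
print(f"2 um straight section: {loss_db(2, 64):.2f} dB")  # ~0.27 dB
```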
The high aspect ratio of the waveguide in this geometry allowed us to reduce the complexity of the problem and perform rigorous simulations in only two dimensions; the calculated losses induced by the tapered part B and the fraction of optical power cross-coupled between channels are shown in Fig. 3. The losses depend on the air channel width at the input but are almost independent of the gold layer thickness between channels for thicknesses above 75 nm. However, with reduced gold layer thickness the cross talk between channels increases, as shown in Fig. 3(b) by the fraction of input power cross-coupled to the other channels; when the gold layer thickness is 50 nm or more, less than 0.4% of the power is coupled between channels. A 2 μm long section of a waveguide with a constant width of d0 = 250 nm exhibits a loss of 0.27 dB; if the waveguide thickness is tapered down from 250 nm to 50 nm at the input, ~0.3 dB of additional loss appears. Note that the overall propagation loss of 0.55 dB shown in Fig. 3 corresponds to a huge loss of ~2750 dB/cm, but over the short distance of 2 μm it is nearly negligible. By tapering the waveguide thickness down to 50 nm and keeping a 50 nm separation between the waveguides, a periodicity of 100 nm (≈λ/15) can thus be achieved at the input of the device (input of part B) while still maintaining moderate overall propagation losses below 1 dB and negligible cross coupling. Next, we consider the design of the output port of the device, part C in Fig. 1. Fig. 4(a) shows the electric field amplitude of the propagating wave inside a straight, 250 nm wide waveguide excited by a point dipole source at the input (top of the figure) at a wavelength of λ = 1550 nm. Two problems appear when the plasmonic waveguide is abruptly terminated at the output: (i) back reflection of plasmons at the gold-air interface at the output (bottom) creates a standing wave inside the waveguide and thus makes the transmittance dependent on waveguide length and on wavelength; (ii) a close inspection of the electric field at the output (not visible in Fig. 4(a)) shows the excitation of propagating plasmons at the bottom surface of the device, which will cause cross coupling between waveguides and thus reduce the signal-to-noise ratio of the transmitted image. Both of these problems can be significantly reduced by introducing an output coupler, reminiscent of a microwave horn antenna, at the end of each channel. We modelled light propagation in a straight, 250 nm wide waveguide with a coupler at the output, see Fig. 4(b). The coupler shape is described by two sine-shaped lines, allowing for a smooth design without sharp corners; the symmetric shape of the coupler ensures that the two plasmons propagating along the opposite surfaces remain in phase. The coupler is characterized by two parameters, its length H_out and its output width d_out, which we optimize for minimum back reflection. As the waveguide widens, standard dielectric TM slab waveguide modes come into existence as they cross cutoff, and light propagating in the plasmonic modes of the narrow waveguide can be transferred adiabatically into the symmetric waveguide modes. The first-order mode was observed at the coupler output when its width was around 1 μm, while the formation of a higher-order mode was seen when the output width was above 1.5 μm, see Fig. 4(c). The electric field profile of the first-order mode has a maximum in the middle of the channel and decays at the edges.
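For concreteness, the funnel geometry can be generated parametrically. The raised-sine wall profile below is our assumption; the text specifies only that the two walls are sine-shaped and join the channel of width d to the output width d_out without sharp corners:

```python
import numpy as np

def coupler_width(z, H_out=2.0, d=0.25, d_out=1.2):
    """Full width (um) of the output coupler at position z (um) along its
    length. Assumed raised-sine ramp from d to d_out over H_out, which has
    zero wall slope at both ends (no sharp corners)."""
    z = np.clip(z, 0.0, H_out)
    return d + (d_out - d) * 0.5 * (1.0 - np.cos(np.pi * z / H_out))

for z in np.linspace(0.0, 2.0, 5):
    print(f"z = {z:.1f} um -> width = {coupler_width(z):.3f} um")
```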
As most of the light in the waveguide mode propagates in the middle of the channel, it does not "see" the gold-air interface, and thus the back reflection is reduced. A minimum of the standing-wave amplitude along the waveguide in Fig. 4(b) was found at H_out = 2 μm. For low back reflection and mostly fundamental-mode output, the optimal coupler output width of d_out = 1.2 μm is chosen in the following. By using an output coupler of these optimal dimensions, the amplitude of the standing wave in the narrow part of the channel was reduced by a factor of 6.5. The residual back reflection seen in Fig. 4(b) is caused by some fraction of the light still propagating in the plasmonic modes along the output coupler edges. A significant reduction of the surface waves at the bottom edge of the device is also observed with the use of the output coupler. An alternative solution for reducing back reflection, i.e. optimizing transmission, could utilize plasmonic antennas at the output. Suitable antenna geometries that improve light radiation into free space have already been reported [22,23]: a significant reduction in back reflection can be achieved when resonance and impedance-matching conditions between antennas and waveguides are met. By improving light radiation into free space, antennas would also reduce light coupling to the surface waves along the bottom surface. However, antennas, whose performance is based on a resonance, can limit the operation bandwidth of the device, and they will in general be more challenging to fabricate than the simple output funnel proposed in Fig. 4. A further option for the output coupling would be to employ Si-plasmonic couplers [24] to couple the optical field from the plasmonic waveguides into Si waveguides, which then transmit the light to the detectors [20].

III. DEVICE IMAGING OPERATION AND RESOLUTION

To illustrate the imaging properties of the fanned-out plasmonic waveguide array, we numerically simulated the imaging of two electric dipole point sources radiating at 1550 nm wavelength and separated by a subwavelength distance. The dimensions of the plasmonic waveguides are as follows. At the high-resolution input side (part B in Fig. 1) the air channels are 100 nm thick with a 150 nm gold layer between them (i.e., a periodicity of 250 nm). The tapered part is 2 μm long, along which the waveguides widen to 250 nm and the gold layer thickness becomes 250 nm. In the main part of the device (part A) the fanned-out waveguides of constant thickness are bent to increase the channel separation to 2.5 μm (a total magnification factor of 10). To simplify the geometry, we modelled 7 waveguides forming a symmetric structure and assumed the central straight waveguide to be 8 μm long. There is an output coupler (part C of the device) at the end of each waveguide (2 μm length, 1.2 μm output width). Firstly, we find the optimal distance of a single point source from the input interface at which the coupling of the light into the corresponding channel is maximized. Objects in the plane at this distance in front of the waveguide array can be imaged with the best possible resolution: when the source is placed farther away than this plane, the light spreads out before reaching the waveguide facet and thus couples to more than one waveguide; likewise, a source too close to the input excites plasmonic surface waves at the front of the device, which also couple light to other channels, decreasing the image contrast at the output.
We found that the optimal distance depends on the waveguide width and periodicity at the input: a smaller periodicity requires the object to be placed closer to the device. For example, for the parameters chosen here (100 nm thick waveguides with 150 nm gold layer separation), the optimal object distance is 90 nm. When the waveguide width is reduced to 50 nm and the gold layer thickness to 50 nm, the optimal distance becomes 30 nm. Secondly, we show that optical fields from an emitter can be efficiently coupled into subwavelength plasmonic waveguides. Previous studies showed that, due to the tight confinement, the optical emission can be almost entirely coupled to the propagating plasmon modes of metallic nanowires [25]; this strong enhancement of the emission from a source in the vicinity of the plasmonic wire is due to the Purcell effect. Strong coupling between quantum dots and metallic nanowires has also been demonstrated experimentally [26]. Here, we calculated the coupling efficiency between a point source with a fixed dipole moment and the plasmonic waveguide of the modelled geometry. A coupling efficiency of almost 80% was obtained for a point source positioned at a 90 nm distance from waveguides with 250 nm periodicity at the input, and over 90% efficiency for a source positioned at a 30 nm distance from waveguides with 100 nm periodicity. Fig. 5 shows the power flow of the propagating waves inside the plasmonic imaging waveguides excited by one or two point electric dipole sources. Two point sources of equal amplitude are placed at the optimal distance of 90 nm from the input, centered in front of channels and separated by 250 nm and 500 nm, corresponding to one and two waveguide array periods, respectively. Images created by two coherent in-phase sources are shown in Fig. 5(a), while Fig. 5(b) presents images of two out-of-phase sources. By standard optical microscopy, two point sources separated by these subwavelength distances are indistinguishable: the in-phase sources appear as a single one, while radiation from the out-of-phase sources interferes destructively and nearly perfectly cancels in the far field, as can be seen in the top part of the figures. However, placed at the optimal distance from the waveguide input, the light from each source in all cases couples effectively and independently into a single channel and propagates towards the output, where it is re-emitted into free space, creating interference patterns. At the output side of the imaging system the two sources are separated by well over one wavelength and can thus be clearly resolved. We also performed calculations for a single source placed in between two channels, see Fig. 5(c). In this case the light is coupled to the two neighboring waveguides, generating an output equivalent to the case of two coherent sources positioned in front of two neighboring channels. Thus, two coherent sources can be clearly distinguished from a single source only if they are separated by another, unilluminated channel in between. In other words, the system resolution is given by twice the waveguide periodicity at the input. In the current geometry two coherent point sources separated by 500 nm (<λ/3) can be unambiguously distinguished by the optical system, and by further reducing the input taper periodicity down to 100 nm (see Fig. 3), a resolution of 200 nm (<λ/7.5) can be achieved.
However, if the sources are non-coherent, see Fig. 5(d), the case of two sources separated by the waveguide periodicity can be clearly distinguished from the case of one source in between channels by the absence of an interference pattern at the output (as the waveguide separation at the output is larger than the wavelength). Thus, a resolution twice as fine is achieved in the general case of non-coherent sources, namely λ/15 for the 100 nm input periodicity. Finally, we calculated the total propagation losses of the device. Fig. 6 shows the loss of each channel as a function of the channel number, calculated for three device lengths L = 8, 12, and 20 μm. We also analyzed the losses induced by each part of the channel, parts A, B, and C, separately. As expected, the loss of the output coupler (part C) is the same in each channel and equals 0.14 dB, while the losses of parts A and B gradually increase in channels positioned further away from the center because of their longer lengths. The loss induced by part A also increases in proportion to the device length L. For example, the total loss of the central channel is 1.7 dB and increases up to 4.2 dB in the 9th channel at L = 8 μm, while losses of 2.24 dB and 6.2 dB are induced in the central and 13th channels, respectively, at L = 12 μm. The rapid increase in losses (marked by arrows in Fig. 6) is associated with light leaking to neighboring channels due to sharp channel bending in part A: the channel thickness in part A is constant, so the tilt and bend lead to a narrowing of the gold layer between two channels, which at some point becomes too thin, and cross coupling between channels occurs. This effect therefore limits the total number of channels in the device. By increasing the device length, the allowed number of channels can be increased; for example, at L = 8 μm the device can contain up to 19 channels, while at L = 12 μm the total number of channels can be 27. At L = 20 μm, however, the total channel number is restricted to 41 by the rapid increase of losses in part B rather than in part A, and the total loss of channel 20 reaches 12 dB. For a further increase in channel number, a wider channel would have to be chosen to decrease losses, see Fig. 2(b). Finally, we tested the spectral range of the device of Fig. 5. A priori we expect the device to have a broad operation range, since it does not depend on resonances and since the design is optimized to suppress reflections and interference. Indeed, we found consistent operation in the whole wavelength range from approximately 1 μm to above 2 μm. At wavelengths below 1 μm the device performance is limited by gold material absorption, so a device operating in the visible would have to be based on a different material choice. At wavelengths above 2 μm cross coupling between channels occurs. However, the device dimensions could easily be optimized for operation in the mid-infrared part of the spectrum.

IV. CONCLUSION

In conclusion, we have demonstrated by finite-element simulations that it is possible to make a linear array of air-guided plasmonic waveguides with a fanned-out geometry to create a magnifying line image transformer. The device can effectively couple a large fraction (up to 90%) of the light from small objects or emitters into tapered plasmonic waveguides on the high-resolution side. The waveguides transmit the signal to the low-resolution side via propagating plasmon modes without cross talk and with moderate losses, magnifying the image. The output couplers are designed to enhance coupling to free space. A resolution of λ/15 is achieved.
Experimental realization of the structures investigated here is possible with existing 3D lithography systems [27]. Alternatively, deep reactive ion etching could be used to cut the tapered channels into a gold layer a few micrometers deep. The device can be applied as a high-resolution linear detector or, by operating the device in reverse, for high-resolution optical writing.
Research on Linkage and Path Selection of College Students' Physical Health Quality Improvement System

Background: The status of college students' physical health in China is not optimistic. The specific conditions, influencing factors, related obstacles, and promotion mechanisms of college students' physical health quality have yet to be clarified. Purpose: This study took the perspective of the system linkage of "sports management and student management", focused on the internal and external factors that affect the quality of college students' physical health, and then systematically examined the theoretical misconceptions and operational difficulties surrounding current college students' physical health problems and their quality improvement. Method: Literature collection, experimental comparison, expert interviews, and logical reasoning were used in this study. Result and Conclusion: Featured physical education courses should be introduced appropriately and teaching quality continuously improved; guidance should be strengthened and extracurricular fitness activities actively promoted through system linkage; the balanced integration of academic performance points and physical fitness should be promoted as appropriate to each school; and a multi-pronged approach should be adopted so that these measures work together to improve the quality of physical health.

Introduction

At present, the status of college students' physical health in China is not optimistic. This is mainly manifested in several aspects: the seriously poor aerobic respiratory capacity of college students and the expanding share of obese students; students' lack of sufficient attention to and consciousness of physical health, combined with weak initiative and enthusiasm; their inability to master scientific fitness methods effectively and the lack of effective regulation through follow-up examinations and exercise prescriptions; and the lack of comprehensive, multi-department management of college students' physical health problems and of integrated, continuous strategies for improving students' physical health quality (Dai & Lin, 2012). Previous literature shows that, during the 20 years prior to 2010, the physical fitness of Chinese college students continued to decline. Although in recent years some data (2010) showed that the physical indexes of some age groups had "stopped declining", there was no clear evidence that this was the "inflection point" of the decline in adolescents' physical health, and the related improvement strategies remained somewhat weak owing to a lack of system linkage and effectiveness (Yang, Tang, & Zhang, 2014). Research on improving college students' physical health quality, and on paths to a breakthrough, is therefore all the more urgent. Twenty universities in Hangzhou were investigated in this study, and classified fitness programs were established for them. Based on measurements of the students' baseline physical health condition, and drawing on collected literature, questionnaires, interviews, logical analysis, and other methods, especially the FG/AG comparison experiment and the analysis of its results, this study analysed and evaluated the current measures for improving students' physical health quality from the dual perspectives of "linkage between the physical education and student-work systems" and "integration of sports lessons inside and outside class", explored possible paths for quality improvement, and finally proposes feasible plans for the promotion strategy.
Experimental method

This paper conducted a comparative test, a questionnaire survey, and a comparative analysis on the basis of the FG/AG platform.

Conventional physical education faces practical difficulties and featured physical education adapts to objective demands

3.1.1. Experimental design based on the college students' physical health FG/AG management platform

The US Cooper Institute FG (FITNESSGRAM)/AG (ACTIVITYGRAM) management platform (the physical fitness test and physical activity report) integrates physical health assessment, sports tracking, information feedback, physical education courses, after-school fitness, and reward programs for adolescents; it is an adolescent physical fitness monitoring and physical activity promotion system based on computer technology (Zhang, 2010). This system can statically analyse and evaluate, and dynamically intervene to improve, students' health, and it has gained international recognition (Cheng, 2012). This study used the FG/AG management platform to set up the experimental design. The experimental period was 15 weeks. The subjects were college sophomores, 200 students in total, divided into 5 groups of 40 each. The first group was the control group, which received conventional school teaching. The second group was a quality intervention group receiving regular teaching plus quality exercises. The third group was a technical intervention group receiving conventional teaching plus physical-testing exercises. The fourth group was a mixed quality-and-technical intervention group receiving conventional teaching plus quality and technical exercises. The fifth group was a featured course group, consisting of "featured projects + quality and skills training + extra-curricular fitness program + academic performance point plan + self physical-health check-up and feedback"; it centred on featured items, mainly including quality development, sports games, team formation, and other contents. The extra-curricular fitness program included 1-3 extra-curricular exercise sessions per week. For the reward plan, the students' schools were consulted so that points could be awarded to outstanding students. The self physical-health check feedback refers to entering student information into physical fitness management software: students can make an appointment at any time and obtain their test results, test report, and exercise prescription immediately after testing. After the experiment, the system conducts a comparative analysis.

3.1.2. Featured physical education can effectively promote college students' physical health consciousness

According to the research of Liang Jianxiu, college students' health awareness is an internal motivation that promotes their participation in sports and enhances their athletic ability and physical fitness level; it is therefore of great significance for college students to improve their physical health awareness and form good fitness habits (Liang, 2005). Dou Jian proposed that sports intervention can improve college students' health awareness and health level (Dou, 2013).
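Before turning to the results, the five-arm design described in 3.1.1 can be condensed into a simple structure for reference (our paraphrase of the protocol; the group labels are ours):

```python
# Five-arm, 15-week FG/AG experiment: 200 sophomores, 40 per group.
DESIGN = {
    "weeks": 15,
    "n_per_group": 40,
    "groups": {
        "control":   ["conventional teaching"],
        "quality":   ["conventional teaching", "quality exercises"],
        "technical": ["conventional teaching", "physical-testing exercises"],
        "mixed":     ["conventional teaching", "quality exercises",
                      "technical exercises"],
        "featured":  ["featured projects", "quality and skills training",
                      "extra-curricular fitness (1-3 sessions/week)",
                      "academic performance points",
                      "self physical-health check-up and feedback"],
    },
}

assert DESIGN["n_per_group"] * len(DESIGN["groups"]) == 200
```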
As Table 1 shows, after the experiment the students' attention to their own health increased significantly: they were more willing to acquire fitness knowledge, and their acceptance of physical education increased. The proportion of participants who fairly or fully understood the definition of "health" increased from 27% to 72%; the proportion with a good grasp of physical health knowledge increased from 21% to 61.5%; and awareness of the fitness-guidance role of physical education likewise rose to 61.5%. More than three-quarters of the participants gained some understanding of physical fitness, began to pay attention to its concept, definition, connotation, and denotation, and began to distinguish the similarities and differences between "health" and "physical health". This indicates that the featured curriculum achieved the intended results and had a positive impact on participants' health consciousness. Yu Lan et al. (Yu, 2013) reported that the sports health knowledge and awareness of all experimental groups improved after an exercise intervention of 15 weeks or more.

Table 1. Participants' understanding of health-related concepts (each cell: % before / % after the experiment, across five response levels; the highest level is "completely known"):
Understanding of the health definition: 11.0/3.0, 51.5/23.5, 10.5/1.5, 14.5/37.5, 12.5/34.5
Understanding of the health standards: 42.5/17.0, 31.0/24.5, 10.5/3.5, 10.5/43.5, 5.5/11.5
Understanding of health knowledge: 13.5/4.5, 50.5/33.0, 9.5/5.5, 19.5/38.5, 7.0/18.5
Understanding of the fitness guidance of the sports course: 12.0/6.0, 38.5/13.5, 23.5/13.5, 14.5/38.5, 11.5/28.5
Understanding of the health improvement of the sports course: 7.5/6.0, 54.0/21.0, 16.5/11.5, 15.5/41.0, 6.5/20.5

Table 2 indicates that the featured courses had the most significant effect on participants' physical fitness awareness. The proportions of people who were very willing to understand and recognise their own physical health were 93.5%, 63%, 66.5%, 69.5%, and 44.5% for the featured, quality, technical, mixed, and control groups, respectively; the proportions who knew physical health knowledge fairly well were 76%, 45.5%, 71.5%, 69%, and 58.5%, respectively. There was no significant difference between the featured course group and the technical intervention group or the quality-technical mixed intervention group, but there was a significant difference from the quality intervention group and the control group, which showed that the featured physical education curriculum had a differentiated impact on the participants. In terms of generating strong interest, the proportion in the featured course group reached 92.5%, far exceeding the proportions of 56.5%, 72.5%, 66.5%, and 47.5% in the other four groups, a significant difference. Fu Dong et al. put forward a similar view: there was a positive correlation between the overall assessment of college students' sports attitudes and their physical health (r = 0.485, P < 0.01); the correlation between sports cognition and physical health was not significant; and there were moderate to strong correlations among college students' sports emotions, behavioural intentions, and physical health (0.40 ≤ r < 0.80, P < 0.05 or P < 0.01) (Dong, 2014). The conclusion was that cultivating a correct and active sports attitude is conducive to improving college students' physical health, and that attention should be paid to stimulating and cultivating students' sports emotions and improving their sports behaviour intentions.
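The significance of a reported Pearson correlation such as r = 0.485 can be verified with a standard t-test once the sample size is known. The sketch below is illustrative only; the sample size n is our assumption, as it is not reported for that study:

```python
from scipy import stats

def corr_p_value(r: float, n: int) -> float:
    """Two-sided p-value for a Pearson correlation r from a sample of size n,
    using t = r * sqrt((n - 2) / (1 - r**2)) with n - 2 degrees of freedom."""
    t = r * ((n - 2) / (1.0 - r ** 2)) ** 0.5
    return 2.0 * stats.t.sf(abs(t), df=n - 2)

# r = 0.485 as reported; n = 200 is an assumed sample size for illustration.
print(f"p = {corr_p_value(0.485, 200):.1e}")  # far below 0.01
```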
3.1.3. The featured course can moderately improve college students' physical health quality

An advanced physical education teaching mode should centre on cultivating students' positive emotional experience, focus on cultivating students' sports awareness, interests, and habits, guide students to participate in and enjoy sports, and enhance physical fitness, so as to achieve the goals of physical education and of educating people (Liu & Zheng, 2015). According to the experimental design, featured course teaching is highly interesting, puzzle-like, competitive, and challenging; the classroom learning atmosphere is strong and team interaction is frequent, which greatly stimulates the participants' interest in learning and passion for sports. According to Table 3, the participants of the featured course group changed significantly in terms of physical form, physical function, and athletic quality. The increase in endurance quality was greatest, speed and agility improved slightly, and the rate of improvement for girls was higher. This was mainly manifested in the following aspects. Both male and female students lost weight, indicating that the sports were of suitable intensity and volume to achieve a corresponding weight loss. Males' body fat rate dropped from 17.82 ± 4.46 to 15.99 ± 5.73, a decline of nearly 2 percentage points, and females' dropped from 29.10 ± 3.87 to 26.87 ± 3.59, a decline of 2.23 percentage points, larger than that of males, indicating that similar sports played a more obvious role for females than for males. Vital capacity increased markedly, by 58 mL (3687 to 3745) for males and 59 mL (2318 to 2377) for females, essentially the same, and the vital capacity index also increased. The step-test score of boys increased from 52.83 to 54.27, an increase of 1.44, and that of girls from 47.29 to 48.19, an increase of 0.9; by this measure, males' improvement was more pronounced. Reaction speed improved for both sexes: reaction time fell by 0.01 s for males (0.39 to 0.38) and by 0.03 s for females (0.48 to 0.45). The decrease for females was three times that for males, indicating that the experiment was more effective in improving girls' reactions and that female college students have more room for improvement than males. As for grip strength, the gains for males and females were 1.36 and 1.59 respectively, females' growth being slightly higher, and the grip-strength-to-weight ratio changed accordingly. According to Table 4, the physical fitness test results of the featured course group, especially explosive power and strength, improved greatly; the body fat rate decreased, the physical form was optimised, and the vital capacity body mass index shifted from passing to good. Specifically, the training of aerobic and anaerobic respiration, bouncing ability, and strength-flexibility coordination in the featured courses effectively improved the students' quality indicators: the ratio of height to weight (2.809 for males, 3.056 for females), body fat percentage (15.99 for males, 26.87 for females), vital capacity (2377 for females), vital capacity body mass index (60.97 for males, 45.42 for females), step-test index (54.27 for males, 48.19 for females), and the dominant ratio for reaction time (7.37 for males, 9.74 for females).
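The before/after changes quoted above reduce to simple differences; a short worked example, with the values copied from the text, is given below:

```python
# Featured-course group, pre/post values as quoted in the text:
# name: (male_pre, male_post, female_pre, female_post)
measures = {
    "body fat rate (%)":   (17.82, 15.99, 29.10, 26.87),
    "vital capacity (mL)": (3687, 3745, 2318, 2377),
    "step-test score":     (52.83, 54.27, 47.29, 48.19),
    "reaction time (s)":   (0.39, 0.38, 0.48, 0.45),
}

for name, (m0, m1, f0, f1) in measures.items():
    print(f"{name}: male {m1 - m0:+.2f}, female {f1 - f0:+.2f}")
```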
In the other four groups, some indicators of the quality intervention group and of the technical intervention group slightly prevailed, and the differences in height and weight were not significant. The comprehensive index of the quality-technology mixed intervention group was superior to that of the quality intervention group and the technical intervention group. The control group ranked first in the standing long jump, and its females' grip strength also ranked near the front, indicating that the experiment had a greater impact on the improvement of students' cardiopulmonary function but did not substantially improve bouncing ability and had less impact on absolute power. Dou Jian's research results are similar to the above.

Extra-curricular fitness plans and academic rewards can accelerate the improvement of college students' physical health

The school, the family, and the community form a well-recognised model for intervening in adolescents' physical health: the family is the guarantee of adolescent health promotion, the school is its foundation, and the community is its link (Liu, 2008). For undergraduates, who study at school for a long time, universities can provide the most important external driving force for health promotion. Therefore, the role of universities and their management departments in improving college students' physical health quality is indispensable and irreplaceable. The academic achievement point reward program, administered by the college student management department, is not only innovative but also, in the current situation, decisive for promoting university students' fitness, and it even serves a supervision and management function. Under this arrangement, the featured course group carried out collective extra-curricular fitness several times a week, led by the teacher or a designated group leader; in the later period, the students organised collective extra-curricular fitness by gender and interest, and their self-consciousness gradually increased. According to the questionnaire survey, participants in this group generally felt that extra-curricular fitness brought physical and mental pleasure. According to the experiments and interviews, their physical fitness and skills generally improved, and all their results were better than those of the other four groups. At the same time, this group's concepts of sports, curriculum, health, and lifelong sports improved, and they became more interested in sports. These students learned pleasure from hardship, felt that they had grown through uncertainty and made progress through effort, and their fitness consciousness became self-driven. According to Table 5, participants in this group generally expressed that they would like to continue with the experimental intervention, which means that the intervention had a positive and continuing effect. After the experiment, participants in each group held positive views on sports, curriculum, extra-curricular fitness, rewards, and self-examination and feedback, but with significant differences in the proportions: the main indicators of the featured course group were clearly leading.
While the control group, the quality group, the technical group, and the quality-technical mixed group also differed, their significance was lower than that of the featured course group, which means that the featured course is rich in content, effective in its measures, and highly stimulating: substantial progress was made in both exercise awareness and functional enhancement. The 40 students in the featured course group recognised the academic reward program. Although they showed a certain degree of utilitarianism, with the development of activities and the establishment of teams, the advantages of team activities and the sense of collective ownership prompted the participants to shift from "external promotion" to "intrinsic drive", and a "Matthew effect" gradually emerged. It can be concluded that extra-curricular fitness programs with an additional academic reward program had a positive impact on the physical health of college students, with significant actual results.

The improvement of college students' physique and health quality needs comprehensive management of "in-class and extra-curricular activities"

According to Table 6, the 20 universities under investigation have combined physical education with physical fitness testing. On the whole, the physical fitness test system for college students is well established and the test methods are flexible. The physical education integrated with the physical fitness test, and its quality, have become the key to the development of the system, and they are also important parts that need to be improved (Zong & Cai, 2008). According to Liu Haiyuan's investigation, regular physical education curricula that are short on interest and attraction can no longer satisfy college students' current needs for physical health and quality improvement, which lays a foundation for the demand for featured physical education (Faqiang & Fan, 2014). According to the investigations, the college students' physical health monitoring systems of the 20 universities in Hangzhou lack communication and correlation with students' extra-curricular fitness, and also lack linkage with the student management systems. As a result, the former leads to insufficient physical activity and exercise intensity in college students' scientific fitness, while the latter is reflected in the failure to effectively supervise and ensure the quality of physical fitness. Wu Zongxi proposed that the promotion of college students' physical health by college student management systems, especially through the development of extra-curricular fitness activities, lacks strong support and related systems (Xie, 2013), and "system linkage" between sports departments and student management departments is also rare. Therefore, improving the paths and strategies for college students' physical health urgently requires breakthroughs in both practice and theory. In addition to providing targeted featured physical education courses, the cooperation between academic achievement points and extra-curricular fitness programs can effectively promote the improvement of students' physical fitness quality. This requires the linkage and integration of college sports courses "in and out of class".
The linkage management system between college sports and student work has yet to be built

According to Du Faqiang's research and the results of this study, the physical condition, function, and exercise quality of the subjects show a large gap with the requirements of the "Standards", and the physical health of students is not optimistic (Du, 2014). According to interviews and surveys, female college students' fear and avoidance of the 800-metre run and other endurance events are very common and representative, and constitute an important obstacle to improving female college students' physical fitness. According to Table 7, the height, weight, waist-to-hip ratio, and body fat ratio of all subjects were within the normal range, with weight at the lower limit of normal, between low body weight and normal body weight. The body fat rate of male college students is close to 18, which is in the overweight range; that of female college students is as high as 28.48, close to obese. Although the subjects' body weight was normal, their body fat rate was in the overweight range, close to the edge of obesity, indicating a serious imbalance between body fat and muscle and a lack of aerobic exercise; middle- and long-distance exercise should be strengthened to increase muscle mass and reduce fat weight. Analysis of the functional data shows that the average levels of college students' vital capacity index and step test only reached the lower limit of the "Standards", indicating that college students generally have poor cardiopulmonary function. Female college students are below the pass line in terms of speed, strength, and endurance, which are serious shortcomings. Chen Peiyou pointed out that the diversity of the subjects and objects of management determines the complexity of a physical fitness promotion system (Chen, 2014); a single organisational structure obviously cannot meet the needs of diversified and specialised physical health management, and in terms of organisational strategy a mixed matrix organisational structure is more conducive to the work. The study also pointed out that expert-led projects, linear organisation, and a rectangular organisational structure for physical health management can actively intervene in the physical health of adolescents, which lays a theoretical foundation for the development of featured sports courses and extra-curricular fitness programs. In the reality of colleges, students' lack of enthusiasm for after-class fitness and lack of health awareness are obviously not problems that can be solved by a physical education teacher or a sports department alone. From the analysis of jurisdictional authority and affiliation, the responsibility for guiding the ideological and political work of college students and the jurisdiction over extracurricular activities belong to the college student management department and the counsellor system rather than to the sports department, so addressing these problems requires both the sports department and the student management system. Therefore, improving the quality of college students' physical health calls for the construction of a "sports-study interaction" management system spanning college sports departments and student-work systems.
Self-examination feedback on the comprehensive effects of college students' physical health

Knowing their own physical health status can effectively guide college students' after-class fitness and thus benefit their physical health. According to the experimental design, the self physical-health check and feedback of the featured course group mainly means that all subjects' information is entered into the management software so that a subject can book a test at any time and, after testing, immediately obtain the test report and a professional exercise prescription. According to Table 8, subjects in the featured course group showed a greater improvement in their views on sports: in particular, subjects who before the experiment believed that sports are fairly important shifted to the view that sports are very important, and the change in views on self physical-examination feedback was similar.

Properly introducing featured sports courses and continuously improving teaching quality

Featured sports courses have played a certain role in promoting college students' health consciousness and health level. Suggestions: first, priority should be ensured in arranging the compulsory courses of the first and second years of physical education; second, students' "four-self" requirements in choosing courses must be met, namely "self-selected teachers, self-selected semester, self-selected time, and self-selected projects"; third, course content should be adjusted appropriately according to learning interests, with a focus on "sports games", and an appropriate number of featured sports items and teaching contents should be designed that can arouse interest in exercise and meet students' various needs; fourth, the content of optional courses should be as rich as possible to satisfy needs; finally, according to specific circumstances, efforts should be made to establish elective courses in the third year, with flexible and diverse contents, so that students' participation and enthusiasm are improved.

Strengthening guidance, system linkage, and actively promoting extra-curricular fitness activities

Although the extra-curricular fitness program is an externally imposed exercise regime, it can gradually become internalised as the subjects' self-conscious behaviour; especially in the later period, students exercised seriously even without supervision. This fitness program can serve as a continuation of classroom teaching, effectively supplementing exercise volume and intensity. However, the premise of conscious involvement in extra-curricular fitness is that students are awarded grade points for their performance; without this, the continuity of fitness is doubtful. Therefore, in promoting the extra-curricular fitness program, four aspects of work must be done well: first, pay close attention to whether the premise of the experimental design is feasible; second, check whether the hardware and software can meet the relevant requirements; third, strictly control the experimental process to ensure quality; and fourth, gradually explore the establishment of a long-term cooperation mechanism for after-class fitness.
The appropriate integration of academic grade points and physical fitness should be promoted according to each school's situation

Judging from the experimental process, students were fully committed and their enthusiasm was unprecedentedly high when academic grade points were combined with extra-curricular fitness activities. However, Xie Hongguang pointed out that physical health beliefs do not affect healthy behaviours through a simple direct relationship, but influence behavioural intentions and habits in a progressive and cumulative manner (Xie, 2013). The reward of academic grade points is driven by external forces and has not yet fully formed into a conscious inner motivation. After the experiment, the subjects exhibited a certain withdrawal behaviour, and as time passed the number of withdrawers gradually increased, which means that conscious exercise had not yet become fixed and continuous in either psychology or action and still needs to be strengthened and consolidated. The proposal consists of two points: first, communication and coordination with student management departments should be done well in an effort to reach consensus; second, overemphasis on the award of academic grade points may foster speculative psychology among participants, resulting in a non-benign mentality and ultimately misleading the results.

A multi-pronged approach working together to improve physical health

In summary, the FG/AG platform can be used as one path to improve the physical health of college students. Its "featured sports program" can be used in physical education classrooms to stimulate students' enthusiasm and interest in sports; "quality and skills training" can improve sports quality and technical skills in the short term; the "extra-curricular fitness program" can effectively supplement classroom teaching, with academic grade points used to guide and motivate students to exercise after class; and "self physical-health check and feedback" keeps students abreast of the static status and dynamic changes of their physical health. However, as a new approach, the platform still needs to be grounded in the actual situation of domestic universities; after local trials, it should be continuously adjusted and then steadily rolled out once a stable effect is achieved.
Structural, Magnetic, and Optical Properties of Mn2+ Doping in ZnO Thin Films

MnxZn1−xO thin films (x = 0%, 1%, 3%, and 5%) were grown on corning glass substrates using the sol-gel technique. A single-phase hexagonal wurtzite structure was confirmed using X-ray diffraction. Raman analysis revealed the presence of Mn through an additional vibrational mode at 570 cm−1. The surface morphology of the samples was observed by scanning electron microscopy, which suggested that the grain size increases with increasing Mn concentration. The optical bandgap increases with increasing Mn concentration, seen as a significant blueshift in the UV-visible absorption spectra. The alteration of the bandgap was verified by I-V measurements on the ZnO and Mn-ZnO films. The various functional groups in the thin films were recorded using FTIR analysis. Magnetic measurements showed that the MnxZn1−xO films are ferromagnetic, as Mn induces a fully polarised state. The effect of Mn2+ ion doping on MnxZn1−xO thin films was investigated by extracting various parameters such as the lattice parameters, energy bandgap, resistivity, and magnetisation. The observed coercivity is about one-fifth of earlier published values, which indicates that the structure is magnetically soft, with low dielectric/magnetic loss, and can hence be used for ultra-fast switching in spintronic devices.

Introduction

In recent years, ZnO-based metal oxide semiconductors have drawn significant attention due to their versatility and tuneable optical, electrical, and magnetic properties. These materials can be synthesised in different physical forms such as nanoparticles, single crystals, and thick and thin films [1][2][3][4][5]. Among these, thin films form an important and useful structure for various applications such as gas sensors, optoelectronic devices, transparent electrodes for solar cells, and catalysts [6][7][8][9]. They are also being considered as potential candidates in new frontiers of research such as spintronics [10]. Diluted magnetic semiconductors (DMSs), generally obtained by substituting a small amount of a transition metal (TM) into oxide semiconductors, are expected to show ferromagnetism at room temperature due to the interaction between the spins of the carriers and the localised moments of the TM impurities [11][12][13]. The ferromagnetic behaviour of TM-doped ZnO thin films has been extensively studied due to their potential to control both spin and electric charge, which makes these materials suitable for spintronic applications at or above room temperature. In particular, Mn can be used as a transition metal dopant for obtaining ferromagnetic ordering due to its large magnetic moment [14,15]. However, very few reports have studied the influence of Mn2+ doping on the magnetic properties in detail [16][17][18]. Gallegos et al. studied the structural, electronic, and magnetic properties of ZnO and Mn-doped ZnO theoretically by first-principles calculations based on density functional theory (DFT) [19]. The hexagonal wurtzite crystal structure of ZnO, with Zn2+ and O2− ions tetrahedrally coordinated and stacked alternately along the c-axis, exhibits a large exciton binding energy (60 meV) and high optical transparency (bandgap 3.3 eV). Mn-doped ZnO has recently been reported by many researchers to show electrical, magnetic, and optical functionality simultaneously [20][21][22][23]. MnxZn1−xO shows room temperature ferromagnetism, which makes it a promising candidate for spintronic applications.
However, there are few reports emphasising the ferromagnetic and electrical properties of Mn-doped ZnO thin films. TM-doped ZnO thin films have been synthesised by various deposition techniques such as spray pyrolysis, reactive ion-assisted evaporation, molecular beam epitaxy, RF magnetron sputtering, chemical vapour deposition, and sol-gel [24][25][26][27][28][29]. Among these, the sol-gel technique is the simplest and most cost-effective method and enables the formation of different structures from the same material composition by changing the experimental conditions [30]. In this work, we investigated the effect of Mn2+ ion doping in ZnO thin films, varying the manganese concentration from 0% to 5%. The structural, morphological, optical, and magnetic properties of these thin films were explored using X-ray diffraction (XRD), Raman spectroscopy, Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), UV-visible spectrometry, and vibrating sample magnetometry (VSM), respectively. The present investigation revealed that the crystallite size and optical bandgap increase with increasing Mn concentration in the ZnO thin films. Room temperature ferromagnetism was observed in the Mn-doped ZnO thin films, making them potential candidates specifically for spintronic applications. I-V measurements demonstrated that the energy separation between the uppermost part of the valence band and the unoccupied states in the conduction band increases with Mn doping.

Experimental Procedure

The sol-gel technique was employed for the synthesis of 0%, 1%, 3%, and 5% Mn-doped ZnO thin films; 2-methoxyethanol (DME) and monoethanolamine (MEA) were used as the solvent and stabiliser, respectively. All the chemicals were of analytical grade and used without further purification. Zinc acetate dihydrate (0.5 M) was used as the starting material. The zinc precursor was weighed on a 6-digit-accuracy balance into four parts of equal mass. Manganese acetate dihydrate was added to them as the dopant, with the atomic mole ratio of Mn to Zn varied at 0%, 1%, 3%, and 5%, respectively. These four mixtures were then dissolved at room temperature in a homogeneous mixture of MEA and DME; the MEA served as a stabiliser, and the molar ratio of MEA to zinc acetate was maintained at 1:1. A homogeneous, clear solution was obtained by stirring the final mixture vigorously at 70 °C for 2 h using a magnetic stirrer; it was then aged at room temperature for one day to obtain the desired sol. For thin film deposition, corning glass substrates (5 × 5 cm²) were first cleaned with de-ionised water by ultrasonication for 5 min, followed by ultrasonication in isopropanol for 10 min. The cleaned substrates were then coated with the homogeneous solution using a spin coater at 3000 rpm for 30 s. After each spinning step, the substrates were preheated at 300 °C for 5 min to evaporate the solvent in the films; the spin-coating and preheating processes were repeated 20 times to form uniform films. The estimated thickness of the films under the present conditions was around 10 µm [31]. The coated films were annealed at 500 °C in a muffle furnace under an air atmosphere for 2 h and then cooled to room temperature.
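For quick reference, the deposition procedure above can be condensed into a single parameter set (our restatement of the recipe, not an additional protocol):

```python
# Sol-gel spin-coating recipe for MnxZn1-xO films, condensed from the text.
RECIPE = {
    "precursor": "0.5 M zinc acetate dihydrate in DME, MEA 1:1 to zinc acetate",
    "dopant": "manganese acetate dihydrate (Mn:Zn = 0, 1, 3, 5 at.%)",
    "aging_h": 24,
    "spin": {"rpm": 3000, "time_s": 30},
    "preheat": {"temp_C": 300, "time_min": 5},
    "coating_cycles": 20,
    "anneal": {"temp_C": 500, "time_h": 2, "atmosphere": "air"},
    "approx_thickness_um": 10,
}

for step, value in RECIPE.items():
    print(f"{step}: {value}")
```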
The structural properties of the films were investigated at room temperature using a Rigaku Miniflex 600 X-ray diffractometer with monochromatic Cu Kα radiation (λ = 1.5418 Å) at 40 kV and 15 mA, in the diffraction angle (2θ) range of 20°-60°, with a step size of 0.02° and a scanning speed of 5°/min. Match software, available online, was used for the phase analysis of the XRD data. Raman spectroscopic analysis was performed at room temperature to investigate the vibrational modes, using an Enspectr Raman spectrometer in the range 150-800 cm−1 with a green laser source (532 nm, 300 mW). The surface morphology of the films was studied using scanning electron microscopy (SEM; JSM-IT200) at 10,000× magnification; for the SEM images, a thin gold layer (~30 nm) was coated on the film surface and the samples were then mounted on the SEM holders. Fourier transform infrared (FTIR; Thermo Scientific NICOLET iS50, ABX) transmittance measurements were performed in the range 400-4000 cm−1 with a resolution of 2 cm−1. A T90+ UV-visible spectrophotometer was used to measure the optical absorption and transmittance spectra, and the optical bandgap was derived using Tauc plots of the transmission spectra. Magnetic measurements performed with a vibrating sample magnetometer (VSM; Cryogenic Ltd., London, UK) were used to obtain the saturation magnetisation, coercivity, and remanence; the magnetic field was applied parallel to the film surface (in-plane geometry). I-V measurements were performed using a PalmSens4 with the applied voltage swept between ±5 V; prior to the measurements, silver point contacts were made on the top surface of the films.

Results and Discussion

The X-ray diffraction patterns of the ZnO and Mn-doped ZnO samples annealed at 500 °C are shown in Figure 1. The XRD patterns confirm that all the films are single phase, with no additional impurity phases of Mn metal or its oxides. From the diffraction peaks, it is clear that all the samples show a wurtzite structure with a (002) preferred orientation, matching JCPDS file No. 79-206 for ZnO. The Mn-doped samples also showed other reflections at (100) and (101) of negligible intensity. The crystallite sizes of all the samples were determined using the Debye-Scherrer formula [32],

D = kλ / (β cos θ),

where k denotes the Scherrer constant of proportionality, taken as 0.9, λ denotes the wavelength of the incident X-ray radiation (for Cu Kα, λ = 1.5418 Å), β is the full width at half maximum of the peak of interest, and θ is the Bragg angle. The crystallite size increased from 36.2 nm to 51.1 nm as the Mn content increased to 5%. This can be attributed to the larger ionic radius of Mn compared with Zn, which causes the grain size to increase. The lattice parameters a and c of the wurtzite structure of ZnO can be calculated from the hexagonal plane-spacing relation [33],

1/d² = (4/3)(h² + hk + k²)/a² + l²/c².

For the (002) plane, the lattice parameters were observed to be a = 3.245 Å and c = 5.227 Å, similar to reported work [33,34]. We observed that the lattice parameters increase with Mn doping in the ZnO thin films. Table 1 shows the lattice parameters obtained for the ZnO and 5% Mn-doped ZnO thin film samples.
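The crystallite-size and lattice-parameter arithmetic follows directly from the two relations above. The sketch below is ours, with illustrative peak positions near the standard ZnO (100) and (002) reflections and an FWHM chosen to reproduce the ~36 nm crystallite size (not the measured raw data):

```python
import numpy as np

LAMBDA = 1.5418  # Cu K-alpha wavelength, Angstrom

def scherrer_size(beta_deg, two_theta_deg, k=0.9):
    """Crystallite size D = k*lambda/(beta*cos(theta)) in Angstrom."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(beta_deg)  # FWHM in radians
    return k * LAMBDA / (beta * np.cos(theta))

def hexagonal_lattice(two_theta_100, two_theta_002):
    """a and c (Angstrom) from the (100) and (002) peaks via Bragg's law and
    1/d^2 = (4/3)(h^2 + h*k + k^2)/a^2 + l^2/c^2."""
    d100 = LAMBDA / (2.0 * np.sin(np.radians(two_theta_100 / 2.0)))
    d002 = LAMBDA / (2.0 * np.sin(np.radians(two_theta_002 / 2.0)))
    return 2.0 * d100 / np.sqrt(3.0), 2.0 * d002   # a, c

a, c = hexagonal_lattice(31.77, 34.42)   # illustrative peak positions
print(f"a = {a:.3f} A, c = {c:.3f} A")
print(f"D = {scherrer_size(0.23, 34.42) / 10:.1f} nm")  # ~36.2 nm
```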
A comparison of our experimental results with the first-principles simulated data shows that the ZnO bond length was practically unaffected by the incorporation of the Mn atom. A comparison of the cell parameters shows that Mn-doped ZnO exhibits only a small change with respect to the ZnO system, in accordance with the DFT calculations [19]. In the first-principles calculations [19], the a parameter increased by about 0.3% (from 3.12 Å to 3.13 Å), while the c parameter was reduced by 0.6%; our experimental results show that a increased by about 0.25% and the c parameter was reduced by 0.07%. Figure 2 shows the scanning electron microscope images of all the deposited thin films. It is clearly visible from the SEM images that the films were uniformly deposited on the Corning glass substrates and that the grain size increased with increasing Mn concentration in the ZnO films. The Raman spectra demonstrate the effect of the Mn concentration on the vibrational and microscopic properties of the MnxZn1−xO thin films. Figure 3 shows the Raman peak at 570 cm⁻¹, which corresponds to the A1(LO) mode arising from Zn vacancies generated to be filled by the Mn dopant. A significant redshift was observed in the Mn-doped ZnO thin films, as the A1 vibrational mode of wurtzite ZnO normally occurs at 580 cm⁻¹; this could be the result of oxygen vacancy and zinc interstitial defect states. The peak centred at 1092 cm⁻¹ corresponds to the glass substrate and to the C-C vibration mode of the organic radical (CH3COO−) of the MnxZn1−xO thin films. It can further be related to the combination of Raman scattering and luminescence [35-37]. From Figure 3, it is obvious that the characteristic modes centred at 570 cm⁻¹ and 1092 cm⁻¹ were sharp and intense.
The OH vibrational mode, due to absorption of water molecules at the sample surface, appeared as an absorption band around 3400 cm⁻¹. The peak at 2250-2400 cm⁻¹ corresponded to CO2 modes. The absorption peak at 1540 cm⁻¹ corresponded to the antisymmetric stretching mode of C=O bonding. The antisymmetric stretching Mn-O vibration mode occurred at 880 cm⁻¹ [33]. The observed FTIR results are in good agreement with earlier reported works [38,39]. The UV-visible absorption spectra of the MnxZn1−xO thin films were recorded in the wavelength range 300-900 nm. All the films were transparent, with small absorption in the visible region of the electromagnetic spectrum, and the absorption peaks showed a redshift with the addition of the Mn dopant. Figure 5A shows the corresponding transmittance spectra of the deposited films.
It is clear from the transmittance spectra that the films had a smooth reflecting surface, which can be attributed to low scattering loss at the surface and to the appearance of interference fringes originating from light reflected between the air-film and film-substrate interfaces. The decrease in transmittance with increasing Mn concentration can be attributed to the formation of lattice defects at ZnO interstitial sites. To obtain the optical bandgap of the deposited thin films, we used Tauc's plot analysis [40-42], αhν = A(hν − Eg)ⁿ, where α is the absorption coefficient (which depends upon the thickness of the film), h is the Planck constant, A is a proportionality constant, and n = 1/2 for direct bandgap semiconductors. The absorption coefficient can be estimated as α = ln(1/T)/d [43], where T is the transmittance and d is the film thickness. The Tauc plots for all the films are shown in Figure 5B, and the optical bandgap was determined by extrapolating the linear part of each curve to the energy axis, as shown by the dashed line in Figure 5B. The bandgap increased from 3.23 eV to 3.28 eV with increasing Mn doping concentration. This blueshift of the bandgap can be attributed to the replacement of Zn²⁺ ions by Mn²⁺ ions in the ZnO lattice, which widens the energy separation between the uppermost layer of the valence band and the unoccupied states in the conduction band [44]. Further, Shaaban et al. reported that the bandgap of Mn-doped ZnO thin films increases because the bandgap of MnO (4.2 eV) is greater than that of ZnO [45]. The bandgap also increases with increasing particle size, which can be explained as bulk defects inducing a delocalisation of the conduction band edge and creating vacancies in the electronic energy levels, causing a blueshift of the absorption spectra [46].
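As a worked illustration of the Tauc procedure just described, the sketch below estimates α from the transmittance and fits the linear edge region of (αhν)² against hν. The film thickness, fit window, and any spectra passed in are assumptions for the example, not values taken from this study.

```python
import numpy as np

def absorption_coefficient(T, d_cm):
    """alpha = ln(1/T)/d from fractional transmittance T and thickness d (cm)."""
    return np.log(1.0 / np.asarray(T, float)) / d_cm

def tauc_direct_gap(wavelength_nm, T, d_cm, fit_window_eV):
    """Direct-gap Tauc analysis: (alpha*h*nu)^2 is linear in h*nu near the
    absorption edge; extrapolating the fit to zero gives Eg."""
    hv = 1239.84 / np.asarray(wavelength_nm, float)  # photon energy in eV
    y = (absorption_coefficient(T, d_cm) * hv) ** 2  # (alpha*h*nu)^2 for n = 1/2
    lo, hi = fit_window_eV
    m = (hv >= lo) & (hv <= hi)                      # restrict to the linear region
    slope, intercept = np.polyfit(hv[m], y[m], 1)
    return -intercept / slope                        # Eg where (alpha*h*nu)^2 = 0
```

Applied to spectra like those of Figure 5A with a fit window just above the edge (for instance 3.2-3.4 eV), the returned intercept mirrors the graphical extrapolation shown by the dashed line in Figure 5B.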
The magnetic hysteresis loops of the MnxZn1−xO thin films are shown in Figure 6. The graphs demonstrate that with the addition of Mn (1%, 3%, and 5%), the films became ferromagnetic in nature. The undoped ZnO film was observed to be nonmagnetic, in agreement with its null magnetic moment due to the absence of unpaired electrons. With increasing Mn content in the ZnO films, the ferromagnetism increased, which may be due to an increase in oxygen vacancies resulting in bound magnetic polarons [17,18]. The introduction of Mn into ZnO induced a magnetic moment, owing to the five unpaired 3d electrons in the outermost shell. Our experimental findings are consistent with the behaviour of the density-of-states calculations [19]. The saturation magnetisation, coercivity, and remanence of the MnxZn1−xO thin films are tabulated in Table 2. The saturation magnetisation increased with increasing Mn concentration in the ZnO thin films. A closer look into the magnetism of the undoped and Mn-doped ZnO reveals that the undoped ZnO had an equal number of spin-up and spin-down states. When Mn was incorporated into ZnO, a difference of five electrons between the states appeared in the Mn-ZnO system, causing an increase in magnetic moment of 5 µB, as obtained through DFT calculation [19]. We observed the coercivity to be one-fifth of the reported data [16], which implies that our thin films are much softer magnetically, indicating much lower dielectric/magnetic losses. The observed coercivity can be correlated with a much narrower ferromagnetic linewidth and hence much smaller Gilbert damping, so these films can be used in faster magnetic switching devices [47]. Our results are compared with previously reported works in Table 3.
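The loop parameters tabulated in Table 2 can be read off numerically from the measured M(H) data. The helper below is a minimal sketch under the assumption of a single, roughly monotonic branch of the loop sampled densely enough for linear interpolation; it is not the VSM vendor's analysis routine.

```python
import numpy as np

def loop_parameters(H, M, plateau_fraction=0.1):
    """Estimate saturation magnetisation (high-field plateau average),
    remanence (M at H = 0) and coercivity (|H| at M = 0) from one
    ascending branch of a hysteresis loop."""
    H = np.asarray(H, float)
    M = np.asarray(M, float)
    order = np.argsort(H)                  # np.interp needs an increasing abscissa
    H, M = H[order], M[order]
    n = max(1, int(plateau_fraction * H.size))
    Ms = 0.5 * (abs(M[-n:].mean()) + abs(M[:n].mean()))  # average both plateaus
    Mr = abs(np.interp(0.0, H, M))         # remanent magnetisation at zero field
    Hc = abs(np.interp(0.0, M, H))         # coercive field; assumes M monotonic in H
    return Ms, Mr, Hc
```

Running this on each doped film's ascending branch would yield the saturation magnetisation, remanence, and coercivity trends discussed above.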
Figure 7 represents the I-V measurements of the undoped and Mn-doped ZnO thin films with an applied voltage of ±5 V. It is observed from the plot that, for a given voltage, the current decreased with increasing Mn doping, corresponding to an increase in resistivity. This is consistent with the increase in bandgap with increasing Mn concentration, as the energy separation between the uppermost layer of the valence band and the unoccupied states in the conduction band increased with Mn doping in the ZnO thin films [44,48].

Conclusions

We successfully fabricated MnxZn1−xO thin films on Corning glass substrates by the sol-gel technique. The structural analysis confirmed the wurtzite structure of the ZnO films with preferred orientation along the c-axis. The crystallite size and grain size were observed to increase with increasing Mn concentration. The UV-visible analysis revealed a blueshift with increasing Mn dopant, which may be due to oxygen vacancy and zinc interstitial defect states. The optical bandgap increased with increasing Mn concentration; this increase was verified through I-V measurements, which showed an increase in resistance in the Mn-ZnO films. Typical ferromagnetic behaviour was observed for the MnxZn1−xO thin films, owing to the five unpaired 3d electrons in the outermost shell. The enhancement of magnetisation in Mn-doped ZnO thin films indicates their potential application in spintronic devices, and the low coercivity makes them suitable for magnetically ultrafast switching devices.
Management of Partial-Thickness Tears of the Distal Bicep Tendon: Evaluation of 111 Patients With 10-Year Follow-up Background: There is a paucity of research on the management of partial-thickness tears of the distal bicep tendon, and even less is known about the long-term outcomes of this condition. Purpose: To identify patients with partial-thickness distal bicep tendon tears and determine (1) patient characteristics and treatment strategies, (2) long-term outcomes, and (3) any identifiable risk factors for progression to surgery or complete tear. Study Design: Case-control study; Level of evidence, 3. Methods: A fellowship-trained musculoskeletal radiologist identified patients diagnosed with a partial-thickness distal bicep tendon tear on magnetic resonance imaging between 1996 and 2016. Medical records were reviewed to confirm the diagnosis and record study details. Multivariate logistic regression models were created using baseline characteristics, injury details, and physical examination findings to predict operative intervention. Results: In total, 111 patients met inclusion criteria (54 treated operatively, 57 treated nonoperatively), with 53% of tears in the nondominant arm and a mean follow-up time after surgery of 9.7 ± 6.5 years. Only 5% of patients progressed to full-thickness tears during the study period, at a mean of 35 months after the initial diagnosis. Patients who were nonoperatively treated were less likely to miss time from work (12% vs 61%; P < .001) and missed fewer days (30 vs 97 days; P < .016) than those treated surgically. Multivariate regression analyses demonstrated increased risk of progression to surgery with older age at initial consult (unit odds ratio [OR], 1.1), tenderness to palpation (OR, 7.5), and supination weakness (OR, 24.8). Supination weakness at initial consult was a statistically significant predictor for surgical intervention (OR, 24.8; P = .001). Conclusion: Clinical outcomes were favorable for patients regardless of treatment strategy. Approximately 50% of patients were treated surgically; patients with supination weakness were 24 times more likely to undergo surgery than those without. Progression to full-thickness tear was a relatively uncommon reason for surgical intervention, with only 5% of patients progressing to full-thickness tears during the study period and the majority occurring within 3 months of initial diagnosis. Delayed diagnosis is quite common, with some patients experiencing symptoms for 8 to 10 months before formal diagnosis. 8,19,23 Previous reports on partial-thickness tears of the distal bicep tendon have been largely limited to case reports or small series. 5,9,15,16 A recent systematic review identified 19 studies reporting on the operative management of partial-thickness distal bicep tears. Despite the wide scope and inclusive methodology of this review, the authors were only able to report on 86 patients treated operatively and 5 patients treated nonoperatively for partial distal bicep tendon tears. 4 Additionally, because there are little to no long-term data published on treatment outcomes of partial-thickness tears, the fate of these injuries is not well understood. While risk factors for complete distal bicep tendon tear, such as smoking and elevated body mass index (BMI), have been reported, 7,13,18 risk factors for progression of a partial-thickness tear to surgical intervention or full-thickness tear have not been fully elucidated.
Therefore, the primary purpose of this study was to determine the long-term outcomes of patients with magnetic resonance imaging (MRI)-confirmed partial-thickness distal bicep tendon tears. More specifically, we sought to describe (1) patient characteristics, (2) treatment strategies, (3) long-term outcomes, and (4) any identifiable risk factors for progression to surgery or tear completion.

Study Design

This study was determined to be exempt from institutional review board approval. Patients who had been diagnosed with partial-thickness distal bicep tendon tear confirmed on MRI between 1996 and 2016 were identified by a musculoskeletal radiologist (A.C.J.) through review of institutional radiographic records. Patients were included if they had complete medical records and had been seen at least once on follow-up after their diagnosis. Patients with inflammatory arthritis and enthesitis, polytrauma, or incidental findings without clinical symptomatology attributable to the bicep tendon were excluded. Patients who were documented to have MRI evidence of a full-thickness distal bicep tendon tear were also excluded. Those who met criteria were then cross-referenced in the Rochester Epidemiology Project (REP) to minimize the risk of missing patients. The REP is an electronic collection system of complete medical records involving a US-based geographic cohort of >600,000 patients, all of whom were residents in Olmsted County, Minnesota, and neighboring counties in southeast Minnesota and western Wisconsin. The methodology and generalizability of the REP have been previously described in detail. 21,22 Medical records were reviewed to confirm the diagnosis and obtain patient characteristics and details relevant to the study, including patient symptomatology and injury characteristics. Patient characteristics recorded included age at diagnosis, sex, BMI, laborer status, dominant hand involvement, anabolic steroid usage, and associated chronic medical conditions. Components of the physical examination included the hook test, range of motion (ROM), supination/flexion weakness, tenderness to palpation (TTP), description of injury, and pain at initial consult. Treatment strategies were nonoperative and operative, with the latter defined as surgical intervention at any time point. Surgical details, such as repair methodology and incisional technique (1 vs 2 incisions), were also recorded. Outcomes of interest for all patients included both physical and work function after injury. This included flexion, extension, supination, and pronation ROM, in addition to flexion and supination strength through manual testing. These data were gathered from documented physical examination findings during review of patient medical records. Progression from partial-thickness to full-thickness tear and progression to surgery were also investigated. Return-to-work status, whether time was missed, and exact time missed from work were obtained from the medical records.

Statistical Analysis

Collected data were stored in Microsoft Excel (2010; Microsoft Corp) and analyzed with JMP Pro (Version 14.1.0; SAS Institute). Patient characteristics are presented with descriptive statistics using means, medians, percentages, and 95% confidence intervals of the mean when appropriate.
After analyzing data for parametric/nonparametric assumptions, continuous variables were compared between groups utilizing Student t tests or Wilcoxon rank-sum tests, and categorical variables were similarly compared utilizing chi-square analysis or Fisher exact tests. Statistical differences in the survival analysis were reported with a log-rank P value and a proportional odds ratio (OR). P values < .05 were considered to represent statistical significance. Multivariate logistic regression models were created using baseline characteristics, injury details, and physical examination findings to predict progression to operative intervention; an illustrative code sketch of this pipeline follows the Results below. Predictor screening with a bootstrap forest model with 100 trees was used to identify predictors for inclusion in the initial logistic regression model. The following top 10 predictors identified by predictor screening were included: supination weakness; age at initial consultation; TTP at the distal bicep tendon; sex; sensation of pop, rip, or tear at injury event; flexion weakness; BMI; supination motion; supination pain; and hook test results. After initial inclusion, the predictors with the lowest log worth were removed from the model in a stepwise fashion until the model stabilized. The final model included 3 variables: supination weakness, age at initial consultation, and TTP.

Study Population

Overall, 308 patients were evaluated with distal bicep tears, confirmed by MRI, during the study period. A total of 207 individuals demonstrated full-thickness tear on MRI and were thus excluded; the final cohort included 111 MRI-confirmed partial-thickness bicep tears, of which 54 were treated operatively and 57 nonoperatively (Table 1).
The mean age at evaluation was 53.6 years (range, 15.1-102.6 years); 80% of the patients were men, 56.7% worked as laborers, and 53.2% injured their nondominant arm. Men were significantly younger at evaluation than women (51.5 vs 62.5 years; P < .001). No patients had undergone previous ipsilateral biceps surgery. The mean follow-up time from initial consultation to most recent clinical contact was 10.1 years for all patients, with 95% (105/111) having >24-month follow-up. The mean follow-up time after surgery was 9.7 ± 6.5 years. Six (5%) patients experienced progression from a partial- to a full-thickness tear at a mean of 34.9 months (range, 0-205 months) from the time of their index partial-tear diagnosis. Notably, the majority (5/6) progressed within 3 months of the initial diagnosis. Importantly, 4 of these 6 progressions to full tear were not identified until the time of surgery, which was recommended for persistent pain despite nonoperative management. Only 2% of patients were indicated for surgical intervention for documented preoperative progression to a full-thickness tear. All 6 patients were men with pain at evaluation, and 5 of the 6 reported a pop, rip, or tear at the time of injury; none of these patients reported a new inciting event after the initial consultation. These 6 patients had a mean age of 46.7 ± 8.1 years (range, 31-53 years), and 50% were laborers. All 6 patients underwent operative repair, despite 5 (83.3%) initially attempting nonoperative management. One patient developed scarring of the tendon remnants to the median nerve, which was discovered during an initial repair attempt at an outside hospital; this patient underwent successful repair. For the 54 patients treated operatively, the median time from injury to surgery was 100 days (IQR, 22-256 days), with 46 patients undergoing operative repair within 1 year (Table 2). Compared with nonoperatively treated patients, operatively treated patients were more likely to be men (68% vs 93%; P < .001); experience a sensation of popping, ripping, or tearing at the time of injury (P = .009); and demonstrate the following physical examination findings: abnormal hook test (P = .018), TTP at tendon insertion (P = .001), weakness of elbow flexion (P = .035), and supination weakness (P = .011). More than half of the operatively treated patients (59%) underwent repair with a double-incision technique, and cortical button fixation was the most common method of fixation (35% of patients) (Table 2). Postintervention mechanical and functional outcomes are recorded in Table 3. Patients in the nonoperative group demonstrated a mean extension-flexion arc of 1° to 139° and a mean pronation-supination arc of 80° to 80° at final follow-up. They were less likely to miss time from work compared with patients treated operatively (12% vs 61%; P < .001) and averaged fewer days missed (30 vs 97 days; P < .016). Patients treated nonoperatively demonstrated no statistically significant differences in ability to return to work without limitations, motion loss, flexion strength, or supination strength when compared with patients treated operatively at long-term follow-up (Table 3).
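The odds ratios reported above come from the predictor-screening-plus-logistic-regression pipeline described in the statistical methods, which was run in JMP. The sketch below approximates that pipeline in open-source tooling; scikit-learn stands in for JMP, the patient table `df` and its binary `surgery` outcome column are hypothetical, and the forest-importance screening is an analogue of, not identical to, JMP's bootstrap-forest procedure.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def screen_and_fit(df: pd.DataFrame, outcome: str = "surgery", n_keep: int = 10):
    """Rank candidate predictors with a 100-tree forest (analogous to the
    bootstrap-forest screening), keep the top n_keep, then fit a logistic
    model whose exponentiated coefficients read as (unit) odds ratios."""
    X = df.drop(columns=[outcome]).astype(float)  # assumes numeric/dummy-coded inputs
    y = df[outcome].astype(int)
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    keep = X.columns[np.argsort(forest.feature_importances_)[::-1][:n_keep]]
    model = LogisticRegression(max_iter=1000).fit(X[keep], y)
    return pd.Series(np.exp(model.coef_[0]), index=keep).sort_values(ascending=False)
```

On data resembling this cohort, such a model would surface supination weakness, age at initial consultation, and TTP as the dominant predictors, mirroring the final 3-variable model described in the methods.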
DISCUSSION

The primary findings of this long-term study of partial-thickness distal bicep tendon tears were that only 5% of patients progressed to full-thickness tears at a mean of 35 months, approximately 50% of patients ultimately required surgery, and supination weakness (OR, 24.8) at initial evaluation was the strongest predictor for progression to surgery. Overall, patients were most often men, were laborers in their 50s, and were without significant associated risk factors. Just over 50% of tears occurred in the nondominant extremity. Regardless of treatment (surgical or nonoperative), patient outcomes were very favorable, with nearly all patients achieving full motion, adequate strength, and the ability to return to work without modifications. Partial-thickness distal bicep tendon tear characteristics and treatment strategies are incompletely reported in the literature to date. The true incidence of partial-thickness distal bicep tendon tears is unknown, as many cases may not be formally evaluated. The age at evaluation for a partial tear has been previously reported to be in the mid-50s, which is consistent with our study. 10,23 This patient cohort included 20% female patients, which is a significantly higher proportion than has been previously reported in patients with distal bicep injury. Although nonsurgical treatment has been considered a staple of care for partial distal bicep tendon tears, there are mixed data on the overall efficacy of nonsurgical interventions. 2,3,23 Historical treatments with plaster splinting and/or bracing are reported but not commonly utilized in contemporary practice. 5,9 Successful surgical treatment of partial-thickness tears has been reported in numerous previous case series and reports and is further validated by our study. 6,11,12,17 Our cohort that was treated nonoperatively achieved good clinical and functional results, adding considerably to the small number of reported patients with nonoperative management in the current literature. Our study reported on the outcomes of a larger number of partial tears treated (N = 111), and the long-term outcomes were quite favorable for those treated nonoperatively and those treated surgically. In the current literature, little is known about the factors that are associated with the risk of progression from a partial tear to a complete tear or the need for surgical intervention. Because of the limited number of patients (n = 6), this study was unable to draw statistical conclusions regarding risk factors for progression to full-thickness tears. However, it is notable that only a small minority (5%) of partial-thickness tears progressed to full-thickness tears, and only 2 of those that did progress exhibited signs or symptoms concerning for progression before surgical intervention. This information should be helpful in clinical discussion and decision-making with patients. There were several risk factors correlated with progression to surgery for this cohort. These included supination weakness (OR, 24.8), older age at initial consultation (unit OR, 1.1), and pain with palpation of the bicep tendon (OR, 7.5). During multivariate modeling, supination weakness was shown to be the greatest predictor of conversion to surgical intervention. Because the biceps is the primary supinator of the forearm, loss of supination strength is difficult to compensate for. Although these symptoms were present in our nonoperatively treated patients, the statistical significance of their contribution to operative treatment cannot be overlooked.
To our knowledge, this is the first study to report a significant, multifactorial risk model for progression to operative treatment in patients with partial-thickness distal bicep tendon tears.

Limitations

There are several limitations in our study that merit discussion. This study was performed in a retrospective fashion and was subject to the common limitations of retrospective research. Not all data were initially collected in a uniform and consistent way at the time of initial patient evaluation, and as such the model may be slightly underpowered because of missing data. Additionally, strength measurement is an area of the physical examination that lacks rigorous standardization and, as such, is a limitation. Treatment and postoperative course were not standardized across all patients and providers. Only patients with MRI-confirmed partial tears were included in the study. While this strict definition of a partial tear substantially increased diagnostic accuracy, there were likely patients who may have had partial tears but never received advanced imaging and were not included in the study.

CONCLUSION

In this long-term study of patients with partial-thickness distal bicep tendon tears, clinical outcomes were favorable for patients treated either nonoperatively or surgically. Approximately 50% of patients were treated surgically; patients with supination weakness were 24 times more likely to undergo surgery than those without. Progression to full-thickness tear was a relatively uncommon reason for surgical intervention, with only 5% of patients progressing to full-thickness tears during the study period and the majority occurring within 3 months of initial diagnosis. These data should be helpful in counseling patients diagnosed with partial-thickness tears of the distal bicep tendon.

Funding and Disclosures

One or more of the authors has declared the following potential conflict of interest or source of funding: This study used the Rochester Epidemiology Project (REP) medical records linkage system, which is supported by the National Institute on Aging (NIA; AG 058738), by the Mayo Clinic Research Committee, and by fees paid annually by REP users. This study was partially funded by the National Institute of Arthritis and Musculoskeletal and Skin Diseases for the Musculoskeletal Research Training Program (T32AR56950); its contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH. Support was received from the Foderaro-Quattrone Musculoskeletal-Orthopaedic Surgery Research Innovation Fund. A.J.T. has received hospitality payments from Stryker and Zimmer Biomet. A.C.J. has received hospitality payments from Zimmer Biomet. J.S.-S. has received consulting fees from Acumed, Exactech, and Stryker; speaking fees from Acumed; and royalties from Stryker. J.D.B. has received education payments from Arthrex, consulting fees from Stryker, speaking fees from Arthrex, and hospitality payments from Wright Medical. C.L.C. has received consulting fees and nonconsulting fees from Arthrex. AOSSM checks author disclosures against the Open Payments Database (OPD). AOSSM has not conducted an independent investigation on the OPD and disclaims any liability or responsibility relating thereto. Ethical approval for this study was waived by Mayo Clinic (reference No. 20-002302).
MSC Based Therapies to Prevent or Treat BPD—A Narrative Review on Advances and Ongoing Challenges Bronchopulmonary dysplasia (BPD) remains one of the most devastating consequences of preterm birth, resulting in life-long restrictions in lung function. Distorted lung development is driven by the immature lung's inflammatory response, which is mainly provoked by mechanical ventilation, oxygen toxicity and bacterial infections. Dysfunction of resident lung mesenchymal stem cells (MSC) represents one key hallmark that drives BPD pathology. Despite all progress in the understanding of pathomechanisms, therapeutics to prevent or treat BPD are to date restricted to a few drugs. The limited therapeutic efficacy of established drugs can be explained by the fact that they fail to concurrently tackle the broad spectrum of disease-driving mechanisms and by the huge overlap between the distorted signaling pathways of lung development and inflammation. The great enthusiasm about MSC based therapies as a novel therapeutic for BPD arises from their capacity to inhibit inflammation while simultaneously promoting lung development and repair. Preclinical studies, mainly performed in rodents, raise hopes that there will finally be a broadly acting, efficient therapy at hand to prevent or treat BPD. Our narrative review gives a comprehensive overview of preclinical achievements, results from first early phase clinical studies and challenges to a successful translation into the clinical setting.

Introduction

Taking embryological lung development into account, preterm infants born ≤32 weeks of gestation have their lungs in the late canalicular and saccular stages of lung development. After birth, most of these infants depend on respiratory support and oxygen supply due to lung immaturity and surfactant deficiency. They are exposed to pre- and postnatal infections, antibiotic therapy and a potentially harmful microbial environment in the NICU. Physiologic nutritional supply via the umbilical cord is interrupted, and metabolic processes can be deranged by immaturity and stress. Furthermore, genetic predispositions, intrauterine growth restriction, smoke exposure and necessary clinical measures including fluid supply to stabilize the cardiovascular system need to be enumerated as potentially harmful. All these factors are well recognized as risk factors for restriction of physiologic lung development in this critical period, resulting in lifelong limitations in lung function, termed bronchopulmonary dysplasia (BPD) [1-5]. From a histopathologic perspective, BPD is characterized by the distortion of epithelial and vascular development and of the extracellular matrix composition. Most studies evaluated the impact of mechanical ventilation, oxygen toxicity and bacterial infections on disease development. They identified the inflammatory response in the immature lung, initiated before or shortly after birth, as the cause of an imbalance between growth factors and anti-inflammatory cytokines on the one side and pro-inflammatory activation on the other [1]. One key feature of disease pathogenesis is the dysfunction and rarefication of mesenchymal stem cells (MSC) in the lung, which we have reviewed in detail recently [6]. Briefly, in the physiologic situation, MSC are located mainly at the tips of the secondary septa and orchestrate the complex interplay between epithelium and endothelium. The inflammatory response leads to fundamental changes in MSC phenotype and function and to MSC rarefication.
That has been described in rodent models in detail and was confirmed in lungs from deceased infants and in studies on tracheal aspirates of preterm infants [7-9]. These congruent data underpin the therapeutic potential of strategies to reprogram resident lung MSC or to substitute them by exogenous transfer into the lung. While therapeutic reprogramming is still an emerging approach facing many obstacles and uncertainties, despite the advantage of retaining the lung-specific phenotype and thus the full therapeutic potential, the multitude of successful studies on MSC application to the injured lung in preclinical models raises hopes that a more efficient strategy than the currently available therapeutics for preterm infants will be at hand in the near future [7,10]. These therapies with proven efficacy in meta-analyses and systematic reviews comprise exogenous surfactant application, vitamin A, caffeine, azithromycin and postnatal corticosteroids, summarized in Table 1. Despite the initial enthusiasm about, for instance, exogenous surfactant and its immediate, tremendous benefits on gas exchange, all of these therapies have in common an only modest impact on the outcome of BPD [11-19]. Ureaplasma and mycoplasma species are held responsible for aggravating inflammation and lung injury, but their eradication by macrolide antibiotics remains a clinical challenge, and inflammation is fostered not only by these pathogens but also by the life-saving therapies of mechanical ventilation and oxygen supply. Nevertheless, azithromycin, but not erythromycin, reduced the risk of BPD, which might be ascribed to its overall anti-inflammatory properties [15,20]. Corticosteroids, as the last class of drugs to be mentioned here, are well known for their impressive anti-inflammatory action in animal studies, which accounts for one of the key mechanisms preserving the immature lung when given prenatally. Identical effects are evident when given postnatally. However, their use needs to be restricted to the most severely affected infants since they are known to impair the long-term outcome: negatively affected psychomotor development is the best studied parameter, while potential risks for somatic growth, endocrine homeostasis, cardiovascular function and metabolic diseases later in life still await a better database [20-24]. These considerations argue for a thorough synthesis of the existing data on MSC based therapies to prevent BPD. Because BPD originates from intrauterine and early postnatal pathologies, we pay special attention to the therapeutic potential to treat developing BPD, as pursued in most preclinical and clinical studies.

What Are the Proven Benefits of MSC Application within the Injury-Repair-Regeneration Cascade?

MSC have been studied in nearly all relevant diseases in preclinical animal models. These studies have in common that MSC mainly act via the release of anti-inflammatory cytokines and growth factors that have the potential to at least partially revert the inflammatory response in the damaged and inflamed organ, which constitutes the initial phase of an evolving disease. Therefore, the biggest effects were detected in animal models when MSC were applied simultaneously with or shortly after injury, but due to their growth- and repair-promoting properties, therapeutic benefits can be detected even for delayed application, depending on the disease entity and disease status [10].
Both the innate and adaptive cellular immune responses are shifted back towards the physiologic status, and further immune cell attraction is prevented. Thereby, MSC have the potential to redirect any leukocyte population towards beneficial phenotypes, including the shift from M1 to M2 macrophages, the propagation of regulatory T cells, the inhibition of the Th17 response and neutrophil extracellular trap formation, and the modulation of antigen-presenting cells such as B cells and dendritic cells. The attenuation of inflammation and protease action is completed by the release of further beneficial factors including prostaglandin E2, lipoxin A4 and nitric oxide. Besides paracrine release, direct cell-cell transfer of these factors via extracellular vesicles extends the beneficial effects to the transfer of genetic material, including DNA, mRNA and microRNA, and of cell membrane components, including cell surface receptors. Further important fields of action comprise their multiple antimicrobial activities, inhibition of epithelial-mesenchymal transition and support of lung fluid clearance. Besides the direct paracrine action, the transfer of vesicles and cell organelles via nanotube formation is a further important mechanism that, among others, enables cell energy stabilization via mitochondrial transfer. The initially described phenomenon of cell transdifferentiation and cell replacement at the site of injury is of only marginal relevance. This is underlined by the fact that exogenous MSC engraft only briefly in the recipient lung [10,25,26]. In principle, MSC can be derived from nearly every human tissue, but the most widely studied sources are bone marrow, peripheral blood, umbilical cord, Wharton's jelly, placenta and adipose tissue. Recent advances in understanding the similarities and dissimilarities of MSC from different origins, and the importance of aging for the functional properties of MSC, have drawn the focus towards MSC obtained from the newborn infant's umbilical cord and Wharton's jelly, which possess superior anti-inflammatory and immunomodulatory functionality. Further frequently studied sources of MSC include bone marrow and adipose tissue [10].

Is the Therapeutic Efficacy of MSC to Prevent or Treat BPD in the Preclinical Setting Well-Founded?

First descriptions of the therapeutic potential of MSC for BPD emerged more than a decade ago. They demonstrated in rodent models that the deleterious effects of hyperoxia can be attenuated by exogenous MSC application [27,28]. Two subsequent pioneering studies published in 2010 paved the way towards the further evaluation of MSC to prevent or treat evolving BPD. Both studies were performed in the hyperoxia exposure rodent model and provided convincing evidence that classical injury patterns of BPD provoked by the immature lung's inflammatory response were reverted, including lung alveolar and vascular structures, right ventricular hypertrophy, pulmonary function and hemodynamics [29,30]. Already at this early stage, it became clear that the beneficial effects of MSC were not mainly executed by cell transdifferentiation but by the MSC secretome, as shown when MSC cell culture supernatants were included in the series of experiments [30]. Furthermore, it could be shown that therapeutic efficacy was achieved with both the intratracheal instillation and the intravenous injection routes of administration.
In summary, these two studies provided convincing evidence that exogenous MSC application has the therapeutic potential to overcome the deleterious consequences for further lung development of lung resident MSC scarcity caused by hyperoxic exposure. The initial study results have in the meantime been reproduced in overall 28 subsequent studies applying mesenchymal stem cells from different origins [27,28, ...]. Study designs, key findings, follow-up investigations for safety and efficacy, and details on origin, dosage, time point and route of MSC administration are presented within Table 2. Comparable results were obtained when lipopolysaccharide was applied, mimicking the situation of amniotic infection, or when hyperoxia was preceded by lipopolysaccharide exposure (for details see Table 3) [57-59]. Both prenatal and postnatal application of MSC proved therapeutic efficacy [57-59]. While the initial studies were designed to demonstrate the therapeutic potential of preventive MSC application, this is not feasible in the clinical setting. However, data on beneficial effects of MSC based therapies on alveolar and vascular structures and functional parameters during delayed application, when BPD is already evolving, open the window for therapeutic interventions [60]. Examinations of the difference between initial and delayed MSC application yielded controversial results. One study that systematically investigated early and late application during injury found better results for early and early-plus-late intratracheal application of umbilical cord MSC to newborn rats exposed to hyperoxia, while another study demonstrated beneficial effects for both early and rescue application [54,61]. A recent study on delayed MSC application after hyperoxic injury revealed that even intratracheal instillation in early adulthood and repeated delayed application in later adulthood have therapeutic potential within the rat model of hyperoxia. Interestingly, both alveolar and vascular structures were improved [62]. These data argue for reconsideration of the dogma that lung regeneration in BPD after initial injury is not possible. This is in line with findings that lung regeneration after MSC therapy was observed for other lung diseases, such as chronic obstructive pulmonary disease (COPD), as well. Therefore, these preliminary data and subsequent dissection of the underlying mechanisms can lay the basis for studies on MSC based therapies in severely affected former preterm infants beyond the NICU [63]. Studies on the optimal route of MSC administration confirmed the initial results of efficient delivery by intratracheal, intravenous or intraperitoneal injection (please refer to Table 2). Direct comparisons between application routes were performed in only one study for intratracheal versus intravenous and in one study for intratracheal versus intraperitoneal administration. Both demonstrated superiority of intratracheal application, but equivalency of MSC dosages for different routes cannot be judged from these datasets [28,35]. In contrast, the intranasal route did not show a therapeutic benefit when MSC were applied once, but recently, repetitive application during and after a moderate injury proved therapeutic efficacy [32,48]. This allows the conclusion that the intranasal route is principally feasible, but effects are more pronounced when direct routes are used.
Beneficial effects were obtained mainly for umbilical cord and bone marrow derived MSC, but three studies investigated placental tissue derived MSC and two studies amniotic fluid or amniotic membrane derived MSC, with improvements in classical hallmarks of MSC action (compiled within Table 2) [47,52,64]. Only one study investigated MSC from different origins and reported MSC derived from the umbilical cord superior to MSC from adipose tissue (Table 4) [33]. Dose-response studies demonstrated improved efficacy with increasing dosages up to 5 × 10⁵ cells given intratracheally [51]. Therapeutic efficacy was similar for preventive and rescue therapy at a dosage of 3 to 6 × 10⁵ cells given intratracheally per animal. Of note, improvements in lung structures persisted into adulthood, and the systematic review confirmed efficacy in mice and rats within a dose range from 5 × 10⁴ to 5 × 10⁶ cells per animal [61,65]. Gender-specific differences need to be carefully monitored in the future, as in the rat hyperoxia model, MSC derived from female donors' bone marrow displayed improved reduction of vascular remodeling in male recipients [66]. Importantly, results using MSC of human and rodent origin displayed comparable benefits, which are detailed within Table 2. It needs to be taken into account that the presence of exogenous MSC is restricted to a short time interval after application, as confirmed in one study on BPD that evaluated MSC presence in more detail [32]. That might argue towards repeated applications to improve therapeutic efficacy. The preclinical pathomechanistic studies give a robust overview of the main actions of exogenous MSC application in the BPD models of mice and rats: MSC application at least partially or completely reverts the inflammatory response and cytokine disbalance in the lung evoked by hyperoxia. MSC application attenuates the increase in pro-inflammatory cytokines like IL-1α, IL-1β, IL-6, IFN-γ, CCL5, CXCL7, MIP-1α, MIP-2, TNF-α and TGF-β1, factors like CTGF, inhibitors like TIMP1 and cell adhesion molecules like L-selectin (CD62L) and sICAM-1 (CD54), while the levels of the anti-inflammatory cytokine IL-10 and the growth factors angiopoietin-1, VEGFA, HGF and PECAM are retained. As a consequence, recruitment of pro-inflammatory M1 macrophages and neutrophils, myeloperoxidase activation and oxidative stress in the lung are dampened, while retention of macrophages with an anti-inflammatory, lung-resident M2 phenotype is preserved [28,31,33-40,42-44,46,49-57,67]. The beneficial effects on the intrapulmonary cytokine balance account for the documented attenuation of inflammatory cell influx to the site of injury, mainly macrophages and neutrophils, the reduced release of proteases, the attenuation of the disequilibrium of pro- and anti-apoptotic Bcl-2 family member expression and the attenuated apoptosis induction in the lung [28,29,32,67]. Decorin and pentraxin-related protein PTX3/tumor necrosis factor-inducible gene 14 protein (PTX3) were identified as critical drivers towards M2 macrophage polarization, which was associated with reduced inflammatory cytokine release and a better-preserved lung structure [46,49]. The inhibition of upregulation of formyl peptide receptor-1 (FPR-1), which is known for its sensor function in inflammation, is one further finding that accounts for MSC function, as MSC transplantation to wildtype mice exposed to hyperoxia was as efficient as FPR-1 knockout [43].
The attenuation of pro-fibrotic TGF-β1 is accompanied by decreased collagen 1 and aberrant elastin deposition and reduced lung elastase activation [31,41,51]. Looking in detail at further aspects important for proper lung function, exogenous MSC application was associated with better-preserved alveolar type II cell counts and aquaporin-5 channel expression, which is responsible for fluid secretion [67]. Besides the potent inhibition of inflammation, MSC application was associated with suppression of sonic hedgehog pathway signaling and of hyperoxia-induced activation of the renin-angiotensin system in the lung [40,47]. In one study, exogenous MSC application even increased the number of bronchioloalveolar stem cells, extending their beneficial effects to the lung stem cell progenitor pool [68]. Although not all hallmarks of MSC action described for lung injury across ages have been reproduced in BPD models, it is clear that exogenous MSC act via the identical key mechanisms described in other lung diseases and beyond [10]. The studies in rodents have in the meantime been systematically reviewed, with the conclusion that MSC have a robust and overall positive effect on the pulmonary outcome. Beneficial effects were stated for initial and delayed MSC application and for different sources, dosages and routes of exogenous MSC application, as detailed above [65,69]. Benefits of MSC therapy were not restricted to the lung but also included reduced brain injury, among others, even when MSC were applied topically to the lung. The main mechanism of MSC action was again attributed to the attenuation of inflammation and cell death in the brain [37]. Therefore, it is intriguing to look beyond BPD and incorporate results and considerations from other disease entities to bring the MSC strategy as quickly as possible to clinical success. Some preliminary results indicate that only specific subpopulations of MSC account for their beneficial effects. Therefore, a further research focus that investigates differences between MSC subpopulations needs to be established [70,71]. Most investigations comparing freshly isolated and thawed cell products demonstrated optimal results for fresh MSC preparations, but one study did not detect any difference, arguing towards a detailed investigation in future BPD studies, as the use of deep-frozen cell products eases the application [72]. In some studies, the retention of sparse MSC in the lung has been described for even prolonged periods of time, raising questions about the long-term safety of this approach. Although no long-term side effects in terms of infectious complications, therapy-associated deaths or malignancies have been observed following MSC application so far [55,73,74], safety concerns need to be carefully monitored, as pointed out recently [75]. Consequently, we will discuss the results from cell-free MSC based strategies and their advantages in a later chapter, but first, we will take a look at the available clinical trial data. Abbreviations: MSC, mesenchymal stem cells; BPD, bronchopulmonary dysplasia; Pn, postnatal day n; ATMSC, adipose tissue mesenchymal stem cells; UCBMSC, umbilical cord blood mesenchymal stem cells; i.t., intratracheal.

Do the Results from MSC Application within First Phase I Clinical Trials Justify Further Pursuing This Approach?

The highly promising results from most preclinical studies in rodents raised great hopes that allogenic MSC therapy can effectively alleviate the consequences for the lung following preterm birth.
The first phase I study, performed in Korea, enrolled n = 9 extremely low birth weight infants that required ventilator support beyond postnatal day 5 of life for respiratory insufficiency. The first three infants received a dosage of 1 × 10⁷ human umbilical cord derived MSC per kilogram body weight once intratracheally; the further six infants received 2 × 10⁷. Of relevance, the MSC preparation went through a freeze-thaw cycle before application. Cell application and clinical follow-up revealed no acute toxicities or side effects. The combined analysis of all nine infants demonstrated statistically significant differences in the pulmonary outcome when the BPD criterion at 36 weeks of gestation was used. While 3/9 treated infants fulfilled the criterion of moderate or severe BPD, there were 13/18 infants in the historic control group [77]. The reduced BPD rate in cases was further substantiated by the attenuation of the lung's pro-inflammatory cytokine response. Studies on levels of the characteristic pro-inflammatory cytokines IL-6, IL-8 and TNF-α and on matrix metalloproteinase 9 revealed an attenuation in tracheal aspirates on day 7 after MSC administration. In line, the respiratory severity score tended towards lower values than in controls (p = 0.05) [77]. Results from the follow-up of treated infants are currently available until the age of 2 years. Persistent beneficial effects with respect to oxygen supply at home, rehospitalizations for pulmonary reasons and somatic growth were confirmed. No adverse effects were detected on the psychomotor outcome [78]. These results raise great hopes that MSC application will be an efficient approach in the future. A subsequent phase II study by the same authors has already completed recruitment; we are awaiting publication of results from this trial (NCT01828957) and from the ongoing follow-up. A second early intervention phase I study from the US investigated the identical MSC preparation obtained from the human umbilical cord, given once on day 6-14 of life to 12 preterm infants with a birth weight <1000 g and gestational age <28 weeks, including a dose escalation step. Cell numbers applied were identical to the initially published study. No randomization and no historic cohort were included. The intratracheal application was well tolerated, without any predefined serious adverse events of the cardiorespiratory system, anaphylactic reactions or deaths recorded. Ten of the twelve treated infants developed severe BPD, and two infants mild BPD. Severe affection of the lung was further evidenced by a median duration of mechanical ventilation of 35 days, a median of 114 days on oxygen supply and corticosteroid use in 8 of these infants. Of note, one death on day 161 of life (after the end of study observation) due to pulmonary hypertension and 3 sepsis events were recorded, which argues for careful surveillance in any future study [79]. Compared with the initial study, infants were more immature and displayed more severe respiratory disease, which might account for differences in outcome results. One further single-center open-label phase I study investigated the safety of late allogenic amnion cell transplantation to severely affected infants with established BPD requiring invasive ventilation or non-invasive respiratory support with oxygen fractions of 0.3-0.5. Cells were given intravenously at a dosage of 1 × 10⁶ per kilogram body weight. The first infant showed cardiorespiratory compromise that was traced back to potential pulmonary embolism caused by the infusion.
After changing the administration technique, including further cell dilution and the insertion of an in-line filter, infusions were tolerated without adverse events in the other five infants. No side effects were observed during follow-up on the ward, and the death of one child one month after infusion was judged unrelated to the intervention. Within the first week following the infusion, no improvements in respiratory support were observed, but in 3 infants FiO₂ requirements decreased [80]. The follow-up until the age of 24 months did not reveal any therapy-associated side effects, although validity remains restricted in a population of severely affected infants with ongoing problems of somatic growth and cardiorespiratory and psychomotor function [81]. The subsequent phase I dose escalation study, increasing the dosage up to 3 × 10⁷ cells per kilogram body weight, has already commenced, with results including the 24-month follow-up expected for 2022. While the initial study aimed to address only safety issues, the authors have now based the dose escalation on data derived from studies in humans and animals that proved therapeutic efficacy [80-82]. For a comprehensive list of further ongoing phase I and II MSC trials, we refer to a recently published reference [83]. All three studies published to date have in common that they were designed to monitor safety aspects, not benefits for the pulmonary outcome. Reliable answers to the questions of optimum source and preparation, dosage and route of application are a prerequisite for stepping towards the next level. To speed this process up, a look at MSC trials for other pulmonary diseases is worthwhile, as lung diseases across entities and ages have several key pathomechanisms in common. Briefly summarized, none of these more than twenty trials revealed any safety concerns. Most studies used the intravenous route of application, and the most frequently applied dosage was 1 × 10⁶ cells per kilogram body weight. Outcomes were heterogeneous: while some studies did not detect any benefit, some displayed the expected reduction in markers of inflammation and others demonstrated short- or longer-term beneficial effects on lung function or personal wellbeing [10]. As for preterm infants, studies in adults mostly included severely affected patients. That might conceal the therapeutic potential, as these patients have established disease and inflammatory processes. Interpretation of the available results is hampered by the heterogeneity of study outlines and the different origins, preparation techniques, dosages, timing and administration routes. Results from the first BPD studies are confirmed by the summary of results from adult trials, where significant beneficial effects on the lung only became visible when the applied MSC dosages exceeded 1 × 10⁶ cells per kilogram body weight or when repeated infusions were administered [10,77,84]. The timing of MSC application remains a critical issue despite the described benefits of delayed application in animal trials. Early application before the aggravation of lung injury might pave the way to proving efficacy. Safety concerns, which are discussed in the next chapter, need to be critically monitored within all ongoing and future studies and need to include sepsis and pulmonary hypertension. Besides, presumed potential side effects, including drug-drug interactions between MSC and surfactant, need to be closely monitored [39].
Is MSC Application Safe?
A recent systematic review and meta-analysis on the safety of MSC therapy concluded that no safety concerns are evident [73]. Nevertheless, it remains an unresolved question whether MSC can be applied safely to the immunocompromised preterm infant. Although plenty of studies investigated the persistence of MSC in the lung, preclinical studies mostly did not detect permanent engraftment, and most publications described disappearance within a few days. There remains uncertainty as to whether the lack of MSC persistence was due to inappropriate techniques. Concerns are fostered by the fact that MSC possess the capability to adapt their immunologic function, enabling them to escape immunologic detection and cell control. In theory, this can result in uncontrolled cell proliferation and malignancy. Furthermore, MSC can release numerous pro-inflammatory cytokines such as interleukins and macrophage stimulating factors to activate host defense mechanisms against infection and injury [26,85]. Special focus needs to be directed towards the release of TGF-β and its immunomodulatory and pro-fibrotic functions [86]. Whether this is a real obstacle to therapy success requires further studies. The published results hint towards a distinct reaction of lung-resident MSC and bone marrow derived MSC, but this needs further confirmation [87]. The data from the amnion cell transplantation study confirm that pulmonary embolism after MSC application is an ever-present concern that needs to be considered in any study outline [80]. Adequate dilution, slower infusion rates, in-line filtration and anticoagulation represent therapeutic options to resolve this issue. Furthermore, the timing of MSC application needs critical review. The probably optimal efficacy of early intervention needs to be weighed against the therapeutic aim of restricting this new therapy to infants with pulmonary sequelae, which requires delayed application. Although the late administration of amniotic cells did not reveal any adverse effects, the therapeutic efficacy might be restricted, as the inflammatory damage to the immature lung has already occurred. A solution to this is the early identification of infants at high risk for BPD by biomarker approaches in order to provide MSC early on [88,89]. Besides these impeding safety concerns of MSC application, plenty of unresolved issues remain about the optimal source and stability of cell preparations [90]. It is well known that during passaging MSC undergo molecular changes, and that they experience phenotype changes with increasing donor age that impair therapeutic efficacy [91,92]. Therefore, the continuous availability of freshly isolated MSC and the harmonization of cell product preparation remain pressing issues [93]. Frozen MSC preparations were studied in BPD, but results from other disease entities clearly indicate that this approach will not provide identical therapeutic efficacy. Furthermore, a regular supply to premature infants requires the provision of large-scale MSC preparations compared to the low amounts required for studies in rodent models. These capacities are not available and need to be built up before any larger-scale clinical study can be performed. Therefore, alternative strategies with the aim of having a stable product readily available off the shelf in sufficient quantities are required; these are discussed in the next chapter.
Is the Secretome the Key to Practicality and Safety of MSC Application?
As discussed in detail before, the main action of MSC is via their secretome.
These so-called extracellular vesicles (EV) contain many factors of MSC that account for the beneficial effects on inflammation, organ development and repair. Besides cytokines and growth factors, EV contain gene products, mRNAs and microRNAs [94]. Their lower immunogenicity and smaller size, with greater ease in crossing biological barriers compared to MSC, make them especially attractive for BPD prevention and treatment [95]. To date, EV and cell culture supernatants from MSC, called conditioned media, have demonstrated therapeutic efficacy in 14 published preclinical studies in rodent models of BPD, which are summarized in Table 5. Already at an early stage of research, it was demonstrated that EV have comparable therapeutic efficacy on alveolar and vascular development and exercise capacity [29]. The repeatedly described short presence and low engraftment rates of MSC provide the explanation for this equal potency. Follow-up studies ensured that there was no long-term disadvantage compared to MSC application [61]. Application during the acute phase of injury, but even after 14 days of hyperoxic exposure, proved efficient in attenuating or completely reverting the deleterious consequences for lung structure and functional properties [29,45,60,61,68,96-101]. Detailed structural and functional analyses demonstrated benefits not only for pulmonary hypertension but for peripheral vascular remodeling and for the pool of bronchioloalveolar stem cells as well [60,68,96,97,100]. Dissection of the inflammatory response revealed a reduced influx of neutrophils, an M1 to M2 shift in macrophages, suppression of pro-inflammatory cytokines and augmentation of anti-inflammatory cytokines and growth factors [53,97,99]. These actions observed in BPD animal models are in line with the main actions described for EV in general [102]. In the context of EV delivery to the immature lung, further drivers of BPD such as infections and microbial dysbiosis have not been evaluated so far [103,104]. However, promising studies have been conducted on the treatment of neonatal sepsis [105]. Lastly, the so far unstudied EV properties of stabilizing the energy balance of target cells and of direct antibacterial activity have the potential to demonstrate further therapeutic efficacies and to extend the indications for EV application to the preterm lung [106,107]. The incorporation of EV is not cell type specific, and EV were verified in type II cells, lung fibroblasts and pericytes when given intratracheally [42]. Building the bridge into the clinics, studies have included EV of clinical-grade quality [45]. As for the studies on MSC of different origins, EV from umbilical cord and bone marrow MSC demonstrated equal potency [97]. A direct comparison of exosomes and MSC was performed with human and rodent derived MSC from umbilical cord and bone marrow; there was no obvious discrepancy in therapeutic efficacy between cells from either origin [60,68,97,99]. In one experimental setting, exosome delivery better preserved alveolar and vascular development in animals exposed to hyperoxia, but the outcome measures did not reach statistical significance; two other settings were unable to display differences in outcome [45,53,61]. One study using EV derived from adipose tissue MSC revealed reduced efficacy compared to direct cell application, which argues for an in-depth comparison of preparations before using them in clinical settings [64].
These data suggest that a thorough evaluation of EV potency needs to be conducted when comparing different treatment approaches. This requires the harmonization of EV production, purification, storage and quantification and the establishment of standardized potency assays [108]. EV potency depends not only on the EV content of cytokines and cytoplasmic components but also on their surface biomarker and receptor expression [94,109]. The former obstacle of having high amounts of EV available for the conduct of clinical studies in preterm infants has in the meantime been overcome by the rapid increase in companies stepping into the field of EV production and provision for clinical trials [110]. Adequate and uniform distribution of EV in the lung after intravenous or intratracheal application needs to be verified; at least for the i.v. route, close monitoring of successful distribution in the injured lung is a prerequisite for therapy success [111]. Further aspects include the optimization of the first application time point and the required repetition of EV administration. Safety issues need to be monitored precisely, as the transfer of genetic material, cytokines and growth factors might also cause undesirable effects, including malignant transformation. But the available results from the rodent hyperoxia models argue for focusing efforts on EV therapy as the next level. One phase I study of EV therapy to prevent BPD, registered at the NIH clinicaltrials.gov homepage (NCT03857841), is currently recruiting infants. Results will be available by the end of 2021 and might shed light on the safety of the EV approach in the preterm infant. Taking a look at EV therapy in adult patients across disease entities, the results displayed some beneficial effects, but the great enthusiasm about this new therapeutic strategy that arose from preclinical models still awaits further confirmation [95,102]. For the time being, BPD seems to be one special candidate for EV therapy among the many approaches to lung diseases [112]. Reviewing the next steps to further improve therapeutic efficacy is the aim of the following and last section.
Is Cell Engineering the Ultimate Step to Therapy Success?
Although the discrepancies between the consistently beneficial effects in the rodent BPD models and the heterogeneous results in preterm infants can be partly explained by differences in dosing, by the heterogeneity of BPD severity in participating patients and by the approach of the clinical studies to demonstrate safety rather than efficacy, the further improvement of efficacy is the next necessary step in MSC research. We will review the available data on MSC and EV together due to the scarce data available for the BPD setting. In principle, strategies can be separated into the application of MSC together with a further therapeutic agent (Table 6) and the biochemical (Table 7) or genetic (Table 8) modification of MSC before therapeutic application. The combinatorial approach of MSC plus recombinant erythropoietin was the first studied in the hyperoxia BPD model. All readout parameters displayed further improvements for the combinatorial approach, including alveolar lung structures, better preserved VEGFA and reduced MMP-9 activation [36]. While erythropoietin augmented the efficacy of exogenous MSC, this was not observed for MSC plus surfactant: while each treatment alone attenuated the hyperoxia-induced alveolar hypoplasia, no additional benefit was detected for the combination [39]. MSC preconditioning has also been tested in the experimental setting of BPD.
One study performed in the initial phase of BPD research found that conditioned medium harvested from bone marrow derived MSC exposed to hyperoxia showed superior efficacy with respect to alveolar structures and pulmonary hypertension compared with that from naïve MSC [96]. These data need reproduction, as it is mostly hypoxic, not hyperoxic, preconditioning that improves MSC functionality [113]. Gene modification studies are suited to describing the therapeutic relevance of a specific factor in the prevention of BPD pathology, with the vision of engineering MSC or EV to achieve the best possible treatment results. For TSG-6, which is known for its capacity to modulate, among other actions, macrophage plasticity towards an anti-inflammatory phenotype, knockdown in human umbilical cord derived MSC exosomes markedly attenuated the beneficial effects on lung and heart [99]. Knockdown of VEGFA nearly completely abrogated the beneficial effects of MSC and exosome application in two independent studies [42,56]. In this way, TSG-6 and VEGFA were ascribed a dominant role in MSC and EV activity, and reverse strategies of MSC transduction will aim to enhance the activity of these two proteins. Transduction of stem cells from the amniotic fluid for VEGFA improved all key features of BPD, while naïve cells only improved inflammation and vascular development, confirming a central role of VEGFA across stem cell entities [114]. Similarly, knockdown of stromal-derived factor-1 (SDF-1) in MSC before transplantation partially abrogated the beneficial anti-inflammatory and pro-angiogenic activity, attributing to SDF-1 an important function in MSC action that was recapitulated in lung autopsy studies of preterm infants [38,115]. For decorin and PTX3, siRNA experiments ascribed their action to the modulation of macrophage function towards an M2 status, thereby better preserving lung development [46,49]. In line with this, MSC transduction with 7ND-CCL2, a potent CCR2 antagonist, proved effective in preventing the hyperoxia-induced distortion of alveolar and vascular development in the immature lung, which was associated with a reduced influx of M1 macrophages and reduced pro-inflammatory cytokine expression [76]. MSC engineering has been evaluated in much more detail in other lung diseases, mostly acute lung injury. Therapeutic efficacy has been demonstrated for numerous growth factors and anti-inflammatory cytokines that stimulated the resolution of inflammation, lung repair and regeneration. Cell surface receptors constitute another suitable target to improve MSC recruitment to the site of injury [10]. Overall, these results raise great hopes of making MSC-based therapies more efficient and safer in the near future.
Concluding Remarks
Complex interactions between inflammatory injury and the disruption of physiologic lung development represent key features of BPD. It is therefore inspiring to search for broad-acting therapeutics. The MSC-based approach is especially attractive as it combines anti-inflammatory with growth- and repair-promoting properties. It ultimately has the potential to add a powerful strategy to the short list of available medications to prevent BPD. Although it is much too early to judge the ultimate efficacy in preterm infants, the congruent results obtained so far, exclusively in rodent models, are promising.
Of course, criticism of studies in mouse models is justified, as none of the current therapeutics evolved from studies in mice [116]. As for all other new therapies, a careful and complete evaluation of MSC-based approaches covering the lung microenvironment is indispensable, as is the comprehensive documentation of side effects under particular conditions, such as the recently described increased pulmonary artery embolism in a sheep extracorporeal membrane oxygenation model of acute respiratory distress syndrome [117,118]. On the other hand, even a much lower efficacy in preterm infants could alleviate the tremendous disease burden of BPD, where the best available medical approaches require 10 treated infants to prevent one BPD case, i.e., a number needed to treat of 10 [12,20]. And even preterm infants not fulfilling the BPD criterion have relevant lung function restrictions that persist throughout life [5]. Among the established medications to prevent BPD, corticosteroids display the highest therapeutic efficacy. This is not surprising, as they are used as anti-inflammatory agents. But an important side effect is often neglected in situations where successful extubation or even the survival of the child depends on their application: corticosteroids are powerful agents that disrupt further lung development [119]. With the MSC approach, two birds can be killed with one stone. The next steps towards a successful introduction of MSC-based therapies into the clinics include the standardization of experimental approaches with respect to dosing, timing and the selection of precisely defined patient groups, and the production and efficacy testing of cell preparations. Ongoing investigations in the baboon model of borderline viability are suitable to confirm the results from rodents and can help to determine functional improvements in more detail than mouse or rat models [120]. For these reasons, the recently published meta-analysis was not able to provide final conclusions [69]. The robust comparison of different therapeutic approaches and the standardization of experimental settings are necessary prerequisites for resolving these unanswered questions. Special focus needs to be directed at detecting unexpected adverse and inter-species effects [121]. The studies of recent years provided convincing evidence that inflammation and the promotion of lung growth use common signaling pathways. Overshooting activation on the one hand and complete interruption on the other might therefore both end in aggravated lung injury, as has been detailed mostly for NFκB [7,122-124]. Lastly, inflammation in the preterm infant is a systemic disease that is not limited to the lung. Other important outcomes arising from inflammation include brain injury, necrotizing enterocolitis and retinopathy of prematurity. It is not surprising that the rare studies on organs other than the lung provide convincing evidence that systemic MSC-based therapies alleviate these injuries as well [37]. With this in mind, the focus of research needs to be expanded. A crucial moment has been reached to join experiences, research efforts and results to translate this promising therapy successfully into the clinics. The high efficacy of MSC under optimized laboratory conditions, where the setting is designed to dissect effects, does not reflect the complex and multifactorial reality in the clinics. The efforts in MSC engineering designed to strengthen efficacy and abrogate side effects need to be followed closely and are of high therapeutic potential.
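For readers unfamiliar with the number needed to treat cited above, the underlying arithmetic is simple; the risk figures in this worked example are illustrative only and are not taken from the cited trials:

```latex
% Number needed to treat (NNT) from the absolute risk reduction (ARR);
% the 40%/30% risks below are illustrative, not figures from the cited trials.
\[
  \mathrm{NNT} \;=\; \frac{1}{\mathrm{ARR}}
              \;=\; \frac{1}{p_{\mathrm{control}} - p_{\mathrm{treated}}},
  \qquad \text{e.g.}\quad \frac{1}{0.40 - 0.30} = 10 .
\]
```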
It is intriguing that a one-time application is expected to prevent or cure BPD. This approach was probably preferred due to the high costs of MSC and EV production. But the therapy is applied in an inflamed milieu in which the causes of lung injury are not immediately abrogated by the application per se, while the infant still depends on ventilation and oxygen supply. These facts, together with the only short-term presence of MSC in the lung, argue for repetitive or continued application as the standard of care, which might provide the ultimate key to curing BPD.
Funding: This work was supported by research grants from the Von Behring-Röntgen-Stiftung (65-0019 to JB) and the clinical research unit KFO309-2 (projects P6 and P7). The sponsors had no role in the design, execution, interpretation or writing of the study.
2021-02-03T06:16:48.087Z
2021-01-24T00:00:00.000
{ "year": 2021, "sha1": "2487bd44c3d82861b60007831f7616f1fb4fcace", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/22/3/1138/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5422ee246f016514ff056767d1c5b5443eea049d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
57265461
pes2o/s2orc
v3-fos-license
Measuring Students' Information Literacy Skills through Abstracting: Case Study from a Library & Information Science Perspective
New education models based essentially on competencies and skills are gradually displacing the old systems based on teacher instruction and passive and memory-based learning in students, as these new competencies allow the student to learn actively with better levels of performance. We consider abstracting a transcendent learning tool and analyze the basic role of information analysis and synthesis skills within learning processes and their relation to abstracting processes. Using an action-research methodology, we analyze the abstracting skill of students on the first and final courses of the Faculty of Library and Information Science at the University of Granada (Spain). Based on postulates from information literacy, analysis and synthesis competencies are studied through the students' modus operandi at the different abstracting stages. Similarities and differences between the two groups of students are perceived and displayed, with reference to the relation between the learned subjects and the levels of competence and skill. In the light of these results, meaningful patterns and recommendations for improving students' skill levels are proposed.
To date, students have devoted a great deal of their mental effort to memorizing data. However, global-scale changes in communication processes, largely due to the development of information and communication technologies (ICT), have led to the emergence of new education models. Whereas instruction was previously based on teacher instruction and student learning, education models now focus much more on active learning by the student. This situation has forced a change in the roles of the actors involved in teaching-learning processes. Today's student can no longer be a mere passive subject who memorizes the material he or she is given; students must now have a series of skills and abilities that allow them to approach any information-based problem and tackle it coherently. This information literacy, based on a set of competencies and skills, some general and others specific to each discipline, is linked to the competencies students need to be able to learn by themselves in the best possible conditions.
The European Higher Education Area (EHEA), the aim of which is to harmonize and create convergence among university studies in Europe, advocates a change in the philosophy of higher education to prioritize proficient management of learning tools over the mere accumulation of knowledge. The Tuning 1 project was set up to achieve these aims, centered on educational structures and the content of studies. The project has identified a series of 30 competencies known as transversal or generic competencies. The issue of education based on competencies and skills has been growing in importance over recent years in the field of Information Science 2 and has led to a research line known as information literacy, which focuses on information-use competencies (search, organization, processing, representation and management). Although many definitions of information literacy 3 have been put forward, one of the most cogent is that advanced by Webber and Johnston 4: "Information literacy is the adoption of appropriate information behaviour to identify, through whatever channel or medium, information well fitted to information needs, leading to wise and ethical use of information in society". A range of research studies have explored the measurement and assessment of information literacy skills 5. From an academic perspective, it must be recognized that very few competencies related to information literacy are explicitly taught in universities. However, the Spanish Library and Information Science degree includes two core curriculum courses (Document Abstracting and Indexing and Abstracting Techniques) that are directly related to two of the core information competencies in international information literacy standards: information analysis and synthesis. Technological advances have not actually reduced the need for abstracting; in fact, the opposite is true: the development of the Internet has created a growing need for a variety of ways to filter information, of which abstracting is the pièce de résistance 6. As a consequence, these courses have become true laboratory situations, where action-research methodology is used to examine aspects related to information literacy. The experience of teaching these subjects has allowed us to observe students' skills in these competencies and the processes involved in learning them. The main objective of this pioneering study is precisely to observe and measure, using action-research methodology, how skilled students are in these competencies, by specifying the stages necessary in abstracting processes and observing the extent to which the curricular development of these subjects affects the students' skills.
Learning Information Analysis and Synthesis Skills: Literature Review
The OECD (Organization for Economic Co-operation and Development) 7 defines a competency as the ability to meet individual or social demands or to perform an activity or task. The advantage of this external or functional approach, based on demand, is that it exposes the personal or social demands facing individuals. The generic competencies higher education students need have been dealt with by many education-related institutions 8 and can be outlined according to the Alfin-EEES project 9:
• Learn to learn.
• Learn to search for and evaluate information.
• Learn to analyze and systemize.
• Learn to generate knowledge.
• Learn to work together.
• Use technology to learn.
The competencies directly linked to our study objectives are those known as "information literacy competencies," and they refer to the search for, organization, processing, representation and legal and ethical use of information. According to Andretta, 10 the information-literate person recognizes the need for information and determines the nature and extent of the information needed; accesses needed information effectively and efficiently; evaluates information and its sources critically and incorporates selected information into his or her knowledge base and value system; uses information accurately and creatively; applies prior and new information to construct new concepts or create new understandings; contributes positively to the learning community and to society; and practices ethical behavior in regard to information and information technology. Of all the competencies covered by the INFOLIT International Standards (ACRL, AASL, AECT, SCONUL, CAUL, and ANZIIL), 11 we focus on information analysis and synthesis, as they are the most closely associated with abstracting processes. Given that meaningful learning should consciously and intentionally integrate the individual's new and prior knowledge, the abstracting process favors this integration by involving not only the selection of relevant information but also the identification of the textual structure of the original document. In the subsequent representation process, the abstracter organizes the information obtained and generates new knowledge; but because of the complexity of the abstracting process, it is extremely complicated to approach as a complete entity. The learning model we use in abstracting instruction is therefore broken down into subprocesses to analyze and synthesize the original information.
Information Analysis and Evaluation Skills
A superficial reading of a text can provide clues about its content, but a slightly greater effort is required to understand it. Meta-cognitive research has shown that the ability to identify and remember the main ideas is one of the bases for reading comprehension 12, and one of the factors that differentiates "good" from "poor" readers 13. All reading comprehension processes eventually detect the text structure, its main subject matter and, particularly, the author's intention. The ACRL highlights that "the information literate student summarizes the main ideas to be extracted from the information gathered" and, to do this, he or she must be able to "read the text and select main ideas, restate textual concepts in his/her own words and select data accurately and identify verbatim material that can then be appropriately quoted." Since abstracts are reduced, autonomous, and purposeful textual representations of original texts, 14 a certain, varying amount of the original text's objective content is captured through the pertinent abstracting process, depending on the targets set. However, the abstract depends not only on the original document, but even more so on the abstracter's base knowledge and on his or her learning targets. The abstract should result from the convergence of an objective reality, the original document, and a subjective reality, the abstracter, who has a certain level of knowledge and personal, nontransferable targets. There are two key moments in this process of learning through the technique of abstracting, in which we assume an acceptable level of comprehension of the original text.
First, the selection of what is considered to be relevant content, and second, the structuring of this content for subsequent incorporation in the knowledge base of the recipient-abstracter. Selection is a process of purposeful elimination. Through contraction, reduction and condensation strategies, the aim of selection is to retain only the relevant information. 15 In both the selection and the structuring of the original content, the only assistance that may be offered takes the form of suggestions and recommendations that will help the task to be carried out more efficiently. Once the information has been analyzed, the student is then able to evaluate and decide whether it fits in with his or her information needs. According to the ACRL standards, the information-literate student articulates and applies the following initial criteria for evaluating both the information and its sources: a) examining and comparing information from various sources to evaluate reliability, validity, accuracy, authority, timeliness, and point of view or bias; b) analyzing the structure and logic of supporting arguments or methods; c) recognizing prejudice, deception, or manipulation; d) recognizing the cultural, physical, or other context within which the information was created and understanding the impact of context on interpreting the information.
Information Incorporation, Synthesis and Use Skills
The synthesis subprocess allows the conceptual results derived from the previous stages to be represented. But representation is not an independent and self-fulfilling exercise. It will be necessary to investigate the linguistic, communicative, and organizational aspects of representation from a multiplicity of sociocognitive perspectives and within the full range of discourse domains and knowledge communities. 16 In any case, conditions of relevance, consistency, accuracy, and completeness are needed for the abstract. 17 Most university students already know how to select and structure the main ideas and include them in an abstract, but they tend to fall down when asked to present these subjects accurately and thoroughly in coherent sentences. Depending on the type of original information and the abstract objectives, some forms of graphic representation may also be effective. The information-literate student synthesizes main ideas to construct new concepts in the following ways: a) recognizing interrelationships among concepts and combining them into potentially useful primary statements with supporting evidence; b) extending initial synthesis, when possible, at a higher level of abstraction to construct new hypotheses that may require additional information; c) utilizing computer and other technologies for studying the interaction of ideas and other phenomena. Below we list the competencies and skills necessary for abstracting:
• Efficient reading of both written and graphic texts.
• Awareness of the various types of abstracts.
• Knowledge about how to select the type of abstract for each text, project, or context.
• Knowledge about how to apply abstracting techniques to different types of documents.
• Assessment and use of computer applications for automated abstracts.
• Understanding of the potential and limitations of automated abstracts.
• Learning to recognize and retrieve the appropriate information from a text.
• Knowledge of the textual grammar and style of different abstract types.
• Learning to classify and synthesize information in a text.
• Learning to assess abstracts.
• Rigor and accuracy, consistency and constancy.
• Clarity in setting out proposals and arguments.
Analysis of the Skills and Competencies in Library and Information Science Students: A Case Study
The scientific literature has analyzed the issue of problems and errors in writing scientific abstracts. 18 However, the analysis of the various stages that go into the production of an abstract and of how they are related to the skills and abilities the process requires (the aim of the present research) is a new area of study. This study, carried out within the context of abstracting training, analyzed how abstracts were produced in accordance with the stages followed throughout the process, as well as the skills and abilities related to each stage. We were thus able to discover the students' skill levels in a set of abilities related to document abstracting and to identify possible weak points in their training. To this end, we carried out a trial with library science students in which they were asked to write an abstract of a scientific text and to specify how they carried out each one of the stages involved. The analysis and assessment were made by experts in the field, and consequently a certain element of subjectivity must be taken into account. This factor is not easily avoided when assessing such relatively intangible aspects as those we deal with in this paper.
Material and Method
The study was based on the premises of action-research methodology, which advocates the use of the classroom as a laboratory. At the same time, it enables problems or weak points to be detected and rectified, thus contributing valuable information for the scientific community. If it is well designed and implemented, action research offers the possibility of generating data to support theorizing, to develop understanding and to create new knowledge. 19 Taking this extra step demands a rigorous, critical, and systematic approach and makes heavy demands on participant researchers. 20 Action-research methodology is defined by Elliot 21 as "the study of a social situation with a view to improving the quality of action within it" and by Howard 22 as follows: "Action research is the process of reflective problem solving conducted at the school level. This process allows us to identify an issue we want to study to determine if we can change our process or procedures to improve our program. Action research leads to program improvement and increased academic achievement for our students [and] offers possibilities for practical work that is also a form of learning for those involved." 23 "AR differs from case study research in that the action researcher is directly involved in planned organisational change." 24 "One distinguishing feature of AR is, therefore, the active and deliberate self-involvement of the researcher in the context of his/her investigation." 25 The present study therefore falls within the frame of action-research methodology; specifically, it is an experimental study with an explanatory purpose. A trial was proposed for a set of students that consisted of the detailed written specification of the stages and processes involved in document abstracting; the experimental data were gathered in the classroom and analyzed with the aim of detecting the students' weaknesses and strong points in the skills related to scientific information abstracting. This information guided us in focusing the learning targets for this type of skill and information literacy activity.
Data Source
We examined the international scientific literature 26 to determine the most appropriate procedure for presenting the abstracting stages to the students. The stages we focused on, simplified and adapted to our study, are as follows:
• Reading.
• Identification of the text structure, the main subject matter and the author's intention.
• Selection of the most important sentences.
• Generalization of the selected sentences.
• Writing up.
Based on the skills outlined by ANECA (Spanish Agency for Quality Assessment and Accreditation) 27 and in the e-coms 28, Alfinees 29 and Cyberabstracts 30 portals, we drew up a list of skills related to the stages in the abstract creation process. The skills selected were as follows:
• Comprehension: detected in the identification of the text structure, the main subject matter and the author's intention, and in the selection of important sentences and keywords.
• Analysis: detected in the identification of markers, in the text structure, and in the selection of keywords and important sentences.
• Synthesis: detected in the generalization stage and in the writing up of the abstract.
• Organization and structuring of the information: detected in the schema, sentence grouping, and visual organization.
• Expression: analyzed from the way the abstract is written.
Some skills are associated with more than one of the stages because of their transversal character and are present in numerous processes at different levels.
Template
The template designed by the research team was structured into four sections:
• Student details: included any other university qualifications or professional activities where appropriate.
• Evaluation of the text to be abstracted: we asked students how familiar they were with the text subject and terminology. Their answers were graded on the following scale: very familiar, quite familiar, moderately familiar, slightly familiar, unfamiliar.
• Procedure used to prepare the abstract: a brief description of the procedure (stages) used to prepare the abstract.
• Preparation of the abstract: this section covered the way students carried out the various abstracting stages.
- Mark unknown words. This revealed the types of terms (specific or general) the students did not understand.
- Identify the subject of the text and the author's intention. This stage was only required of fifth-course students, who had more experience in information analysis.
- Identify the general structure of the text. When abstracting, we must detect the structure of the original document. […] the learners must recognize what types of documents they are dealing with, since this will help them greatly in subsequent selection, organizational, and construction tasks. 31
- Underline text markers. Interest in what are known as "markers" stems from their potential to help detect text structure, as they signal sentences of particular relevance and sections of the text. They can be classified into three types: additional information (also, in addition, moreover); contrast of idea or clarification (however, nonetheless, although, yet); and conclusion or summary (therefore, hence, as a result of, in summary). Only final-course students performed this stage.
- Select the most important sentences. This stage allowed us to recognize the importance of what appears on the surface of the text, thus reducing the text that needs to be worked on.
- Generalize selected sentences.
The aim of this stage was to see how the students rewrote the most important sentences chosen in the previous stage, making them more coherent and meaningful for the abstractor. This section was only required of final-course students.
- Group the selected and generalized sentences. The purpose of this stage was to reveal the students' capabilities in finding associations between the sentences they had selected and generalized, by putting them into smaller groups. This stage was only required of final-course students.
- Preparation of a graphic schema. Through this stage, we observed the type of schema the students used, together with their ability to structure the information.
- Extract keywords and organize them graphically in a conceptual map. The representation of a text through keywords reveals the abstractor's comprehension and analysis capabilities and, to a certain extent, those of synthesis and expression. We opted for free choice rather than using a controlled language. Final-course students were also asked to provide a conceptual map of the associations between the keywords. We were therefore able to observe the type of visual organization and the relation between the keywords selected (regardless of whether the choice of words was correct or not).
- Writing up the abstract. This concluding stage was essentially studied from the point of view of expression and synthesis abilities.
The test was carried out with two groups of students: the first made up of students beginning the Library and Information Science (Bachelor) degree, and the second of final-course (Master) students. As there were substantial differences in the training and capacities of the two groups, additional sections related to content taught on the Indexing and Abstracting Techniques course in the final year of study were included in the trial template completed by the Master students. Table 1 presents the stages the two groups were asked to follow in the trial. (Table 1, final rows: IV. Information representation: Bachelor students identify keywords; Master students identify keywords and organize them (their relation to each other) in a map or similar figure. V. Expression: both groups write up the abstract.)
The text to be abstracted was handed out with the template. As Spanish is the official language used for instruction on these courses, and the students' level of English was not sufficient to tackle a text in this language, we looked for texts, preferably scientific, in Spanish, with an abstract, keywords, and references. The texts had to be short enough for the students to be able to write the abstract, following the template provided, in the available time (three hours). The subject matter of the article was related to the Library and Information Science degree, in part because the subject was familiar to the students, but also because it would provide them with an idea of the type of research carried out in the field. Together with the trial instructions, the students were given the article, from which the abstract, the keywords, and the references had been removed to avoid any influence on the students' results. The final version was 4 pages long and had a total of 1,835 words.
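As a brief illustration of the kind of measurements the template supports, the following minimal Python sketch (written for this presentation, not a tool used in the study) flags the marker cue words listed above and computes the share of the 1,835-word source text that a set of selected sentences retains; the example sentence is invented, and the 410-word figure anticipates the first-course mean reported in the results.

```python
# Minimal sketch (not part of the study) of two template notions:
# (1) flagging the marker cue words described above, and (2) computing what
# share of the source text a student's selected sentences retain.
# Matching is naive substring matching; the sample sentence is invented.

MARKERS = {
    "addition": ("also", "in addition", "moreover"),
    "contrast": ("however", "nonetheless", "although", "yet"),
    "conclusion": ("therefore", "hence", "as a result of", "in summary"),
}

def find_markers(sentence: str) -> list[str]:
    """Return the marker types whose cue words appear in the sentence."""
    s = sentence.lower()
    return [kind for kind, cues in MARKERS.items() if any(c in s for c in cues)]

def selection_share(selected_words: int, source_words: int = 1835) -> float:
    """Percentage of the source text retained; 1,835 words is the trial article."""
    return 100.0 * selected_words / source_words

print(find_markers("In summary, document delivery services keep expanding."))
# -> ['conclusion']
print(f"{selection_share(410):.1f}%")  # first-course mean of 410 words -> 22.3%
```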
Test Sample and Conditions
Two groups of students were chosen, the first made up of 40 students on the core-curriculum subject "Document abstracting" from the Library and Information Science Diploma (Bachelor), and the other of 38 students on a core-curriculum subject, "Indexing and abstracting techniques for scientific documents," from the final course of the Documentation Degree (Master), both taught at the University of Granada. In this way we were able to appreciate the different levels of skill acquired by students, from those just beginning their degree to the more experienced. It should be mentioned that although the two subjects both deal with abstract preparation, their approaches are not the same. The first-year course in document abstracting predominantly centers on text analysis and comprehension: once students begin university, they have to become familiar with the various document typologies, particularly with scientific texts, and it is important for them to learn to understand and structure these documents. In the final-year course, "Indexing and document abstracting," the student is assumed to be more familiar with scientific texts and more accustomed to text comprehension; as a result, information representation and the use of conceptual maps are dealt with in greater depth. Although both courses concern the subject of abstracting, they approach it from different angles: in the first year, the course centers more on the abstract as a product, with the study of the various abstract production procedures and their stages, and an in-depth exploration of textual structure detection. The final-year course focuses more on information representation and its links with new technologies. A weekday at the end of the final semester was chosen to carry out the trial. Participation was voluntary and took place on 23 and 24 May 2006 with a total of 19 Master and 18 Bachelor students (somewhat less than half the potential students). The sessions lasted 3 hours. Students were able to ask the expert tutors present in the classroom for clarification on how the schema should be done or on the type of visual representation they were required to prepare.
Data Collection and Processing
Once the templates had been collected, all the data provided by the students were entered into an Excel spreadsheet. This information was standardized and codified for ease of handling. We then worked through the sample text ourselves: each research team member abstracted the text according to the template, following all the stages outlined. The three authors of the study then held brainstorming sessions to come up with the best solution for each of the stages or sections of the template, bearing in mind that the assessment of some of the stages had to be based on the students' responses in previous stages; for example, to assess the way the sentences had been grouped, the sentences chosen in the previous stage had to be considered. This allowed us to spot any bias or doubts that could arise when assessing the results. The next step was the data analysis, which followed each of the sections of the template:
• Descriptive data: We totaled the number of students holding other university qualifications or carrying out a professional activity compatible with the academic course. We also observed the students' level of familiarity with the subject and the difficulties they had encountered with the terminology.
• Procedure: The stages proposed by the students were gathered and standardized for subsequent tabulation.
• Unknown words: The words the students had not understood were listed and tabulated, differentiating between LIS terminology and common words.
• Subject and author's intention: This assessment was somewhat subjective, as each student answered this question in his or her own words, and as such it was not standardized. We assessed the two aspects (subject and author's intention) on a scale of 1 to 3 to codify the answers, where 1 indicated no identification; 2, approximate identification; and 3, identification of the respective aspect.
• Text structure: We noted the text structure elements that had been identified, verified whether or not they were correct, and assessed them on a scale of 1 to 5, with 1 representing the lowest and 5 the highest score. We took into account the way each part was named and the delimitation of the sections of text.
• Markers: We observed which markers had been selected, whether they were correct or not, and the frequency with which they had been chosen. We were thus able to see whether the students were aware of what markers were and of the importance of being able to locate them in order to understand the text.
• Sentence selection: We counted and standardized the number of sentences chosen and totaled the number of words in those sentences to calculate the percentage of the whole text they represented, bearing in mind that formal selection usually amounts to around 50 percent. We also calculated the percentage of correctly selected sentences and the number of key sentences that should have been selected but were omitted. It is not an easy task to determine the exact number of important sentences in a text, as this is essentially a subjective exercise; consequently, we analyzed those the students had underlined and only considered incorrect those that were obviously superficial or repetitive. For the same reason, only the failure to mention a reference sentence for any one of the sections in the text structure (the 5 basic sections) was considered an error of omission.
• Generalization: We calculated percentages of reduction on the previous stage and on the original text to see whether the number of sentences or words increased or decreased when the students expressed themselves in their own words.
• Grouping: We first counted the groups of sentences and then assessed how appropriate the groupings were in relation to the selected sentences, on a scale of 1 to 3.
• Schema: We looked at the type of schema the students had produced and, on a scale of 1 to 5, assessed how they structured the information and represented it in a schema (compared with the "grouping" data in the case of the Master students, and with "structure" in the case of first-year students). Finally, we assessed the appropriateness of the schema to the original text, also on a scale of 1 to 5.
• Keywords: Three steps were followed in this analysis:
- The keywords suggested by each student were analyzed and synonyms were removed, as the students were not provided with a controlled vocabulary and many of them used similar terms such as electronic information / digital information. Hence, for data-handling purposes, the number of keywords identified by each student was reduced in some cases.
- The percentage of correct keywords identified among all those proposed (accuracy) was calculated, together with the percentage of correct keywords out of the total number of keywords that should have been identified (thoroughness). Up to 5 keywords were accepted as valid to take these measurements.
- Finally, to obtain an overall picture, we examined the general frequency with which the proposed keywords appeared.
• Visual organization: We focused on three aspects to assess this section: the ability to coherently organize keywords, the ability to represent the text content, and the choice of graphic used. The first two capacities were assessed on a scale of 1 to 5, where 5 represented the highest score.
• Abstract: The following aspects of the abstract were analyzed:
- Number of words. This gave us the percentage of reduction on the text.
- Writing style. This allowed us to check whether the abstract was correctly written, with no spelling mistakes, repetition, literal copying from the text, examples, and so on (scale of 1 to 5).
- Representativeness of the text content. This allowed us to see whether the abstract represented the content of the text (scale of 1 to 5).
- Proportion. The proportionality of the abstract was analyzed; that is, whether each section of the text was reflected in the abstract in its true measure (scale of 1 to 5).
• General comments: We introduced a section in the Excel spreadsheet to note any general observations.
Results
Of the 37 completed templates collected, 19 were from final-course students (11 women and 8 men) and 18 were from first-course students (12 women and 6 men). This sample had the following characteristics:
• None of the first-course students combined a professional activity compatible with their studies or held any other university qualification.
• Of the final-course students, 21 percent had a university qualification other than the Library Science Diploma, and 31.5 percent combined their studies with a professional activity.
Text Assessment
When the students were asked how familiar they were with the text subject area and how complicated they found the terminology, the following opinions were obtained. Familiarity with the subject area was higher in final-course students, 50 percent of whom claimed to be quite familiar with the subject and 15.79 percent very familiar, whereas over half the first-course students were only slightly familiar with the subject and 11.11 percent were not at all familiar. This result was only to be expected, given that the subject was related to library science and the final-course students logically had much greater knowledge. Generally speaking, neither of the two groups found the terminology especially complicated, particularly the final-course students, who had a broader education and richer vocabulary. Most of the first-course group considered the terminology not very complicated (66.67%), a few found it quite complicated (11.11%) and 22.22 percent found it moderately complicated. Final-course students had even fewer problems with the terminology: 26.32 percent found it easy or not at all complicated, and nearly 70 percent stated it was moderately or not very complicated.
Procedure Used in Abstract Preparation
The mean number of stages identified by first-course students was 3.88, while for final-course participants it was 4.29. The latter group may have identified more stages because they followed a more thorough, complex process than the first-year students, who were at the start of their degree. The stages the students identified in approaching the abstracting of the text are detailed below. This does not mean, however, that the students rigorously followed these stages or did not unconsciously use others; rather, these were the ones they identified because they considered them more logical and because they provided a base for writing the abstract.
There is reasonable consensus on several stages: rapid reading, detailed reading, underlining, extracting the main ideas and writing the abstract, all of which are highly logical and, to a large extent, coincide with the related doctrine. The final-course students reported paying more attention to the structure (47.06% of the sample) and, because of their experience, identified more stages. These tended to be stages that identified "extra-textual" elements such as the title, the structure, the identification of keywords (which allow the information to be better understood and structured) and the typography, which allows the structure of the text to be better understood. Some students also used other stages, such as looking at the subject (only 5.88%), which enabled them to understand the text as a whole. Very few (5.88% of the two samples) used a schema to write the abstract, although final-course students did use schemas to represent the information, which proved to be a fairly effective system.
Unknown Words
There were no major difficulties with the terminology, and any unknown words were on the whole English expressions. The students were able to find the meaning of practically all the words using the Internet (the trial was carried out on computers with an Internet connection). The term "STM" (Scientific, Technical, and Medical) caused the greatest difficulties both for first-course (44.44% of the sample) and final-course students (31.58%), followed by the term metadatos for first-course students (38.89%), perhaps because it is a technical term on the degree they had just begun, and "peer review" for final-course students (21.05%).
Identification of the Main Subject Matter
In general, the students had no major difficulty in identifying the main subject matter; only 15.79 percent were unable to do so and referred to more secondary questions. The remaining 84.21 percent identified the topic either approximately (2) or correctly (3). We can therefore say that they were able to identify and analyze at a general level.
Identification of Author's Intention
In general, the students accurately identified the author's intention; most of them (63.16%) realized that the author's intention was to report on the situation of document delivery in Canada and to detail the initiatives undertaken and planned in this field. Only 10.53 percent failed to identify this intention. This element is linked to the subject matter but requires a deeper analysis of the text.
Structure
The mean score for first-course students was 4.35 and for final-course students 3.1, out of a possible total of 5 points. The first-course students therefore performed better, which may appear surprising, since a priori the final-course students should have a greater ability to identify text structure. However, further analysis of the responses indicated that this may be due to a "bad habit" picked up in the final year: a relatively high number of students suggested an OMRC (objectives, methodology, results, and conclusions) structure because it is the one most commonly found in scientific articles. The chosen text was not structured in this way; rather, it was the presentation of a specific situation in a specific place and took the following structure:
• Introduction / contextualization of the problem.
• Specific presentation of the situation and initiatives in Canada.
• Conclusions / future expectations.
What was surprising was the high scores of the first-course students.
This is due to the emphasis on structure identification in the first-year course content and the fact that they had carried out numerous practical exercises on both scientific and general texts in class.

Markers
A mean of 4.3 markers was identified by each student, but many were not found or were not given sufficient importance. The students identified a total of 17 different markers, all of which could be classified as reasonable choices, although perhaps they were not the most important. Clearly, not all of them were equally important, as some indicated the beginning of a section, others emphasized a relevant sentence, and others linked the ideas in the text. In general, neither of the two groups used this technique to full advantage, or at least not consciously.

Selection of Most Important Sentences
The students' mean values in this stage were as follows. We can observe that the final-course students made greater reductions than the first-course group in this initial information synthesis, with a mean of 17.42 sentences selected by the first course and of 13.44 by the final course, representing 410 and 325 words, respectively. However, further analysis of the selected sentences revealed better results from the first-course than from the final-year students. A slightly higher percentage of correct sentences was selected by first-course students: 72.2 percent, as compared to 60.88 percent in the final-year group. The final-course students were also less successful than those on the first course in dealing with all aspects of the text. On average, the first-course group omitted to mention 0.83 of the basic aspects of the text, while this mean rose to 1.33 in the final-course group. This aspect is fairly closely linked to the percentage of reduction made, and the poorer results of the final-course group were to be expected as they had selected fewer sentences.

Generalization of Most Important Sentences
As shown in the following table, the final-course students (the only group to perform this stage) made a slight reduction in the number of both sentences and words as compared with the previous stage. Not all were lower, however, and, in fact, some were higher, showing that not all the students had properly understood the purpose of this stage.

Sentence Grouping
Only final-course students were asked to complete this stage, of whom only 17 responded. The results are shown in Table 6. A high reduction percentage can be observed: the mean value of 11.47 sentences in the previous stage dropped to a value of 6.18 groups of sentences (61.04%). Moreover, the sentence grouping obtained a fairly high value (2.41 out of 3). It should be noted that the scores given for sentence grouping were based on the work of previous stages, and we were therefore assessing the ability to logically group the previously selected sentences.

Schema
The type of schema used was clearly different in the two groups: the final-course students showed a preference for graphic schemas (67%), whereas 95 percent of the first-course group opted for linear schemas. This is explained by the final-course group's greater knowledge of the different information representation techniques, which they studied in "Indexing and Abstracting Techniques" taught on the final course. Two aspects were assessed in this section: the ability to structure the information extracted in previous stages and its appropriateness to the text content.
Both groups showed good ability in structuring the information they had gathered in the previous stages, although the final-course students obtained slightly higher scores (means of 4.28 in the first and 4.51 in the final-course group). However, the schemas prepared by the first-course students better reflected the text content (4.33 in the first as opposed to 3.94 in the final course), perhaps due to the fact that in the "document schema" section they showed greater skill in identifying the text structure.

Keywords
Both groups identified a total of 26 different keywords, 15 of which were included by the two groups. As can be seen in the following figure, the mean number of words put forward by the first-course students was 4 per student, which, once synonyms had been removed, fell to 3.94 per student. The final-course group identified a mean of 6.29 keywords per student, falling to 5 when synonyms were eliminated. The mean number of correct keywords was slightly higher in the second group, although both groups came close to 3 of the 6 possible options. As the mean number of correct keywords identified by the two groups was similar, the accuracy and thoroughness values obtained were conditioned by the number of keywords selected. Hence, the first-course students, who had selected fewer keywords than the final-course group, obtained higher accuracy values but lower thoroughness values. Three of the 6 keywords we considered appropriate were detected by most of the students in both groups, and two, perhaps the most debatable of the six, were only selected by a small percentage of the students. One keyword, slightly more difficult to identify but still important, was identified more successfully by the final-course students, perhaps because of their greater ability to generalize and their familiarity with the subject matter. However, on the whole, both groups performed well in this task.

Visual Organization of the Information
Only the final-course students were involved in this stage, as they were more familiar with graphic representation of information and the various techniques and methods associated with it. The overwhelming majority of this group opted to use diagrams in their representation (88%), and more than half of this percentage used arrows to indicate the relationship between the diagrams (conceptual maps), unquestionably one of the clearest and most suitable ways. Only two students used a different type of graph, a hierarchy graph and a pie chart, the latter being particularly unsuitable for this case. The results we obtained from the assessment of graphic representation quality were poorer than those from the schema preparation stage, as it was more difficult to relate just a few concepts rather than sentences. Our assessment of the students' ability to relate the selected keywords on a scale of 1 to 5 resulted in a mean score of 3.65, but when we analyzed the value of the graph as a representation of the text, the mean fell to 2.94. In most cases, errors were due to an incorrect relation between concepts. The choice of keywords also conditioned the way they were associated; and if a correct selection had not been made, it was clearly more difficult to provide the right relationship among them.
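The accuracy and thoroughness measures used in the keyword analysis above behave like precision and recall computed against the six reference keywords. The following is a minimal sketch of how they could be calculated; the keyword sets are invented placeholders, not the study's actual data.

```python
# Hedged sketch: "accuracy" and "thoroughness" assumed to correspond to
# precision and recall against a reference keyword set. The keyword lists
# below are hypothetical, not the ones used in the study.
REFERENCE = {"document delivery", "interlibrary loan", "canada",
             "electronic documents", "copyright", "libraries"}

def accuracy_and_thoroughness(selected: set[str]) -> tuple[float, float]:
    """Return (accuracy, thoroughness) for a student's keyword selection.

    accuracy     = correct keywords / keywords selected   (precision)
    thoroughness = correct keywords / reference keywords  (recall)
    """
    correct = selected & REFERENCE
    return len(correct) / len(selected), len(correct) / len(REFERENCE)

# A student who selects few keywords tends to score higher on accuracy but
# lower on thoroughness, as observed for the first-course group.
acc, thor = accuracy_and_thoroughness(
    {"document delivery", "canada", "libraries", "metadata"})
print(f"accuracy={acc:.2f}, thoroughness={thor:.2f}")  # accuracy=0.75, thoroughness=0.50
```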
Abstract
The results are shown in the following two tables. The abstracts produced by the final-course students were more concise, with a mean of 142.74 words (7.70% of the text total); the first-course students reduced the text to 11.59 percent of the total, using a mean of 212.61 words in their abstracts. Although this represents a fairly significant difference, it did not substantially affect the quality of the abstracts. Our analysis of the abstracts' quality showed that the groups obtained a similar score for the representativeness of the text content (3.89 and 3.94) and the proportionality of the abstract (3.83 and 3.56). However, the final-year students presented a better writing style (4.33 compared to 3.11 obtained by the first-course group), evidenced in better sentence linking, better use of punctuation, and less direct copying from the text, to name a few. To a great extent, this is the factor that determined the slightly higher overall assessment in the quality of the final-course students' abstracts. In general, the final abstracts are of a relatively good standard, and the main shortcomings, aside from writing style, are due to a failure to identify the structure of the text and errors in text schematization. As a result, the structure of a number of the abstracts suffered from a lack of proportion.

Discussion
Our analysis of the results revealed a series of relevant differences between the first- and final-course students, the most significant of which are the following:
• The final-course students were much more concise than the first-course students. However, they used more keywords, possibly because of their greater knowledge of indexing techniques.
• The first-course students were more successful at identifying the text structure, preparing schemas that correspond more appropriately to the text, and selecting the most important sentences.
• The final-course students had a better writing style and showed greater skill in structuring the information.
As can be seen, there is a set of skills directly related to the stages in document abstracting that can be discovered through an analysis of these stages.

Comprehension and Analytical Skills
In the stages associated with these skills (identification of the text structure, selection of keywords, and selection of the most important sentences), we observed a highly developed ability for comprehension and analysis in the first-course students. Particular emphasis is placed on these skills in the "Document abstracting" course curriculum, in which intensive practice of text reading and structure analysis takes place, the object being to enable students to understand the text correctly and thoroughly grasp its meaning. Final-year students do not tackle these aspects in such depth on the "Indexing and abstracting techniques" course, essentially because they are assumed to be proficient in this skill, although clearly, if they do not systematically and regularly use what they have learned, they gradually lose this proficiency. The final-course students did not score badly, but simply lower than the first-course group. In general, they identified the text subject matter and the author's intention, and their selection of keywords was relatively good. In other words, they understood the text, but in the more thorough analysis (structure or the selection of the most important sentences) they were left slightly behind. How can this situation be improved?
By attempting to improve text analysis competencies through exercises similar to those done by the first-course students. Identification of text markers was not a strong point in either of the groups, and it may be appropriate to perfect this technique to strengthen the students' text analysis skills.

Synthesis Skills
This skill was best seen in the writing up of the abstract stage, although the most important sentence selection stage also provided significant clues. A clear tendency to emerge in various stages was that the final-course students were much more likely to synthesize; they said what they wanted to say more succinctly. Although their scores were lower in the selection of important sentences stage, this weakness was due to a failure in analysis, not in their ability to synthesize, which was, in fact, quite high. In the abstract, the final-course students obtained a similar overall score to the first-course group, but using far fewer words. As in the sentence selection stage, the problems arising in the abstract (proportionality) derived from previous stages and were not due to a lack of synthesis skills. This may be explained by the fact that the instruction the students received at university has fine-tuned their ability to synthesize as an important transversal skill in many subjects. Moreover, on the course studied they have learned the value of being concise through practice in schematization and preparation of conceptual maps. The first-course students have not yet developed this skill through practice and training.

Information Organization and Structuring Skills
The final-course students' skills in these areas were adequately developed. The way they prepared schemas, grouped sentences, and represented the information graphically evidenced a great ability to structure information, which was to be expected since the "indexing and abstracting techniques" subject focuses heavily on these skills. As in the previous stages, the shortcomings we detected were due to problems in the detailed text analysis, but the information they selected was adequately organized and structured. The first-course students have also acquired these skills, and their results were fully satisfactory; certain aspects related to schema typology could be improved, but, on the whole, they showed a good grasp of these skills. The results they obtained in these stages were positively influenced by the high quality of their text analyses, which greatly contributed to their performance in subsequent stages. We can therefore conclude that students develop information organization and structuring skills on both courses, although they are more firmly consolidated in final-course students.

Expression Skill
Expression skill, evidenced in the writing up of the abstract, is of great importance, as it can be clearly seen in the final product of any communicative activity. It was more highly developed in the final-course students, as a result of much more training in this aspect, ranging from project-based work to exams. It may therefore be expected that the first-course students will also develop this skill during their time at university. However, to ensure that this happens, the maximum number of exercises involving text writing should be included in their instruction program.

Conclusions
The transversal nature of information skills means they can be appreciated in any learning process and, in general, in any aspect of life.
Knowledge about how capable students are in these skills is essential if they are to be improved, if weaknesses and strong points are to be detected, and if corrective measures are to be adopted. The discipline of document abstracting is studied on the Library and Information Science degree and, on the whole, centers on the product: the abstract. However, if abstract preparation is to be properly undertaken, various stages are required, each drawing on a set of skills and abilities. If we focus only on the abstract as the final result, ignoring the stages necessary in its creation, we cannot see which competencies and abilities need to be strengthened. Through this study we have analyzed these stages by observing how two groups of students produced an abstract, thereby detecting the strengths and weaknesses of each group. Likewise, by comparing the curricula for the document-abstracting subjects taught at the University of Granada, we were able to identify the causes of these differences and ways to improve skills in these competencies. Our case showed that the instruction received by first-course students in information analysis skills was appropriate, whereas the corresponding course taken by final-course students focused more on correct structuring and graphic representation of information. To improve competencies related to document abstracting in the field of information analysis and synthesis skills training, greater emphasis must be placed on the learning process, which can be advanced through practical exercises in the corresponding subjects. The following will be of interest to the case in hand:
• Exercises to improve reading speed, attention, and comprehension (individual and feedback/sharing ideas).
• Exercises in extracting thematic concepts from original documents following models by Lasswell 32 and Ranganathan 33 to improve skills in identifying theme and rheme structures in documents.
• Learning activities dealing with visual representation of text concepts using the concept map technique (individual and feedback/sharing ideas).
• Exercises to assess abstract structure and its correlation with the original document (individual and feedback/sharing ideas).
The use of action-research methodology is particularly appropriate for this type of study, as its purpose is to get to know the student and thereby improve his or her training. The use of standardized templates that reveal the students' skills, adapted to the students' level and the characteristics of the desired information, provides this knowledge and, if used on a regular basis (at the beginning and end of a course, every year, or another appropriate time), additional valuable information can also be obtained, including the extent to which the student improves throughout the course, the validity of a certain teaching method, or even the work of the teaching staff.
Trip Chaining Model with Classification and Optimization Parameters

In order to model the complex requirements of users travelling in an urban environment, the relevant parameters for creating activity chains have to be identified. In this study, travel related parameters were collected and grouped into two main types: classification parameters and optimization parameters. In the case of optimization parameters, further grouping was performed, where general and comfort parameters were introduced. Additionally, the possible values and data sources of the parameters were identified. A utility function was created to take into account the optimization parameters and the weights. Weights related to comfort optimization parameters were aggregated to decrease the number of required settings by the users. Finally, the features of the proposed optimization algorithm are described. With the identified parameters, aggregated weights, and elaborated utility function, activity chains can be optimized for users with different requirements.

Introduction
Recent developments in the field of travel behavior are dealing with topics like activity-based trip analysis, mode choice modeling, travel demand management, and flexible mobility options. About 20 years ago, Bhat and Singh [1] developed an analytical framework to identify the travel patterns of workers by estimating the commuting mode choice, the number of stops, and arrival periods. In the same period, Wen and Koppelman [2] found empirical results to demonstrate the connection of individual parameters to activity location choice and tour formulation. Bowman and Ben-Akiva [3] presented a daily activity scheduling concept where activity and travel related decisions were handled together with transport mode, time choice, and activity location. Islam and Habib [4] investigated the effect of socio-demographic characteristics on activity chains and found that several characteristics played a major role in influencing trips. Many other aspects of activity chains have been studied, such as the comparison of travel behavior by men and women by McGuckin and Murakami [5]; the analysis of activity chains in specific regions by Subbarao and Krishna Rao [6]; the assessment by age group by Golob and Hensher [7]; and travel patterns of specific user groups [8]. These papers highlight that activity-based modeling needs to include several types of parameters. Mazzula [9] analyzed user responses by applying an activity-based approach, where stated preference and revealed preference results were combined. Random utility models were used to simulate travel behavior and potential choice alternatives. In order to define traveler profiles, Pronello and Camusso [10] used factor analysis and cluster analysis. The study showed how significant constraints such as necessity, time saving, and transport supply determine a behavioral change, while Prillwitz and Barr [11] tried to assess the role of attitudes in travel decisions. The results demonstrated the usefulness and limitations of segmentation approaches and underlined the need for more comprehensive mobility style frameworks. Haustein and Hunecke [12] worked on the creation of useful segmentations of travel groups who share similar attitudes and preferences. In their contribution, attitudinal, socio-demographic, geographical, and behavioral segmentations are compared to provide sustainable travel choices.
The research question thus arises of how to define a suitable set of parameters and model the activity chain optimization with a utility function. Several researchers have dealt with aspects of trip chaining, activity scheduling, and travel behavior, where user related parameters and grouping options were also investigated. However, the representation of detailed user requirements in an activity chain optimization framework has not yet appeared. Therefore, in this paper, a model with a set of parameters and a utility function are introduced. Such a detailed description of trip chaining parameters and the related utility function with weights has not been realized before; thus, the contributions of this paper provide real added value to the literature. With this achievement, it will be possible to create more advanced activity chains taking into account user requirements. The main contributions of this paper are:
• To provide a detailed classification of parameters related to trip chaining.
• To identify the types, potential values, and data sources of the parameters.
• To aggregate weights related to parameters, so that user settings can be easier.
• To create a utility function for the activity chain optimization.
The rest of this paper is structured as follows. Section 2 presents the literature review. In Section 3, the parameters are defined, which were separated into two groups: classification and optimization parameters. In Section 4, the model is elaborated, where the utility function connects with the weights and parameters. In Section 5, the implications, limitations, and realization options are discussed. Section 6 provides the conclusions.

Literature Review
Trip chaining, often called activity chain optimization, has been investigated by Timmermans et al. [13], who described and analyzed travel patterns. They proved that travel patterns are, in general, independent from spatial settings. Liao et al. [14] modeled the activity-travel scheduling problem to predict short-term effects of travel information systems and travel demand management. The authors developed a multi-state supernetwork, where the temporal dimension was also included when selecting the locations of activities. More importantly, personal preferences were taken into account, which supported optimal solutions for the travelers. Buliung et al. [15] explored the spatial variety of activity patterns and highlighted the importance of the flexibility of activities; however, no algorithm was developed to provide solutions to the travelers. Balaji et al. [16] worked on a hybrid approach that combined customer prioritization with optimization algorithms. In their model, users were clustered, and optimal routes were assigned using the analytic hierarchy process (AHP). Hafezi et al. [17] developed a method for modeling the daily activity patterns of individuals. The dependencies between activity type, activity frequency, and socio-demographic characteristics were taken into account while employing a random forest model. Kang and Recker [18] proposed an algorithm for daily activity scheduling using the location selection problem, where the locations of activities were chosen using predetermined and alternative locations. In order to define the best solutions, a utility function was introduced. The problem of the huge search space was in this case solved with dynamic programming. Hilgert et al. [19] developed a mobility assistance system that gathers information from timetables and a real-time information system.
Furthermore, it knows the user plans and can reorganize weekly activity schedules according to personal preferences. They included both personal and network related parameters. In order to collect data on activities and user preferences, traditional survey methods and automatic data collection methods can be applied [20,21]. The answers to the questionnaires can be analyzed by AHP in order to determine the preferences of user groups [22]. However, these methods require time and human resources. A current application for automatic data collection is GTPlanner [23], which takes into account personal preferences when planning routes for users and also provides information on their trips. Lawton [24] claimed that there were four possible sources of information to build up or feed activity-based models, which may be used to identify parameters of an optimization model: household surveys (revealed preference) to study activities that influence travel demand; stated response surveys to investigate activity-travel patterns; longitudinal panel surveys; and retrospective surveys of activities to explore long-term behavior (e.g., household location decisions). The combination of these methods with information technology supported information may help to identify personal parameters to establish a proper activity chain optimization model. In connection with this, an important aspect of a survey by Frignani [25] was the attempt to capture activity-travel planning attributes. The planning attributes were focused on timing and the constraints of planning decisions and explored whether user decisions regarding transportation mode are mainly driven by routine, while the choice of start time of activities is more individual and impulsive. In addition, Artenze et al. [26] developed a latent-class user model for tourists, where they used activity location-based parameters and trip-based parameters (i.e., tourist attraction values, time-use characteristics, and point of interest (POI) attributes). With a multi-attribute utility function, personalized optimal tours were offered to the users. This approach was also utilized in the current research. Relevant papers were collected (Table 1), which cover the aspects of activity chain optimization, especially the general goal setting (modeling, with algorithm development), the used network (multimodal, with activity types), the applied calculation method (optimization, with utility function), and the types of parameters included (classification, with optimization parameters). Artenze [27] placed emphasis on providing personalized advice for travelers. The main idea was to find out travel parameters based on choices, where empirical testing was performed based on a travel choice experiment; however, no optimization was performed. Nijland et al. [28] developed an activity-based model, where daily agendas were modeled based on a web survey with reported activities. The research analyzed the effects of planned activities on the decision to schedule an activity, but no optimization was realized. Another activity travel scheduling model was created by Miller and Roorda [29] based on travel diaries. Their aim was to understand the process of how travelers schedule and reschedule activities with a utility maximization approach; however, several features were lacking, such as flexibility and multimodality. Chowdhury and Scott [30] examined the influence of the built environment on trip-chaining behavior with regression models.
They took into account personal and household characteristics, and a few attitudinal variables, but did not use detailed optimization parameters. Their focus was rather on the modeling of accessibility, and not on the optimization of trips during the day using a utility function, which is present in our model. Dib et al. [31] worked on a route planning problem in a practical way. They developed route planning methods in multimodal transportation networks using genetic algorithms and variable neighborhood search methods. In contrast to traditional algorithms, this approach was fast enough for practical routing applications. However, the approach was presented on a theoretical network and did not consider daily activities and optimization parameters. Ghiani et al. [32] solved the traveling salesman problem with heuristic algorithms to generate optimal activity chains. Here, the implementation of daily activity optimization was presented; however, neither flexibility nor a complex utility function was elaborated. Nuzzolo and Comi [33] created a method for choosing paths in multimodal travel networks. The method used an individual traveler utility function, which allowed personal preferences to be included, although daily activity chains and complex optimization parameters were not considered. Västberg et al. [34] developed a dynamic discrete choice model for daily activity travel planning, including individual preferences and generating a utility function. Additionally, time-space constraints were taken into account, but personal and optimization parameters were not. One of the most complex solutions was provided by Pougala et al. [35], who elaborated a scheduling method for daily activities where a complex utility function with flexible activities was included using a mixed integer programming approach. They covered four transportation modes and 11 activity types; however, classification parameters were not considered. Malik and Kim [36] created an optimal travel route recommendation mechanism to predict the best routes for tourists based on neural networks and particle swarm optimization. In their route optimization, a complex utility function was created, and five main optimization factors were included; however, activity types did not play a role, and only a limited number of optimization parameters were considered. Another excellent approach was elaborated by Charypar and Nagel [37], who applied a genetic algorithm to provide activity plans, where a complex utility function was created taking into account the preferences of the users. Their utility function included the time and the location of the activity; however, multimodality and activity types were not handled.

Definition of Classification and Optimization Parameters
In order to model the complex requirements of users regarding an urban activity chain, the possible optimization parameters were identified. In the literature, the main typical optimization parameters are time, cost, and comfort. Furthermore, the parameter type, component type, possible values, and data sources were created for grouping the parameters.
Parameter Type: Two types of parameters can be introduced (Figure 1). The parameter type describes whether the parameter is a classification parameter or an optimization parameter. The detailed descriptions of the parameters follow the order of the parameter types, which are actually strongly linked to the component type.
The classification parameters are not used directly in the optimization process, but are crucial inputs for the classification of users into user groups. The creation of user groups facilitates user decisions about setting the weights for the optimization parameters. The user groups possess predefined settings of the weights, where the weights provide only an initial setting and the values can be changed by personal preferences. The optimization parameters are used in the optimization process. Their two main groups are the general optimization parameters without weights (with the exception of time and cost) and the comfort optimization parameters with weights. Usually, general optimization parameters are parameters with fixed or predefined values, where weighting cannot be defined (e.g., opening times). Parameters present directly in the utility function are in italics.
Component Type: Three types of component were identified: the user, the trip, and the location (Figure 2). Most parameters clearly belonged to one component type, but some parameters influenced more component types; therefore, they were placed in the intersections. The user includes classification and optimization parameters, which depend on the individual user. The trip contains optimization parameters and is divided into sub-types according to the transportation modes, as transportation modes have specific parameters. The location consists of those optimization parameters which are connected to the location of the activity.
Possible Values: In the case of optimization parameters with numeric values, the quantification and creation of categories is easy, as exact values can be assigned to the categories (e.g., prices).
The quantification is also possible for optimization parameters with a textual value set by assigning artificially created value categories. In some cases, the optimization parameters can only be categorized by applying heuristic considerations, or the exact values of the categories can be learned by collecting a large number of examples (e.g., crowding). In the case of optimization parameters with weights, the parameters have the following possible values: low, medium, high. Low values represent "good" features, while high values represent "bad" features.
Data source: The data source refers to the origin of the parameters, which can originate from the user (by setting the requested values), from the application (by collecting and evaluating usage statistics), or from external sources (by receiving data or datasets). The external sources can be represented by a transport operator, a municipality, a social media provider, a POI database, or other databases.

Classification Parameters
In the following section, the parameters are grouped by the parameter type. The comfort optimization parameters are further divided by the component type. The classification parameters and their attributes are identified in Table 2, and are mainly connected to the user component type.
• Age, gender, occupation, income, car ownership, family status: The basic socio-economic data, which are required to categorize users into user groups.
• Number of daily trips: Average number of trips during a day (e.g., users with family tend to make more daily trips, while pensioners probably make fewer daily trips).
• Flexibility: Average number of flexible activities during a day (e.g., users with flexible working hours and students tend to have more flexible activities).
• Number of changes: Average number of changes in daily activity plans (e.g., younger people tend to change their mind and have new unplanned events during the day).

General Optimization Parameters
The general optimization parameters were identified. Most of these parameters were without weights, with the exception of time and cost. These parameters are mainly connected to both the user and the location component type (Table 3).

Comfort Optimization Parameters
The comfort optimization parameters were described connected to the trip component type (Table 4).
• Weather (p_9): Measure for the actual daily average weather situation, measured by the temperature and the humidity (e.g., rainy, windy).
Finally, comfort optimization parameters were identified connected to the location component type (Table 5).
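Before turning to the utility function, the following minimal sketch shows how this taxonomy (parameter type, component type, data source) could be represented in code. It is an illustration under stated assumptions, not an implementation from the paper; all class and field names are invented, and the example instances mirror entries of Tables 2 and 4.

```python
# Hedged sketch of the parameter taxonomy described above.
from dataclasses import dataclass
from enum import Enum

class ParameterType(Enum):
    CLASSIFICATION = "classification"        # groups users into user groups
    GENERAL_OPTIMIZATION = "general"         # fixed/predefined values, mostly unweighted
    COMFORT_OPTIMIZATION = "comfort"         # weighted; values low/medium/high

class ComponentType(Enum):
    USER = "user"
    TRIP = "trip"
    LOCATION = "location"

class DataSource(Enum):
    USER = "user"                            # set by the traveler
    APPLICATION = "application"              # derived from usage statistics
    EXTERNAL = "external"                    # operator, municipality, POI database, etc.

@dataclass
class Parameter:
    name: str
    parameter_type: ParameterType
    component: ComponentType
    source: DataSource

# Example instances echoing the paper's tables:
age = Parameter("age", ParameterType.CLASSIFICATION,
                ComponentType.USER, DataSource.USER)
weather = Parameter("weather (p_9)", ParameterType.COMFORT_OPTIMIZATION,
                    ComponentType.TRIP, DataSource.EXTERNAL)
```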
Elaboration of the Method
Utility functions were introduced in order to combine the values of the optimization parameters and to support the creation of activity chains. The utility functions consist of optimization parameters and weights. Weights related to comfort optimization parameters are aggregated weights.

Aggregated Weights
The aggregated weights were introduced to decrease the number of required settings by the users. Each of them influences the relevance of several optimization parameters; thus, typical user requirements can be modeled. The possible values of the aggregated weights are between one and five. These values are predefined by the user groups (average values), but can be changed by the user (Figure 3). The utility functions u_i(p, w) regarding comfort optimization parameters were formalized, creating the mathematical context of dependencies between optimization parameters and aggregated weights. The following aggregated weights were defined:
• Routine (w_r): Measure of willingness to differ from well-known routes; this weight has a general effect on several parameters (e.g., willingness to make detours, if it is beneficial) and is a super-aggregation with an effect on delay sensitivity, lifestyle, quality sensitivity, price sensitivity, and area sensitivity.
• Delay sensitivity (w_1): Average delay tolerated by the user, which depends on the congestion and the incident probability of the chosen trip (e.g., users with high delay sensitivity should avoid congested routes).
u_1(p, w) = p_15 * w_1 + p_16 * w_1 * w_r (1)
• Lifestyle (w_2): Measure for environmental consciousness and security features (e.g., rather using more eco-friendly transportation modes and avoiding dangerous areas).
u_2(p, w) = p_10 * w_2 + p_27 * w_2 * w_r (2)
• Quality sensitivity (w_3): Measure for taking comfort features, price ranges, and parking space into account (e.g., businessmen tend to use cars and visit places with higher prices).
u_3(p, w) = p_11 * w_3 + p_30 * w_3 + p_24 * w_3 * w_r (3)
• Price sensitivity (w_4): Willingness to pay for a certain trip, which includes traffic tolls and parking fees (e.g., workers may travel longer distances, where no traffic toll has to be paid).
u_4(p, w) = p_14 * w_4 + p_29 * w_4 * w_r (4)
• Area sensitivity (w_5): Measure for taking features regarding ratings and the area of the location into account, such as the city area and location area (e.g., users tend to visit restaurants in the city center, but a recreational activity rather close to a park).
u_5(p, w) = p_26 * w_5 + p_23 * w_5 + p_25 * w_5 * w_r (5)
• Biking preference (w_b): Measure of the willingness to use a bike during trips (e.g., students tend to bike more often); this weight has a general effect on biking related parameters.
• Biking habits (w_6): Requirements of the users regarding road quality, biking routes, and weather (e.g., many users prefer built roads and good weather).
u_6(p, w) = p_12 * w_6 + p_13 * w_6 + p_9 * w_6 * w_b (6)
• Car preference (w_c): Measure of the willingness to use a car during trips (e.g., businessmen tend to use their own cars more often); this weight has a general effect on car related parameters.
• Car habits (w_7): Requirements of the users regarding road quality and weather (e.g., certain users do not use their cars in winter).
u_7(p, w) = p_13 * w_7 + p_9 * w_7 * w_c (7)
• PT preference (w_p): Measure of the willingness to use PT during trips (e.g., younger people prefer public transportation, because they can utilize their time more efficiently by reading on the vehicles); this weight has a general effect on PT related parameters.
• PT habits (w_8): Requirements of the users regarding number of transfers, crowding, and vehicle types, including cleanliness, comfortable seats, heating, and air conditioning (e.g., users do not prefer old vehicles without air conditioning during the summer).
• Walking preference (w_w): Measure of the willingness to walk during trips (e.g., young people tend to walk more); this weight has a general effect on walking related parameters.
• Walking habits (w_9): Requirements of users regarding pavement quality, street type, and weather (e.g., certain users prefer nice roads with trees and good weather).
u_9(p, w) = p_9 * w_9 + p_21 * w_9 + p_20 * w_9 * w_w (9)
• Special needs (w_10): The need for special services such as modern low-floor vehicles, the need to avoid stairs or slopes, and accessibility of locations (e.g., users with wheelchairs do not like to visit places without ramps).
u_10(p, w) = p_22 * w_10 + p_18 * w_10 + p_28 * w_10 (10)

Utility Function
The main utility function was defined as the sum of the products of optimization parameters and weights. The optimization parameters were weighted, where weights represent the personal preferences of the users. In the case of comfort optimization parameters, the weights were grouped into aggregated weights, so that users could express their requirements. The optimization parameters were values retrieved from external data sources, whereas weights and aggregated weights were set by the user. The value of time, the cost, and the value of comfort were different between user groups. During the optimization, the utility function is minimized. The minimization of time and cost is a well-known operation. In the case of comfort parameters, the possible values were defined in such a way that low values represent ideal conditions and high values represent not preferred conditions.
min u(p, w) = p_time * w_time + p_cost * w_cost + Σ_{i=1}^{m} u_i(p, w)
• p: Optimization parameters.
• w: Weights, including the aggregated weights.
• m: Number of comfort utility functions u_i.

Optimization Algorithm
The utility function supports the creation of the optimization algorithm. In general, optimization algorithms can be divided into two basic categories. The first type is exact algorithms [38,39], which search the whole solution space and provide a globally optimal solution, however, in most cases with considerably more processing time.
The second type is heuristic algorithms, which use specific rules to speed up the solution. This implies that not the whole solution space is searched; thus, they usually provide only a nearly optimal solution. However, with proper settings, this is acceptable for practical applications [40]. For the optimization of activity chains, a special heuristic algorithm is to be applied. In the case of transportation related problems, the GA framework has been successfully applied to activity scheduling problems [41], such as the travelling salesman problem (TSP) [42], the travelling salesman problem with time-windows (TSP-TW) [42], and the vehicle routing problem (VRP) [43], which are classified as NP-hard problems [44]. These kinds of problems are usually harder to solve as the size of the network grows; however, when using the GA framework, solutions can be calculated in a reasonable amount of time. The optimization algorithm uses this GA framework, which iteratively solves the TSP-TW problem for different combinations. Thus, it provides a set of possible solutions, which are evaluated based on the elaborated utility function. After running the algorithm for several iterations, a nearly optimal solution can be derived for the planned activity chain of the user. The functioning of the algorithm can be described in the following steps:
• Data input: This part is especially supported by the classification and optimization parameters, which were discussed in detail in Section 3. They provide the main input for the optimization algorithm. During the creation of activity chains, it is assumed that the user is already aware of the activities and other parameters, which are provided to the algorithm in advance.
• Creation of alternatives: Priority is one of the most important optimization parameters. Based on its value, if an activity is flexible, then the demanded service may be available in more places. The algorithm has to find these alternative locations, so that better alternatives can replace the original activity locations. However, if an activity is fixed, then the activity location cannot be changed, and thus optimization cannot be performed for this activity.
• Calculation of the utility function: With the original and alternative locations of activities, the utilities between the activity locations can be calculated. The utility function was discussed in detail in Sections 4.1 and 4.2, and it provides the ranking of different alternatives.
• Optimization algorithm: The GA calculates, based on the provided utility function, the best scenarios, which results in an optimized set of activity locations based on the provided classification and optimization parameters. The GA framework runs several times to find possible solutions. It is not ensured that the global optimum will be reached; however, with good parameter settings, the solution can be quite close.
• Visualization: The proposed activity chain has to be shown on a map, where the optimal activity locations are present and the daily route is available.

Discussion
In this study, a well-defined utility function was created to support the optimization of activity chains. The main limitation of the study is the lack of a real-world implementation, which will be carried out at a later stage of the research; however, some considerations are discussed in this section.
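One such consideration is how the flexible-activity alternatives, the utility function, and the GA loop interact. The following deliberately simplified sketch illustrates this interplay; everything in it (location names, the stand-in utility values, the population size, and the generation count) is a hypothetical assumption rather than the authors' system, and a real implementation would evaluate u(p, w) from Section 4 and embed a TSP-TW solver.

```python
import random

# Candidate locations per activity: fixed activities ("home", "work") have a
# single entry, flexible activities have several alternatives.
alternatives = [["home"], ["shop_A", "shop_B", "shop_C"],
                ["gym_A", "gym_B"], ["work"]]

def utility(chain):
    """Stand-in for the utility function u(p, w); lower is better.

    Real code would sum p_time * w_time + p_cost * w_cost + comfort terms;
    here each chain just receives reproducible pseudo-random leg utilities.
    """
    rng = random.Random(hash(tuple(chain)))
    return sum(rng.uniform(1, 10) for _ in chain)

def mutate(chain):
    """Replace the location of one randomly chosen activity."""
    i = random.randrange(len(chain))
    return chain[:i] + [random.choice(alternatives[i])] + chain[i + 1:]

def crossover(a, b):
    """One-point crossover of two parent chains."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.choice(opts) for opts in alternatives] for _ in range(20)]
for _ in range(50):                                  # number of generations
    population.sort(key=utility)                     # selection: rank by utility
    parents = population[:10]                        # keep the fittest half
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(10)]
    population = parents + children

best = min(population, key=utility)
print(best, round(utility(best), 2))                 # nearly optimal activity chain
```

As the paper notes, such a heuristic loop does not guarantee the global optimum; the quality of the result depends on exactly the operator and parameter settings discussed next.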
A crucial point of the development of the planned algorithm is the specific setting of the GA framework, where the genetic operators have to be initiated: the selection, mutation, and crossover operators. Moreover, the parameters of the GA framework also have to be set: the population size, the mutation probability, the crossover probability, and the number of generations. The exact realization of these steps requires an extensive analysis and testing of the proposed framework, which is part of the future research directions. When realizing the optimization algorithm with the proposed utility function, a series of experiments needs to be conducted to analyze the effectiveness of the algorithm. Thus, comparisons between the heuristic and the optimal solutions will be provided. In addition, a sensitivity analysis is needed to check how the setting of each parameter changes the results of the optimization. The utility function could be applied in any preferred location where the following data are available: a map of the city with routes provided by a map operator, the timetable of public transportation provided by the transport operator through an interface, city specific parameters provided by local authorities, and the set of activities provided by the travelers. In case some data are not present, the algorithm would still be functional; however, the optimum would be calculated from fewer parameters. By collecting personal information, the real weights of the users can be acquired. This could be achieved through extraction from usage statistics (e.g., average waiting time for the bus) or by letting the user choose the value of the weight parameter, which is then adapted during real usage (e.g., number of daily trips). As a consequence, the belongingness to a user group can be analyzed (e.g., a certain user likes biking more than the average of their user group). Finally, case studies could be carried out that would include the logging of user trips and comparing the original and optimized versions of the activity chains.

Conclusions
In this paper, a model with a set of parameters was introduced and grouped into classification parameters (to classify users into user groups) and optimization parameters (to provide utilities to the optimization algorithm). The parameters were connected to the user, the chosen transportation mode, or to the location type of the activity. Aggregated weights were assigned to the optimization parameters, which represent the preferences of the users. A utility function was also elaborated to provide input for the optimization algorithm about the preferences of the user. As a conclusion, it was observed that some parameters were easy to include in the optimization algorithm (e.g., time), but some were hard to quantify or collect. In order to realize the optimization framework in real circumstances, a huge amount of external information is required to feed the model.

Funding: The research reported in this paper was supported by the BME Artificial Intelligence FIKP grant of EMMI (BME FIKP-MI/SC).
Validity and reliability of the multidimensional health locus of control scale for college students

Background: The purpose of the present study was to assess the validity and reliability of Form A of the Multidimensional Health Locus of Control scales in Iran. Health locus of control is one of the most widely measured parameters of health belief for the planning of health education programs.
Methods: 496 university students participated in this study. The reliability coefficients were calculated in three different ways: test-retest, parallel forms, and Cronbach's alpha. In order to survey the validity of the scale, we used three methods: content validity, concurrent validity, and construct validity.
Results: We established the content validity of the Persian translation by translating (and then back-translating) each item from the English version into the Persian version. The concurrent validity of the questionnaire, as measured against Levenson's IPC scale, was .57 (p < .001), .49 (p < .01), and .53 (p < .001) for the I, P, and C scales, respectively. Exploratory principal components analysis supported a three-factor structure, with items loading adequately on each factor. Moreover, the approximate orthogonality of the dimensions was confirmed through correlation analyses. In addition, the reliability results were acceptable.
Conclusion: The results showed that the reliability and validity of the Persian Form A of the MHLC were acceptable, and the scale is suggested as an applicable instrument for similar studies in Iran.

Background
Health Locus of Control (HLC, hereafter) is one of the most commonly used parameters of health belief in planning health education programs. In fact, HLC is the degree to which individuals believe that their behavior is controlled by external or internal factors [1,2]. The Multidimensional HLC scales have been used as one of the most efficient measures of health-related beliefs for more than a quarter of a century. HLC has been recognized as an important construct in understanding and predicting health behaviors [3]. It has helped to shape our thinking about the role of beliefs in the context of health behaviors, health outcomes, and health care. According to Rotter's (1966) social learning theory, individuals may have an internal or external locus of control, often abbreviated as the I/E dimension [4,5]. Wallston and colleagues (1978) deserve the credit for having successfully applied Rotter's basic idea to the health domain. The term 'locus' refers to the location where control resides: either 'internal' to individuals, who believe certain events and happenings are due to their own actions and behaviors, that is, that their own actions are directly responsible for the events in their lives; or 'external' to individuals, who believe certain events and happenings in their lives are due to factors such as physicians, chance, fate, and luck. They developed a unidimensional HLC scale and began using it in studies in the 1970s [5,6]. The results from the early studies with the unidimensional HLC Scale convinced the Wallstons that internality and externality are separate dimensions. Following Levenson's (1974) splitting of Rotter's I-E construct into three dimensions (Internal, Powerful Others, and Chance), they developed the Multidimensional Health Locus of Control (MHLC) scales. The MHLC scales consist of two equal and parallel forms, A and B, which are the 'general' health locus of control scales [6].
Wallston, Stein, and Smith (1994) developed Form C of the MHLC, in which they split the Powerful Others dimension into two subscales: Doctors and Other People [7]. Finally, Wallston et al. (1999) added a new subscale assessing beliefs about God as a locus of control of one's health status [8]. Chaplin et al. (2001) used factor analysis for this four-factor scale. Their findings showed that, despite a desirable correlation between the three external factors of God, Powerful Others, and Chance, the four-factor condition, which takes into account an internal control factor, yields the best outcome. These sub-scales of the MHLC can be scored separately as different dimensions [9]. The HLC is regarded as an effective variable in the development of health behavior, clinical capacity, and the determination of health problems. Internal HLC is positively associated with knowledge and attitude, psychological state, health behavior, and better health conditions. On the other hand, external HLC is mostly associated with negative health behaviors and a weak psychological state [3]. As such, various scales of HLC have been developed for general populations or children, and many studies have been conducted on this scale throughout the world, which has led to valuable outcomes in the field of health psychology. The findings of these studies can be found in over 380 articles available in different databases, e.g., Academic Search Premier, Medline, ERIC, Health Source: Nursing/Academic Edition, American Humanities Index, Health Source Consumer Edition, and PsycARTICLES [10]. In the past 25+ years, Form A has been used in over a thousand studies and has been cited in the literature hundreds of times [5]. Comparison of Iranian health beliefs, and especially general health data, with Western data using a common scale such as Form A of the MHLC is also indispensable in order to grasp any features that are characteristic of Iranian samples. Such an approach would provide insight into the general health of the Iranian community setting, so that we could choose or modify a number of health education and promotion projects. Despite its importance, the validity and reliability of the MHLC scale for the Iranian population have not yet been verified. As a first step, the present study assessed the psychometric characteristics of Form A among Iranian college students. In so doing, various kinds of validity, namely content validity, concurrent validity, and construct validity of this scale, will be determined. Then, the reliability of the scale will be examined through test-retest, parallel forms, and internal consistency methods of estimating reliability. Finally, some suggestions and implications for further research will be put forward.

Ethical considerations
Ethical approval for this study was gained from the Research Ethics Committee, which at the time was based at Tarbiat Modares University.

Measures
Form A of the MHLC scales includes 18 items and consists of three subscales, namely Internal Health Locus of Control, Powerful Others Health Locus of Control, and Chance Health Locus of Control. Each of these subscales contains six items with a six-point Likert response scale ranging from 'Strongly Agree' to 'Strongly Disagree'. Scales are scored by summing the respective items for a total scale score. Higher scores reflect stronger endorsement of the MHLC scales [6].
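As a hedged illustration of this scoring rule (a minimal sketch, not the authors' procedure), each subscale score is the sum of its six items, so scores range from 6 to 36. The item-to-subscale keying below is a placeholder assumption; the actual Form A key must be taken from the scale documentation.

```python
# Hedged sketch: sum-scoring the three MHLC Form A subscales.
SUBSCALE_ITEMS = {
    "internal":        [1, 6, 8, 12, 13, 17],   # assumed keying, for illustration only
    "chance":          [2, 4, 9, 11, 15, 16],   # assumed keying, for illustration only
    "powerful_others": [3, 5, 7, 10, 14, 18],   # assumed keying, for illustration only
}

def score(responses: dict[int, int]) -> dict[str, int]:
    """responses maps item number (1-18) to a rating of 1-6
    (1 = strongly disagree ... 6 = strongly agree)."""
    return {name: sum(responses[i] for i in items)
            for name, items in SUBSCALE_ITEMS.items()}

# Example: a respondent who answers 4 to every item scores 24 on each subscale.
print(score({i: 4 for i in range(1, 19)}))
```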
Internal HLC refers to the extent to which personal behavioral factors are held responsible for one's health or illness; Powerful Others HLC encompasses the degree to which one's health is believed to be influenced by others, for example, physicians or other healthcare professionals; and Chance HLC taps one's belief that health depends on chance, luck, or fate. In order to assess concurrent validity, we administered Form A of the MHLC and Levenson's IPC scale (Persian-language versions) simultaneously. Levenson's IPC scale, a six-point Likert-type instrument, includes twenty-four items and, like the MHLC scales, comprises three components: internal, powerful others, and chance. Each component consists of eight questions, which measure individuals' level of belief in the respective dimension. The validity of the IPC scale has been determined against Rotter's I-E scale (1966). Levenson reported Kuder-Richardson reliability coefficients of 0.50, 0.61, and 0.77 for the I, P, and C scales, respectively. The validity and reliability of the Farsi version of this scale were reported by Farahani, Cooper, & Jin (1996); for example, reliability coefficients for I, P, and C among students were 0.76, 0.56, and 0.67, respectively [11]. Procedure The 'forward-backward' procedure was applied to translate Form A of the MHLC from English into Persian. The original 18-item questionnaire was translated into Persian by the authors and then translated back into English by two bilinguals who were blind to the original English version. An expert panel (with expertise in psychology, Persian language, and health sciences) reviewed our back-translation, and some corrections were made accordingly. After that, in a pilot study, the edited version of the questionnaire was submitted to a group of 130 students from Tehran Medical University. There were two purposes for this review: first, to ascertain whether the students' understanding of the questionnaire items was the same as that of the researchers; and second, to determine whether there was any disagreement among the students regarding their understanding of the items. Afterwards, the students' comments were taken into account and modifications were made where necessary. Questionnaires were then filled out by a group of college students. Students completed the paper-and-pencil measures in a classroom setting staffed by research assistants who were available to answer questions if necessary. Participants The participants of this study were 496 college students studying in different courses of medical sciences at Tehran and Gonabad Medical Universities. All of the participants took part willingly and voluntarily in this study, and all respondents in the sample completed the questionnaires in full. Analyses To examine the psychometric characteristics of the Persian version of Form A of the MHLC scales, the following analyses were performed using the Statistical Package for the Social Sciences (SPSS) version 11.5 and STATISTICA software. To establish the validity of the scale, content validity was addressed by faithfully translating (and then back-translating) each item from the English version into the Persian version; concurrent validity was determined through the concurrent administration of this scale with the Persian-language version of Levenson's IPC scale; and construct validity was examined through exploratory factor analysis (EFA), with a minimum loading of 0.40 required to retain an item on each factor.
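The construct-validity step described above (principal components with Kaiser's eigenvalue-above-one rule and a 0.40 loading cutoff) can be sketched in a few lines of Python. This is an illustration of the generic procedure, not the SPSS/STATISTICA routine the authors actually ran, and `data` is a placeholder for the 496 x 18 response matrix.

```python
import numpy as np

def pca_loadings(data: np.ndarray, cutoff: float = 0.40):
    """Principal components of the item correlation matrix with a loading cutoff."""
    r = np.corrcoef(data, rowvar=False)                   # 18 x 18 item correlations
    eigvals, eigvecs = np.linalg.eigh(r)
    order = np.argsort(eigvals)[::-1]                     # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0                                  # Kaiser criterion
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])  # component loadings (sign is arbitrary)
    explained = eigvals[keep].sum() / len(eigvals)        # proportion of total variance
    return np.where(np.abs(loadings) >= cutoff, loadings, 0.0), explained
```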
In regard to reliability, the test-retest and parallel-forms methods were applied using the Pearson product-moment correlation. The internal consistency of the scale was measured with Cronbach's coefficient α. According to Wallston [12], modest reliability, ranging from 0.60 to 0.75, is acceptable in research. Results Of the 496 students, 30.2 percent were male and 69.8 percent were female; they were randomly selected from all the above courses. They had a mean age of 20.6 years (SD = 2.05). Descriptive information for the MHLC and Levenson's I, P, & C scales is included in Tables 1 and 2. Content validity The content validity of the translated version of the MHLC was established, as mentioned in the Procedure section, by faithfully translating (and then back-translating) each item from the English version into the Persian version. Concurrent validity In the concurrent administration of the scales, the participants were randomly divided into two equal groups such that half of the participants first answered the translated Form A and then the Persian version of Levenson's IPC scale, while the other half first answered Levenson's questionnaire and then Form A. This was done to control for order effects. The obtained results indicated significant correlation coefficients between the corresponding factors of the two scales, i.e., 0.57 for Internal (P < 0.001), 0.49 for Powerful Others (P < 0.01), and 0.53 for Chance (P < 0.001). Construct validity Exploratory factor analysis (EFA) Three factors were found by principal component exploratory factor analysis using Kaiser's criterion of eigenvalues above one. The eigenvalues were 4.93, 3.32, and 2.34 for factors one, two, and three, respectively. The three significant factors accounted for 58.90 percent of the total variance (Table 3). Correlation analysis Bivariate correlations among the subscales were calculated. There was a weak positive correlation (r = 0.28) between the Internal HLC and Powerful Others HLC; the correlation between the Chance HLC and Powerful Others HLC was weak and negative (r = -0.31); and a weak negative correlation was found between the Internal HLC and the Chance HLC (r = -0.20). Test-retest To determine the reliability of this scale, 496 students answered the questionnaire items, and after an interval of 4 weeks, the questionnaire was administered again. The reliability indices for the Internal, Chance, and Powerful Others subscales using Pearson's product-moment correlation were 0.60 (p < 0.001), 0.58 (p < 0.002), and 0.74 (p < 0.0001), respectively (Table 4). These levels meet the test-retest stability coefficients reported as sufficient by Wallston [12]. Parallel forms The original form of the MHLC was administered to 30 senior English-major students. The same group then took the translated version of the MHLC after one week. Using the split-half method and the Spearman-Brown prophecy formula, the correlation coefficients between the Persian version and the original form of the MHLC were estimated as 0.71 for Internal HLC, 0.70 for Chance HLC, and 0.72 for Powerful Others HLC, all of which were significant (p < 0.0001). Comparisons between the Persian and English versions of Form A showed sufficient consistency. Internal consistency Cronbach's coefficient α was employed to estimate the internal consistency of the scale, as reported in Table 2. Cronbach's alpha coefficients were moderately acceptable for Internal (0.68), Powerful Others (0.72), and Chance (0.66).
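For reference, the two reliability statistics reported above follow standard closed forms. The sketch below shows Cronbach's alpha and the Spearman-Brown prophecy formula as generic implementations, not the authors' SPSS output.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_participants x k_items) response matrix for one subscale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

def spearman_brown(r_half: float) -> float:
    """Step a split-half correlation up to full-test length."""
    return 2 * r_half / (1 + r_half)
```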
Discussion Form A of the MHLC scales has been employed in many studies throughout the world owing to its easy administration, objectivity, and appropriate psychometric characteristics, especially in the health domain, health promotion, and health psychology. Using Form A allows researchers to assess subjects' general health locus of control beliefs, although the researcher will not always know what these beliefs mean to his or her subjects. Regarding the validity of Form A, as in other studies in different countries, the content, concurrent, and construct kinds of validity were taken into consideration. Regarding concurrent validity, the results obtained between the Persian Form A of the MHLC and Levenson's scale are analogous to the reports of Wallston and colleagues [6] and Wallston [12], who stated that there were significant correlations between the subscales of the MHLC and their counterparts in Levenson's Internal, Powerful Others, and Chance subscales. To ensure the quality of the study, concurrent validity was estimated according to a gold-standard model [6,12]. It has been reported [10] that MacLachlan and colleagues in 1986 did not reproduce the factorial structure of the MHLC among Malawian students but instead distinguished three factors related to the limitations of medical care and its effect on health. Also, in another study, Astrom and Blay [16] found a two-factor structure among adolescents in Ghana. Furthermore, the correlation coefficients between the subscales pointed out that the subscales of Internal, Powerful Others, and Chance were orthogonal to one another, a finding similar to that of Wallston and colleagues [12], who mentioned that "the IHLC and PHLC subscales were uncorrelated with one another, PHLC and CHLC were only weakly positively correlated, and IHLC and CHLC were weakly negatively intercorrelated. Thus supporting the construct validity claims that these dimensions were more-or-less orthogonal to one another". In another study, conducted in New Zealand, a much higher degree of correlation among the factors was found [10]. In conclusion, the findings of this study indicate that the Persian version of Form A is reliable and valid for use in studies of health beliefs in Persian-speaking countries. Limitations and suggestions In this study, the researchers could not take into account some factors, such as socio-economic status, because of the difficulty of obtaining such information from the students, even though it is an important socio-cultural variable that could help in the explanation and interpretation of MHLC beliefs. In order to assess concurrent validity, we used the Persian version of Levenson's IPC scale, whose validity and reliability had been established among university students in non-medical courses. In addition, no suitable Persian-language scale was available for assessing convergent validity. It should also be mentioned that the subjects of the current study were highly educated. It would be desirable to examine the validity and reliability of Form A in a national sample. Further studies should test the cross-cultural and external validity of Form A in a broader range of samples. Moreover, the Persian version of Form A could be validated against health-related behavior or health status. It is also suggested that the validity and reliability of Forms B and C and the God subscale be assessed in Iran.
Qualitative and quantitative studies of health locus of control constructs are needed to address whether expansion or modification of the MHLC is required for Iranian ethnic groups. Further, this scale can be administered to different groups of patients. Conclusion In spite of these limitations, this study is the first to have examined the validity and reliability of the MHLC scale in Iranian subjects.
A study on the relationship between employee mental health and agility strategic readiness: A case study of Esfahan hospitals in Iran Article history: Received December 2, 2012; received in revised format March 10, 2013; accepted March 18, 2013; available online March 19, 2013. This study investigates whether enhancing organizational agility and the mental health of staff can increase strategic readiness for crises. Descriptive statistics are used to present the demographic data of the research, and the P-test is employed for analyzing the data. In addition, correlation coefficients and descriptive statistics are used to examine the research hypotheses. Finally, the Friedman test is used to rank the variables and indicators of the research, and the nonparametric Kruskal-Wallis test is used to compare the indicators and components of the research. The proposed study designs a questionnaire and distributes it among nurses in the obstetrics and anesthesiology departments and among supervisors. Cronbach's alpha is also employed to determine reliability. The results indicate that working conditions as well as employees' mental health are in good condition, that employees with higher levels of mental health have higher readiness to deal with potential crises, and that the relationship between the agility of hospitals and their strategic readiness for dealing with crises is confirmed. © 2013 Growing Science Ltd. All rights reserved. Introduction Today, manufacturing and service organizations work in ever-changing environmental conditions, and the economic, commercial, cultural, social, and political conditions of their environments influence organizational performance. Obviously, disasters such as wars, earthquakes, and floods have their own influences on organizational behavior. Therefore, managers need to know that there are various environmental threats to the life of organizations, and organizations can overcome such crises only by means of strategic planning and strategic readiness (Ryvicker, 2009). According to Seifert (2007), attention to the nursing shortage is increasingly concentrating on retaining the 'aging' nurse. With growing interest in retaining this group of health care workers come questions about real and perceived chronologic limits (physiologic, cognitive, and functional) affecting older nurses. Seifert (2007) discussed these issues in light of research on older perioperative nurses, aging surgeons, and other health care workers. Sznelwar et al. (2008) studied working in public health services in Brazil and investigated the relationship between different rationalities. They explained how changes implemented in Sao Paulo, Brazil could influence organizational parameters and working activities for front-line workers. Baba et al. (1999) discussed issues of occupational mental health among nurses in the Caribbean. They used a linear model linking role, work, and social factors, stress, burnout, depression, absenteeism, and turnover intention. Their results indicated that burnout was the sole predictor of depression, which predicted both absenteeism and turnover intention. Lang et al.
(2004) performed an investigation to determine whether the peer-reviewed literature supports specific minimum nurse-patient ratios for acute care hospitals and whether nurse staffing is associated with patient, nurse employee, or hospital outcomes. The survey found no support for specific minimum nurse-patient ratios for acute care hospitals, especially in the absence of adjustments for skill and patient mix, although total nursing hours and skill mix appeared to influence some important patient outcomes. Research Hypotheses The main hypothesis in this research is as follows: employees' mental health has a positive impact on the agility and strategic readiness of hospitals in Isfahan province. This hypothesis is divided into the following three minor hypotheses: 1. Mental health of employees has a positive impact on hospital agility. 2. Mental health of employees has a positive impact on the strategic readiness of hospitals for responding to crisis management. 3. Agility of hospitals has a positive impact on the strategic readiness of hospitals for responding to crisis management. 2.1. Research Methodology The methods used in this study include an applied method with respect to the objective of the research (the reasons for conducting the research), a descriptive-analytic approach for inference, and a survey method for the design of the research. The statistical population includes all 30,000 employees of Esfahan hospitals, from which we randomly selected 320 people from 80 hospitals in the province. In this study, descriptive statistics are used to present the demographic data of the research; for this purpose, the demographic data are shown using frequency tables. Descriptive statistical techniques are also used to study the central tendency and dispersion of the research variables. Inferential statistical techniques are used as well, and since this study seeks to test the research hypotheses, hypothesis testing and its related statistical techniques are applied. For analyzing the data, the P-test is used, the reason being that this research is a qualitative one. For checking and testing the research hypotheses, the status of the indicators and hypotheses is examined with correlation coefficients and descriptive statistics. Since this research was conducted in real, objective, and live (dynamic) organizations and the researchers experienced the real conditions of the organizations, this study is also a field study. Ultimately, the Friedman test is used to rank the variables and indicators of the research, the nonparametric Kruskal-Wallis test is used to compare the indicators and components of the research, and Cronbach's alpha is employed to determine reliability. Fig. 1 shows the framework of the proposed study.
Strategic readiness Strategic readiness for crisis management means an organization's level of preparedness to manage a crisis before it occurs. Readiness for a crisis is a strategy because it provides a selectable model for the inhibition or continuation of subsequent organizational activities and estimates the depth of outcomes (Pennings, 1985). In this study, to assess strategic readiness for crisis management, we used the measurement tool prepared by Reilly in 1987 for measuring the strategic readiness for crisis management of some US banks. The instrument has also been used for some hospitals in Egypt and has been validated in other countries. For measuring the strategic readiness of the concerned organizations, these studies examined the organizations from six aspects, and the questions presented were in line with these dimensions (see Table 2). Agility According to Zhang and Sharifi (2000), agility means the ability of an organization to sense, understand, and predict changes in the business environment. In fact, agility includes the capabilities and competencies that allow an organization to survive and develop in a business environment whose main characteristics are constant change and uncertainty (Khoshsima, 2002). The dimensions of agility in the public sector can be explained as follows: 1. Organizational change: recognition of citizens' requirements, improving services to them, decision-making based on public needs, and deploying resources to meet customer needs. 2. Organizational leadership: development of the organizational vision, trends, and strategic objectives; processing flexibility; and utilization of resources based on possible needs. 3. Organizational culture and values: building a good environment to promote change, considering urgent needs to invest in innovation, and creating a sense of teamwork throughout the organization. 4. Customer service: developing sustainable relationships with citizens to improve management strategy, aligning customer services with business processes, and providing incentives for citizens to move toward new and cheaper communication channels. 5. E-government: developing electronic processes, implementing technology to improve administrative communications, and encouraging citizens to adopt more efficient communication channels. 6. Performance management: preparing appropriate training for employees, building a comprehensive performance management system within the firm, and incorporating appropriate models for assessing performance (Table 3). The results In this section, we present the statistical results by prioritizing the main components of the research, then measure the effects of demographic factors on the research variables, and finally test the research hypotheses. Prioritization of Research Indicators In the present study, each component consists of several indicators, and the level of influence of the various indicators on the related component differs. We therefore rank the indicators to examine the extent of their influence on the respective components, as illustrated in the sketch below.
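As a sketch of this ranking step, the following Python snippet applies the Friedman test to related indicator ratings and orders indicators by mean rank. The data and the number of indicators are hypothetical placeholders, not the study's survey data, and SciPy stands in for the statistical package the authors used.

```python
import numpy as np
from scipy import stats

# Hypothetical ratings: 320 respondents each rate 4 indicators on a 1-5 scale.
ratings = np.random.default_rng(0).integers(1, 6, size=(320, 4))

# Friedman test for differences among related samples (one array per indicator).
stat, p_value = stats.friedmanchisquare(*ratings.T)

# Mean ranks give the prioritization of indicators, highest mean rank first.
mean_ranks = stats.rankdata(ratings, axis=1).mean(axis=0)
priority = np.argsort(mean_ranks)[::-1]
print(f"Friedman chi2 = {stat:.2f}, p = {p_value:.4f}, indicator order: {priority}")
```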
Organizational Agility The most important indicators associated with organizational agility are summarized in Table 5. Mental Health of Employees The most important indicators associated with the mental health of employees are summarized in Table 6. Strategic Readiness The most important indicators related to strategic readiness are summarized in Table 7. The Impact of Biographical Factors on the Answers The first part of the questionnaire is associated with demographic questions. In this section, we evaluate the impact of demographic variables on the research variables. For this purpose, we used ranking analysis of variance (ANOVA); using the Kruskal-Wallis test, we also evaluated the impact of demographic factors on the responses. Age Table 11 shows that the margin of error was lower than 5%, so 'age' has influenced the answers. Marital status Table 12 shows that the margin of error was lower than 5%, with 'bachelors' showing higher influence. Gender Table 13 shows that the margin of error was lower than 5%, with 'female' showing higher influence. Education degree Table 14 shows that the margin of error was lower than 5%, with 'higher education degree' showing higher influence. Work experience Table 15 shows that the margin of error was lower than 5%, with 'more work experience' showing higher influence. Job Position Table 16 shows that the margin of error was lower than 5%, with 'higher position' showing higher influence. Research Hypotheses Testing In this section, the results of testing the different hypotheses are presented. Hypothesis No. 1: mental health of employees has a positive impact on hospital agility. Based on the results of Table 8 and the correlation coefficient, this hypothesis is accepted. The correlation coefficient is 0.992, indicating a strong and direct relationship between these variables. As computed, the significance level is approximately zero, which is less than 0.05, so we accept H1, i.e., a relationship between the variables. Hypothesis No. 2: mental health of employees has a positive impact on the strategic readiness of hospitals. Based on the results of Table 9 and the correlation coefficient, this hypothesis is accepted. The correlation coefficient is 0.984, indicating a strong and direct relationship between these variables. The significance level is less than 0.05, so we accept H1, i.e., a relationship between the variables. Hypothesis No. 3: agility of hospitals has a positive impact on their strategic readiness. Finally, based on the results of Table 10, this hypothesis is accepted on the basis of the related correlation coefficient. The correlation coefficient is 0.995, indicating a strong and direct relationship between these variables. As computed, the significance level is less than 0.05, so we accept H1.

Table 10. Correlation between strategic readiness and agility of hospital
Test       Correlation coefficient   Significance level
Spearman   0.995                     0.000

Conclusion Using various tests, including both descriptive and inferential tests (P-tests, correlation, and Friedman's analysis of variance by ranks), we examined the hypotheses as follows: 1. Mental health of employees has a positive impact on hospital agility.
Since the null hypothesis is rejected, there should be a significant correlation between the two variables. Spearman's correlation test indicates a relationship between employees' mental health and organizational agility. The result of the correlation test shows that the significance level is less than α = 0.05 (P-value < α = 5%). Therefore, we can state that, at a confidence level of 95%, there is a significant relationship between the mental health of employees and organizational agility. The level, or intensity, of correlation between the mental health of employees and agility is approximately 0.992. This suggests that per one unit of increase or improvement in the mental health of employees in the studied organizations, agility increases by about 0.992 units. 2. Employees' mental health has a positive impact on the strategic readiness of hospitals for responding to crisis management. Since the null hypothesis is rejected, there should be a significant correlation between the two variables. Spearman's correlation test indicates that there is a relationship between mental health and strategic readiness to deal with crises. In other words, improvement in the mental health of staff directly influences the level of readiness for responding to crises. The result of Spearman's correlation test shows that the significance level is less than α = 5% (P-value < α = 5%). Based on this result, it can be stated that, at a confidence level of 95%, organizations with mentally healthier employees have greater strategic readiness for responding to crises. The level of correlation between the mental health of employees and strategic readiness is approximately 0.984. This suggests that per one unit of increase or improvement in the mental health of employees in the studied organizations, their strategic readiness for responding to crises increases by about 0.984 units. 3. Agility of hospitals has a positive impact on their strategic readiness for responding to crisis management. Since the null hypothesis is rejected, there should be a significant correlation between the two variables. Pearson's correlation test indicates a relationship between the agility of hospitals and their strategic readiness. The result of Pearson's test shows that the significance level is less than α = 5% (P-value < α = 5%). With regard to this result, it can be stated at a confidence level of 95% that in an organization with higher agility, strategic readiness for crisis management is higher. The level of correlation between the agility of hospitals and their strategic readiness is approximately 0.995. This suggests that per one unit of increase or improvement in the agility of the studied hospitals, their strategic readiness for responding to crises increases by about 0.995 units. The results of examining the variables, indicators, and the relationship between the dependent and independent variables associated with hypothesis 3, together with the theoretical discussion and review of the literature, confirm the existence of a relationship between the agility of hospitals and their strategic readiness to deal with crises. A sketch of the correlation test used throughout this section follows.
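The hypothesis tests above all reduce to a rank correlation with a 5% decision rule. The following minimal sketch shows the computation with SciPy rather than the SPSS-style tooling the study used; the score vectors are hypothetical stand-ins, since the survey data are not reproduced here.

```python
from scipy import stats

mental_health = [72, 65, 80, 59, 77, 68, 74, 61]   # hypothetical scale scores
agility       = [70, 63, 82, 55, 78, 66, 75, 60]

# Spearman's rank correlation and the alpha = 0.05 decision rule.
rho, p_value = stats.spearmanr(mental_health, agility)
if p_value < 0.05:
    print(f"H1 accepted: rho = {rho:.3f} (p = {p_value:.4f})")
else:
    print(f"No significant correlation (p = {p_value:.4f})")
```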
Based on the research results, we recommend the following: 1. Since the relationship between organizational agility and strategic readiness for responding to crises was confirmed in the studied organizations, the following recommendations can be proposed: - External and internal environmental factors should be carefully analyzed for these organizations. - Appropriate plans should be developed for preparing managers and staff in these organizations to deal with potential crises. - Change should be encouraged in organizations by increasing and updating the knowledge, awareness, and skills of employees. - Given the influence of objectives and strategies on organizational agility, the organizations' goals, strategies, and policies should be revised in proportion to environmental changes, after studying the status quo. 2. The studies and surveys conducted indicate that there is a relationship between leadership style and mental health in organizations. Therefore, it is recommended that, since the job environment in hospitals is stressful and employees must cope with different stresses, we offer what Taylor originally recommended: suitable job descriptions should be prepared, ambiguities in working standards and criteria should be eliminated, relationships between managers and employees and inter-personal relations should be strengthened, role conflicts should be reduced, and actions should be taken to improve the mental health of employees in the workplace. Fig. 1. The framework of the proposed study. Table 5. Prioritization of the organizational agility indicators. Table 6. Prioritization of the mental health of employees' indicators. Table 7. Prioritization of the strategic readiness indicators.
Acupuncture Intervention Protocol: Consensus Process for a Pragmatic Randomized Controlled Trial of Acupuncture for Management of Chronic Low Back Pain in Older Adults: An NIH HEAL Initiative Funded Project Objective The aim of this article is to describe the consensus process used to develop an acupuncture intervention protocol for an NIH-funded pragmatic randomized controlled trial (PRCT) of acupuncture for the management of chronic low back pain (cLBP) in older adults (BackInAction). Background cLBP is among the leading causes of disability worldwide: almost 33% of US adults 65 and older experience LBP. Acupuncture is effective for cLBP, but there are no data specific to older adults. The National Institutes of Health (NIH) funded a PRCT of acupuncture needling for this population. An essential trial milestone was the development of a consensus intervention protocol. Methods An Acupuncture Advisory Panel (AAP) was formed with nine members: two physician-acupuncturists, six licensed acupuncturists representing diverse work backgrounds, and an acupuncture researcher. We used a modified Delphi process that included provision of acupuncture trial data, survey data describing how each expert treats cLBP, three conference calls, and between-call email discussion. Results Lively and professional discussions led to a consensus intervention protocol for the BackInAction trial that included steps/staging of care and recommendations for parameters of care: session length, number of needle insertion sites, insertion depths, needle retention times, recommended types of needles, both local and distal areas of the body to be treated, acupuncture point options, auricular point options, self-care options, and the minimum number of sessions considered ideal. Conclusion Using a modified Delphi process, an expert AAP created a consensus intervention protocol for the PRCT of acupuncture needling for cLBP in patients 65 and older. Background Low back pain is among the leading causes of disability worldwide, with both prevalence and burden increasing with age: 1,2 almost 33% of U.S. adults 65 and older experience lower back pain (LBP). 3 Despite large investments in care for LBP, the health and functional status of Americans with LBP has deteriorated. Older adults with new primary care visits for LBP often have persistent symptoms, disability, and interference over 12 months of follow-up. 4 Those who filled 2 or more opioid prescriptions within 90 days had patient-reported outcomes similar to those who did not fill early opioid prescriptions. 5 Opioid-related deaths in people 65 years and older increased 635% in the 15 years from 2001 to 2016. 6 Opioids are associated with increased disability, medical costs, subsequent surgery, and continued opioid use. 7 A critical gap exists in the evidence on the safety and effectiveness of treatments for older adults with cLBP. 8 Burgeoning imaging rates revealing incidental pathology may place older adults at risk for inappropriate invasive treatments, [9][10][11] persistent pain, and increased health care utilization. 11 Treatments that might be considered appropriate for younger adults may not be appropriate for older adults, given their greater prevalence of comorbidities with attendant polypharmacy. 12 Normal physiological changes of aging reduce tolerance of medications; older adults have a substantially increased risk of adverse effects from commonly used LBP treatments and medications such as nonsteroidal anti-inflammatory drugs (NSAIDs), 13 muscle relaxants, and opioids.
4,9,13,14 Older populations have increased susceptibility to opioid-linked adverse events such as delirium, sedation, dizziness, confusion, pneumonia, constipation, nausea, falls, and mortality. 5 In their guidelines on noninvasive treatments for LBP, the American College of Physicians (ACP) recommends acupuncture as a first-line option for acute, subacute, and chronic LBP, but there are no data specific to older adults. 15,16 Given the interest of the U.S. Centers for Medicare & Medicaid Services (CMS) in knowing the value of acupuncture for cLBP in the Medicare population, CMS partnered with the National Institutes of Health (NIH) to issue a 'Funding Opportunity Announcement' (RFA-AT-19-005) for a pragmatic randomized controlled trial (PRCT) of acupuncture needling for cLBP in older adults. We were funded to conduct the PRCT, which included a planning year to prepare for conducting the study. An essential planning-year milestone was development of an acupuncture intervention protocol. This manuscript describes both the protocol and its process of development. Acupuncture for cLBP and the Medicare Population Acupuncture has been found effective for chronic pain in adults, including cLBP, wherein the benefits persist over time. 17,18 A large individual patient data meta-analysis found that 85% of acupuncture benefit was maintained over 12 months of follow-up. 17,18 While not able to compare effectiveness between specific styles of acupuncture, most reviewed studies applied traditional East Asian medicine (TEAM) principles based in the recognition of acupuncture channels and points. 18,19 Yet no trials to date have focused solely on cLBP in the Medicare population. While acupuncture is recommended by the ACP and recognized by the NIH for cLBP, it has not been routinely covered by Medicare and has been unavailable to most adults 65 and older. On January 21, 2020, the Centers for Medicare & Medicaid Services (CMS) announced a decision to cover acupuncture for cLBP in Medicare recipients, with limits on the number of visits per year and stipulations on who may provide and directly bill for acupuncture care, effective January 2021. 20 Methods We identified subject matter experts with input from each trial site's investigators. Suggested experts were contacted by email or telephone. The resulting Acupuncture Advisory Panel (AAP) included nine expert members: two physicians and six licensed acupuncturists representing diverse practice backgrounds and experience, including work with underserved populations at Federally Qualified Health Centers (FQHCs), practice in university clinics and integrative health systems, and leadership roles. One member is medical director of a holistic medicine network, another is dean at an acupuncture college, and another is an acupuncture researcher with multiple acupuncture trials and publications on acupuncture treatment for cLBP. We used a modified Delphi process to develop the acupuncture intervention protocol to be used in the IRB-approved study design. Modified Delphi Process Researchers have adapted the Medical Research Council's guidance of 2000 21 and 2008 22 in developing and evaluating complex interventions that have interacting components. 23 The process of forming a consensus-based intervention protocol, sometimes called manualization, [23][24][25] describes one such adaptation that seeks to strike a balance between standardization and flexibility in acupuncture research 24 for trials on depression, 26 stroke, 27,28 and chronic pain.
23,25,29 The Delphi process, developed by the RAND Corporation, is widely used for convergence of expert opinion within certain topic areas. 30 It is part of the development of research protocols and manuals 31 and typically involves a formal process of using questionnaires to gather information from experts, summarizing areas of consensus, and reviewing with the experts one or more times until consensus is obtained. Consensus is defined as general agreement from group discussion resulting in clear support for each item. Ours is described as a 'modified' Delphi process because acupuncture had already been studied in trials and established in guideline recommendations for cLBP; the particular task of our panel was to collate that knowledge from the literature with our experts' experience in order to specify acupuncture for cLBP in people 65 and older. Preliminary Information From Trials of Acupuncture for cLBP To contextualize our acupuncture intervention in the existing trial literature, and to prepare the AAP members for discussion, we created a table of parameters from published acupuncture trials for cLBP that included: number of sessions, number of needles, needle retention time, description of the needles, local and/or distal acupoints required, optional points if permitted, and whether 'de qi' was sought. [32][33][34][35][36][37][38][39][40][41][42][43][44][45] Additionally, we added the same data published by experienced Chinese acupuncturists practicing in the US detailing their treatment approach for cLBP in patients 65 and older. 46 This background gave us information on key acupuncture intervention parameters and on the frequency and intensity of treatment interventions reported in the literature. Panel members were invited to complete a survey of the intervention parameters and details that reflected their approach to treatment of cLBP in people 65 and older. Initial Questions for the AAP Prior to our first conference call (October 29, 2019), we emailed our AAP members (October 10, 2019) and asked them how they treated cLBP in general and in older adults (see Table 1). All AAP members responded prior to the first conference call, and their responses were collated in a separate table and disseminated to facilitate comparisons between experts. Consensus Topics The first AAP conference call consisted of introductions, an overview of the trial, a brief summary of the role and responsibility of the AAP, an explanation of the consensus intervention process, orientation to the STRICTA guidelines (Standards for Reporting Interventions in Clinical Trials of Acupuncture), 47 a review of the AAP members' responses, and a discussion of essential consensus topics, which are listed in Table 1. Our first meeting found consensus on most topics: steps/staging of care; recommendations for parameters of care, including palpation, session length, number of needle insertion sites, insertion depths, needle retention times, and recommended types of needles; treating both local and distal areas of the body; obtaining 'de qi'; inclusion of 'ah shi' points (tender points sensitive to palpation); and self-care options (to be named) considered ideal. We clarified the topics that would require ongoing discussion: acupuncture point options, auricular point options, and the ideal minimum number of sessions recommended to achieve clinical benefit (Table 2). Topics agreed on would be reconfirmed in the next call, where we would then focus primarily on acupuncture point options.
In preparation for the second conference call, the AAP members were asked to review the point list created from the acupuncture trials for cLBP, as well as the AAP members' survey responses. Survey topics included: would you omit any of the points listed, and were there any useful points that had been omitted and should be included? The second AAP conference call occurred on November 5, 2019, with discussion and confirmation of consensus topics from the first meeting. Discussion turned to creating and confirming a list of acupuncture point options. A list was built and circulated after the call for further consideration over email. Email Communication Between the Second and Third Conference Calls The acupoints to include continued to be discussed over email. The consensus intention was to include primary points from trials but also points that have been utilized in textbooks and by the panel experts. Because patient presentations vary and treatments are expected to vary over time as a patient's condition improves, a panel of point options is ideal to support a practitioner's versatility in individualizing care. Table 3 lists the topics in email discussion between the second and third conference calls. The third AAP conference call occurred November 26, 2019, where members finalized acupuncture point options and discussed self-care recommendations. Additionally, questions about tracking traditional Chinese medicine (TCM) diagnoses, micro-bloodletting, and trademarked approaches were addressed. Acupuncture Point Options As described above, the AAP discussion of acupoint options was informed by multiple considerations: a table of information about treatments from acupuncture trials for cLBP that included local and/or distal acupoints used and optional points if permitted; acupoints from an article by seven experienced Chinese acupuncturists practicing in the US treating cLBP in patients 65 and older; and results from our survey of each AAP member that included acupoints they use for cLBP (see Table 1). The discussion acknowledged the large number of suitable acupoints for treating cLBP, creating a range of relevant options. The decision was made to include all acupoints on the low back, including the Bladder and 'Du' channel acupoints, 'Hua tuo jia ji' and 'extra' points, as well as indicated points on the back of the leg and selected points on the upper back and the ears. The acupoint options are listed in the grid in the draft session form (Appendix 2). The AAP agreed that point selection is at the discretion of the practitioner, with 6-20 insertion sites per session and the expectation of both local and distal point needling.

Table 1. AAP survey questions and consensus responses
- What is the total number of treatments you typically provide older adults with cLBP? Varies; ongoing discussion within the parameters of the trial (12 or 15 sessions in 12 weeks), then continue with the option of 6 more sessions for some patients.
- Do you typically palpate (channels, points, 'hara')? All agreed.
- Do you typically use local and distal points? All agreed.
- Do you typically use 'ah shi' points? All agreed.
- Do you have 'favorite' points (for cLBP), i.e., points you commonly use? If so, what are they? AAP members have favorite points they use; points were collated for discussion.
- How many acupoints do you typically needle per session? Varies; agreed to discuss further.
- Do you vary acupoint selection as a patient's presentation evolves? All agreed.
- Do you try to obtain de qi per session? Depending on points; early consensus favors de qi at acupoints, used at the practitioner's discretion.
- How long does a typical session last for a first treatment and a follow-up treatment? 60 minutes for a first session; 40-60 minutes for follow-up.
- What is your typical needle retention time? For any one insertion site (or unit, as treatment of back or front), may be as little as none and up to 40 minutes, with a common range of 10-25 minutes.
- What kind of needles do you use (diameter, length, coated, non-coated)? Agreed to leave to practitioner discretion, with a recommendation of non-coated needles.
- Are you comfortable treating patients using acupuncture needling only? Other acupuncture therapies have been proscribed by the NIH; only needling is allowed in this study.
- Are you comfortable filling out computer forms? All agreed.
- Is there anything else you want us to know? Responses informed other discussion topics, such as optimal patient positioning, how to treat if a patient is limited in positioning, recommendations prior to treatment regarding food and fluids to reduce the risk of syncope and to receive treatment well, how to adjust treatment for nervous or acupuncture-naïve participants, auricular therapy, and post-session recommendations.
- What do we consider to be appropriate steps in an intervention? (Question raised at the first call, but not on the survey.) An anticipated order of interview/conversation, palpation, selection of acupuncture points (body and ear if used), a range of numbers of points treated, a range of point retention times, and a range of session times. Consider a reminder to eat and drink (water) within 2 hours prior to treatment to reduce the risk of syncope.

As a pragmatic trial, acupuncturists are expected to individualize a treatment, as they would in real-world clinical settings, from within the protocol options. 'Ah shi' points and points not in the protocol list can be used with provision of a rationale. Fidelity to the intervention protocol will be assessed using data from the session form (draft attached as Appendix 2). Acupuncturists may indicate if, in their opinion, other aspects of acupuncture therapy not included in the trial (for example, moxibustion/application of heat, Ba guan (cupping), Tui na, Gua sha, electrical stimulation, herbal medicine) would have helped a particular participant. Our intervention protocol proscribes exclusive use of microsystem or trademarked treatments. It is acceptable to incorporate a point or two from any system into the consensus intervention, with a rationale. While some texts recommend intentional micro-bloodletting of certain points for cLBP as a feature of acupuncture needling, this method is proscribed in this trial. Self-Care Recommendations In addition to the panel of acupuncture points, self-care recommendations elicited the most discussion from the AAP. Such recommendations are an essential aspect of acupuncture therapy and are engaged as a means of sustaining benefits from a session. They typically include recommendations on breathing, movement, temperature, and other aspects of food, sleep, work, and so on, as well as attitude and regard for oneself and others. The following were suggested for discussion: 1. Avoid excessive cold and sour food and drink. 2. Avoid extreme exercise or work, lifting, or twisting; if there is a sense one can increase exercise already engaged in, increase in small intervals, or suggest meditative movement like Qi gong or Tai qi. 3. Eat regular warm cooked meals. 4. Drink enough water. 5. Guidance on breathing awareness. 6.
Recommendations regarding meditation or quiet reflective time during the day. The AAP agreed on the need to include basic recommendations within the paradigm of classical Chinese medicine, or traditional East Asian medicine, as they would be given in a real-world setting. However, AAP members would have preferred more detail in the post-session recommendations. With an understanding of this as a PRCT, allowing acupuncturists to respond to patient questions and provide self-care information on breathing, increasing kinds of exercise, temperature of foods, sleep hygiene, work and activity, as well as mental outlook, was considered important. However, collecting data on self-care would best be done in general categories. We decided to use the lifestyle recommendation categories collected in a previous trial (see Table 3). Traditional Chinese Medicine Pattern Diagnosis 'Diagnosing' in traditional East Asian medicine (TEAM) acupuncture can blend paradigms ranging from a description of locations (channels, areas of the body, levels or depths, organs), to the status of substances (Qi, Blood, Fluids, Food), to Ba gang bian zheng (eight parameters: outside/inside; hot/cold; excess/deficiency; yang/yin), to TCM differentiation of patterns of disharmony. 49,50 These provide a context for clinical decision making in which 'patterns' are secondary to the practical interactions of acupuncture practice. 51 Acupuncturists simultaneously treat and evaluate patients during a treatment visit by assessing how a patient responds to various stages of treatment, from palpation to needling to other manual interventions, which in turn inform the depth and direction of a disorder. Here evaluation is treatment and treatment is evaluation. 52 Signs and symptoms, including tongue and pulse, may change within a session, informing a responsive acupuncture approach. With an herbal medicine approach, one might look to patterns of disharmony and changes in symptoms, tongue, etc. over time; with acupuncture therapies, such changes can happen within a session as well as over time. Responses within a session provide immediate and relevant information on the morphology of a presenting problem, including location, type, quality, and the inherent waxing and waning of symptoms, particularly with musculoskeletal problems like cLBP. While aspects of 'patterns' provide an essential context, they are not the primary determinant in choosing acupuncture points or combinations of points. It is also important to note that a TCM diagnosis, if made, is not typically written in the medical record in China. 51 Rather, signs and symptoms, acupoints treated, therapies applied, tests ordered, and herbal prescriptions, if given, are recorded. A pattern 'diagnosis' can vary among practitioners for the same patient, while the acupoints chosen might be similar whatever the diagnosis. This does not mean that TCM patterns are irrelevant; rather, TCM diagnosing should not be confused with the reductionist operation of Western medical diagnosing but is a flexible working assessment in a medical paradigm that assumes the only constant is change. 51 No consensus exists regarding TCM pattern diagnosis for cLBP when using acupuncture. 34,53 Where acupuncturists may agree on diagnosis, they can vary substantially in treatment recommendations. 54 Our trial acupuncturists are encouraged to engage in and record the process they use in practice, but the AAP agreed not to focus on or capture TCM diagnoses in this trial. 34
Additional Questions The trial Steering Committee developed the inclusion/exclusion criteria; the AAP was consulted regarding the need for any acupuncture-specific inclusion/exclusion criteria. The AAP recommended that a minimum effective course of treatment include at least 8 visits. This is consistent with other sources 17 that emphasize specific dosage in treatment, including for a population likely managing multi-morbidity. 45 The AAP also concurred on the qualifications of the acupuncturists in our trial (as detailed in the next section). Trial Acupuncturists Trial acupuncturists will be state licensed and qualified to practice acupuncture in the state where care will be provided. At least 5 years' clinical experience post-licensing is preferred, with experience in treating older adults with cLBP and multi-morbidities. Exceptions for three years' experience may be permitted for individual applicants. Trial acupuncturists will receive an orientation to the trial and a safety review. Orientation for Trial Acupuncturists All study acupuncturists will be required to complete provider training, which will include an orientation to the trial, pertinent trial logistics, the consensus intervention protocol, and a review of safety for acupuncture needling in patients 65 and older. Discussion Establishing expert consensus protocols for complex interventions with interacting components, such as acupuncture therapy, especially for PRCTs, has replaced the simple formulaic protocols used in earlier acupuncture efficacy RCTs. The Delphi process, developed by the RAND Corporation, is widely used for convergence of expert opinion 30 and seeks to strike a balance between standardization and flexibility in acupuncture research 24 for trials on depression, 26 stroke, 27,28 and chronic pain. 23,25,29 Delphi processes for acupuncture trials have varied. A trial of acupuncture as a complex intervention for depression preselected 52 characteristic components of treatment used in trials and then asked 15 expert practitioners to rate them in a survey. 26 A trial of acupuncture for assisted fertility used 3 rounds of survey questionnaires. 31 A trial of acupuncture for stroke rehabilitation blended structured planning meetings with experts and protocol questionnaires. 27 A large trial of acupuncture for chronic pain in an underserved population used multiple expert panel discussions without the use of surveys. 23 Our process is unique in several ways. First, our intervention protocol is for a PRCT, albeit one limited to acupuncture needling compared with usual care. Also unique is that we provided our panel with treatment intervention information from prior trials of acupuncture for cLBP. We then used both survey information from our expert panel and conference call discussions to find consensus on topics and reconfirm them in subsequent calls. This process mixed the best features of the 'time needed to think' afforded by responding to survey topic questions with the lively and professional discussions of the AAP members, which built not only a consensus protocol but also a sense of team commitment to the project. Tables 1 to 3 illustrate consensus building as an iterative process representing compromise from individual opinions toward collective agreement. The intervention protocol is attached as Appendix 1. Pilot Study of Processes The consensus intervention was then used in a pilot trial at 2 of our 4 health system sites.
This process provided direct feedback from the pilot acupuncturists on possible tweaks to the protocol, which were then taken back to the AAP for discussion and confirmation. The final consensus intervention was used to refine our training orientation for acupuncturists employed in the trial, along with a review of safety in terms of potential issues with an older population, multi-morbidities, and infection control. The draft acupuncture visit form is included as Appendix 2. While the use of modified Delphi processes is established as a feasible means of obtaining consensus on an intervention protocol for acupuncture RCTs and PRCTs, future Delphi processes might benefit from using previous trial information and expert surveys, as well as either in-person or conference-call meetings, for a certain team building in the process of arriving at consensus. Conclusion Using a modified Delphi process, an expert AAP created a consensus acupuncture intervention protocol for a pragmatic randomized controlled trial of acupuncture needling for cLBP in patients 65 and older. The consensus protocol options provide a balance between standardization and flexibility, allowing acupuncturists to customize a session to a participant's specific presentation of cLBP as it evolves over time. Acknowledgment We thank Lynn DeBar, PhD, for attending several meetings and Danielle Katsman for administrative assistance. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Institutes of Health (NIH) through the NIH HEAL Initiative under award number UG3/UH3 AT010739 from the National Center for Complementary and Integrative Health. This work also received logistical and technical support from the PRISM Resource Coordinating Center under award number U24AT010961 from the NIH through the NIH HEAL Initiative. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or its HEAL Initiative. Supplemental Material Supplemental material for this article is available online.
Meta-Analysis on the Correlation Between APOM rs805296 Polymorphism and Risk of Coronary Artery Disease Background The present meta-analysis aimed to summarize the inconsistent findings on the association of the apolipoprotein M gene (ApoM) rs805296 polymorphism with the risk of coronary artery disease (CAD), and to obtain a more authentic result on this topic. Material/Methods A total of 7 available articles were identified through the electronic databases PubMed, EMBASE, and Chinese National Knowledge Infrastructure (CNKI), and their useful data were carefully extracted. The relationship between the ApoM rs805296 polymorphism and CAD risk was assessed by odds ratios (ORs) and corresponding 95% confidence intervals (95% CIs), calculated using the fixed- or random-effects model according to the degree of heterogeneity. A Hardy-Weinberg equilibrium test, sensitivity test, and publication bias examination were also performed in this meta-analysis. Results According to the pooled results, the ApoM rs805296 polymorphism conferred an increased risk of CAD under all the genetic contrasts: CC versus TT, CC + TC versus TT, CC versus TT+TC, C versus T, and TC versus TT (OR=2.13, 95% CI=1.16–3.91; OR=1.80, 95% CI=1.50–2.17; OR=1.91, 95% CI=1.04–3.51; OR=1.72, 95% CI=1.45–2.04; OR=1.78, 95% CI=1.47–2.15). Conclusions The ApoM rs805296 polymorphism may be a risk factor for developing CAD. Background Coronary artery disease (CAD), one of the most common cardiovascular diseases, ranks first among fatal diseases in adults around the world [1-3]. Several risk factors have been confirmed, such as smoking, hypertension, diabetes, high blood cholesterol, excessive alcohol drinking, depression, and lack of exercise [4,5]. As for the underlying mechanism of CAD, it has been reported that cardiac atherosclerosis may have a significant influence on the occurrence and progression of the disease [6,7]. Moreover, genetic and environmental risk factors have been widely researched in the etiology of CAD, and their remarkable effects on susceptibility to CAD have been identified. In recent years, accumulating evidence indicates that genetic polymorphisms may be implicated in individual susceptibility to CAD, including polymorphisms within the APLNR [8], interleukin-6 [9], CYP7A1 [10], and PAI-1 [11] genes. In addition, the apolipoprotein M gene (ApoM), located on human chromosome 6p21.31, has been reported to be significantly related to the occurrence of CAD [12]. The ApoM gene encodes a 22-kDa protein that belongs structurally to the apolipoprotein superfamily. The ApoM protein was first identified and characterized in a study on lipoproteins by Xu et al. in 1993 [13]. Human ApoM cDNA, with 734 base pairs, encodes a protein of 188 amino acid residues [14]. ApoM is reportedly related to high-density lipoprotein (HDL) cholesterol, triglyceride-rich lipoproteins, lipoproteins containing ApoB, and very low-density lipoprotein (VLDL). Expressed only in the kidneys and liver [15], ApoM has been confirmed to have a great influence on reverse cholesterol transport [16]. Previous studies have suggested that one of the polymorphisms in the ApoM gene, rs805296, is related to susceptibility to CAD [17-20]; however, the number of studies is relatively limited, and the results are divergent rather than conclusive for various reasons.
Therefore, we comprehensively summarized all the findings on the association of the ApoM rs805296 polymorphism with risk of CAD so as to reach a more authentic conclusion by performing the present meta-analysis.

Study source and search strategy

The electronic databases searched for all usable studies were PubMed, EMBASE, and the Chinese National Knowledge Infrastructure (CNKI). The terms used for literature searching were "Apolipoprotein M" or "ApoM", "polymorphism" or "variant" or "mutation", and "coronary artery disease" or "CAD" or "atherosclerosis". To avoid missing any adequate studies, we also screened the articles in the reference lists of relevant studies by manual searching. All studies were restricted to those in English or Chinese.

Selection standards

All publications meeting the following requirements were included in our meta-analysis: (1) possessing case and control subjects at the same time; (2) studying the correlation between the ApoM rs805296 polymorphism and CAD risk; (3) providing sufficient data describing genotype and allele frequencies of the polymorphism in the case and control groups; (4) human studies; and (5) a genotype distribution in the control group conforming to Hardy-Weinberg equilibrium. Studies with a case-only design, inadequate information, or duplicating other articles were excluded. For publications with similar datasets, the one with the largest amount of information was included.

Data extraction

An identical form for data extraction was designed in advance, and the whole process was completed by 2 authors independently. The information extracted from each included article covered the following aspects: year of publication, name of first author, country of origin, ethnicity, genotyping method, sample sizes of cases and controls, genotypic and allelic distributions in the case and control groups, and the P value for Hardy-Weinberg equilibrium in the control group.

Statistical analyses

STATA software (V.12.0) was used for all statistical analyses. Since all publications were required by the selection criteria to have control groups with genotypic and allelic frequencies consistent with Hardy-Weinberg equilibrium, we checked the degree of compliance with Hardy-Weinberg equilibrium using the chi-square test for those not stating the relevant data. The strength of the relationship between the ApoM rs805296 polymorphism and CAD risk was evaluated with the odds ratio (OR) and its corresponding 95% confidence interval (95% CI) under 5 genetic models (CC versus TT, CC+TC versus TT, CC versus TT+TC, C versus T, and TC versus TT). The absence or presence of statistically significant inter-study heterogeneity, tested by the chi-square-test-based Q statistic, determined the use of the fixed-effects model (Mantel-Haenszel method) or the random-effects model (DerSimonian and Laird method). In sensitivity analysis, each single study was deleted in turn to observe the alterations in the overall results. Through Begg's funnel plots and Egger's test, we assessed whether significant publication bias existed across the eligible studies. For all statistical tests, the significance level was set at P<0.05. A sketch of the core pooling and Hardy-Weinberg calculations is given below.
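To make these procedures concrete, the following is a minimal Python sketch of fixed-effects Mantel-Haenszel pooling (with the Robins-Breslow-Greenland variance) and the Hardy-Weinberg chi-square check; the per-study allele counts shown are hypothetical placeholders, not data extracted from the included articles.

```python
import math

# Hypothetical per-study 2x2 allele tables for the C-versus-T contrast:
# (C alleles in cases, T in cases, C in controls, T in controls).
# These counts are illustrative placeholders, not the published data.
studies = [
    (84, 356, 49, 341),
    (120, 1230, 95, 1177),
    (60, 380, 40, 400),
]

def mantel_haenszel_or(tables):
    """Fixed-effects Mantel-Haenszel pooled OR with a 95% CI
    (Robins-Breslow-Greenland variance for the log OR)."""
    r_sum = s_sum = pr = ps_qr = qs = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        r_i, s_i = a * d / n, b * c / n          # per-study MH terms
        p_i, q_i = (a + d) / n, (b + c) / n
        r_sum += r_i; s_sum += s_i
        pr += p_i * r_i
        ps_qr += p_i * s_i + q_i * r_i
        qs += q_i * s_i
    or_mh = r_sum / s_sum
    var_log = (pr / (2 * r_sum**2)
               + ps_qr / (2 * r_sum * s_sum)
               + qs / (2 * s_sum**2))
    half_width = 1.96 * math.sqrt(var_log)
    ci = (math.exp(math.log(or_mh) - half_width),
          math.exp(math.log(or_mh) + half_width))
    return or_mh, ci

def hwe_chi_square(n_tt, n_tc, n_cc):
    """Chi-square goodness-of-fit statistic for Hardy-Weinberg
    equilibrium in a control group (1 degree of freedom)."""
    n = n_tt + n_tc + n_cc
    p = (2 * n_tt + n_tc) / (2 * n)              # frequency of the T allele
    expected = (n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2)
    observed = (n_tt, n_tc, n_cc)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

or_mh, (lo, hi) = mantel_haenszel_or(studies)
print(f"Pooled OR = {or_mh:.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"HWE chi-square in controls = {hwe_chi_square(300, 80, 10):.3f}")
```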
Results

Publication characteristics

Figure 1 displays the detailed process of study selection and the reasons for study exclusion. Initially, 81 records were identified through the computer search, and 36 remained after excluding 8 studies not on humans and 37 apparently irrelevant ones. Through the subsequent exclusion of reviews and letters (7), articles without full texts (3), duplicates (5), articles not about the ApoM rs805296 polymorphism (8), and articles without original data (6), we eventually included 7 studies in the quantitative synthesis [18,21-26]. The primary features of the eligible studies are presented in Table 1.

Heterogeneity examination

As shown in Table 2, no marked inter-study heterogeneity was detected under any of the genetic models (including TC versus TT). Therefore, the fixed-effects model was used for pooling the results.

Publication bias test

Begg's funnel plots and Egger's test were used to detect possible publication bias among the included studies from the visual and statistical perspectives, respectively. Neither the shapes of the funnel plots (Figure 3) nor the statistical data of Egger's test (P=0.260) provided evidence of obvious publication bias.

Sensitivity analysis

In the process of sensitivity analysis, every individual study was omitted in sequence, and the changed results were observed correspondingly. No radical alteration occurred in the pooled results, suggesting that no single study substantially affected the results and that our meta-analysis outcomes were statistically robust. A short sketch of the heterogeneity statistics underlying Table 2 is given below.
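The heterogeneity decision described above can be illustrated in a few lines. The sketch below computes Cochran's Q and the I-squared statistic from per-study log odds ratios and standard errors; the input values are hypothetical, not those underlying Table 2.

```python
from scipy import stats

# Hypothetical per-study (log OR, standard error) pairs for one genetic model;
# placeholders, not the values behind Table 2.
effects = [(0.58, 0.22), (0.60, 0.10), (0.49, 0.18), (0.65, 0.25)]

weights = [1.0 / se**2 for _, se in effects]          # inverse-variance weights
pooled = sum(w * y for (y, _), w in zip(effects, weights)) / sum(weights)
q = sum(w * (y - pooled) ** 2 for (y, _), w in zip(effects, weights))
df = len(effects) - 1
p_value = stats.chi2.sf(q, df)                        # Q ~ chi-square under homogeneity
i_squared = max(0.0, (q - df) / q) * 100.0            # % of variation from heterogeneity

print(f"Q = {q:.2f} (df = {df}, P = {p_value:.3f}), I^2 = {i_squared:.1f}%")
# A non-significant Q (P > 0.05) supports using the fixed-effects model.
```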
Discussion

CAD is a complex multi-genetic disease caused by the synergistic effects of genetic and environmental risk factors [27,28]. Hereditary epidemiological studies have suggested that genetic mutations may elevate an individual's risk of developing CAD [29-31]. Initially separated and cloned from chylomicrons [13], ApoM in plasma mainly exists in HDL particles, with very little in triglyceride-rich lipoprotein (TGRLP) and low-density lipoprotein (LDL), suggesting ApoM may be associated with lipid transportation and metabolism [15]. Richter confirmed its protective effects against atherosclerosis [16,32]. In the study by Xu et al., the correlation between ApoM and lipid indexes indicated that plasma ApoM levels were positively related to factors protecting against the progression of atherosclerosis, such as ApoA I and HDL-C, and negatively related to factors promoting atherosclerosis development, such as triglyceride, total cholesterol, and lipoprotein (a), and that elevated levels of ApoM could prevent and slow the progression of atherosclerosis [33]. The human ApoM gene is located in a region adjacent to that of the major histocompatibility complex (MHC), in which multiple genes are related to the immune response; therefore, the ApoM gene is likely to participate in the regulation of immune defense [34]. Among a number of polymorphisms within the ApoM gene, the rs805296 variant in the proximal promoter region has been verified to have a link with plasma cholesterol, and may increase individual susceptibility to CAD [35].

In the present study, we referred to previous studies and analyzed the association between the ApoM rs805296 polymorphism and CAD risk. Our results indicate that the ApoM rs805296 polymorphism could elevate the risk of CAD under all the comparisons, suggesting this polymorphism might act as a promoter of CAD onset. Several case-control studies have investigated the significance of ApoM rs805296 for CAD risk in Chinese populations and obtained useful findings. Using the method of polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP), Huang et al. carried out a screening for ApoM rs805296 in 220 CAD cases and 195 normal controls, and observed that the frequency of the C allele was 19.1% in the case group and 12.6% in the control group; this difference was statistically significant (P=0.011), indicating that the rs805296 polymorphism might be a susceptibility factor for CAD [21]. Zhang et al. performed a large study recruiting 675 patients with acute coronary syndrome (ACS) and 636 healthy control subjects, and found that the frequencies of both the C allele and the CC genotype of the ApoM rs805296 polymorphism were significantly higher in the case group than in the control group (P<0.01). Subsequently, after adjustment for CAD susceptibility factors, the C allele was found to be an independent risk factor for the occurrence of ACS [25]. In addition, some other studies obtained results similar to those mentioned above [18,22,23,26]. In contrast, Zheng et al. found no statistically significant difference in the distribution of the 3 genotypes of the ApoM rs805296 polymorphism (TT, TC, and CC) between the case and control groups, and concluded that rs805296 might not be correlated with the development of CAD [24]. Although conducted in a Chinese population, the study of Zheng et al. obtained results that contrast with our present study and the other case-control studies listed above, which might be attributed to differences in sample size, genotyping methods, correction factors, and other risk elements.

The absence of heterogeneity and publication bias is the main strength of this meta-analysis. However, as in previous studies and meta-analyses, our meta-analysis also has some weaknesses that should be clearly stated. Because all prior studies on the association of the ApoM rs805296 polymorphism with CAD risk focused only on Chinese populations, our meta-analysis solely addressed this association among Chinese people, which might not be representative of other ethnic groups. In addition, the limited number of included studies and the relatively small sample sizes might lessen the statistical power of our results. Another important point is that some potential risk factors, such as family history, smoking status, body mass index (BMI), and other environmental influences [36], were not incorporated into the analysis due to the limited original information in the included studies.

Conclusions

In conclusion, our meta-analysis revealed a significant correlation of the ApoM rs805296 polymorphism with CAD risk, and showed that the rs805296 polymorphism might confer an increased risk of CAD in the Chinese population. The association between ApoM rs805296 and the risk of CAD onset needs to be further verified by studies that consider the combined effects of genetic and environmental factors and include larger sample sizes in multiple ethnicities.
Including Farmer Irrigation Behavior in a Sociohydrological Modeling Framework With Application in North India

Understanding water user behavior and its potential outcomes is important for the development of suitable water resource management options. Computational models are commonly used to assist water resource management decision making; however, while natural processes are increasingly well modeled, the inclusion of human behavior has lagged behind. Improved representation of irrigation water user behavior within models can provide more accurate and relevant information for irrigation management in the agricultural sector. This paper outlines a model that conceptualizes and proceduralizes observed farmer irrigation practices, highlighting impacts and interactions between the environment and behavior. It is developed using a bottom-up approach, informed through field experience and farmer interaction in the state of Uttar Pradesh, northern India. Observed processes and dynamics were translated into parsimonious algorithms, which represent field conditions and provide a tool for policy analysis and water management. The modeling framework is applied to four districts in Uttar Pradesh and used to evaluate the potential impact of changes in climate and irrigation behavior on water resources and farmer livelihood. Results suggest changes in water user behavior could have a greater impact on water resources, crop yields, and farmer income than changes in future climate. In addition, increased abstraction may be sustainable, but its viability varies across the study region. By simulating the feedbacks and interactions between the behavior of water users, irrigation officials, and agricultural practices, this work highlights the importance of directly including water user behavior in policy making and operational tools to achieve water and livelihood security.

Introduction

Globally, water resources face unprecedented challenges due to population growth and changing lifestyles, exacerbated by variations in climate, including more frequent extreme weather events (Famiglietti, 2014; Moors et al., 2011; Schewe et al., 2014). While the impact of these factors on water resources is experienced by many millions of people worldwide, it is typically the vulnerable in society who are most acutely affected (Adger et al., 2003; Amarasinghe et al., 2016a; Conway et al., 2015). Improvements in current water management strategies depend on an in-depth understanding of the drivers behind water use, among the most important of which are the practices of stakeholders. Human behavior is a significant driver of water resource insecurity (Dalin et al., 2017; Foley et al., 2005; Nazemi & Wheater, 2015). Despite this, the inclusion of water end user behavior in the planning and management of water resources has to date largely been neglected in research and model development (Nazemi & Wheater, 2015). This leads to an incomplete understanding of the problems and challenges facing communities and may result in poorly conceived water management strategies. Thus, incorporating users' behavior in water resource modeling could improve water resource management and enhance resilience under changing conditions. This is also the central premise of the Panta Rhei initiative of the International Association of Hydrological Sciences, which aims to reach an improved understanding of the water cycle by focusing on the interactions and feedbacks between hydrology and society (Montanari et al., 2013).
Approaches to water resource management have changed over time, and the role humans play in water security has become increasingly apparent (see Blair & Buytaert, 2016; Roobavannan et al., 2018). Modeling has played an important role in helping researchers and policy makers to better understand water resource use and resilience. However, while hydrological models are capable of representing complex natural processes, the representation of human behavior has lagged behind. While sociohydrology specifically refers to the dynamics and coevolution of coupled human and water systems (Sivapalan et al., 2012), the modeling approaches used to represent sociohydrological systems are varied. These include agent-based modeling, system dynamics, pattern-orientated modeling, Bayesian networks, coupled component modeling, scenario-based modeling, and heuristic-based modeling (for an overview see Blair & Buytaert, 2016). Top-down approaches, which may include system dynamics, aim to determine overall system functioning and are useful in situations where local-scale understanding is lacking (Blair & Buytaert, 2016). A disadvantage of this approach is that it can miss some underlying processes, producing a result that may be too simple for certain applications. On the other hand, bottom-up approaches such as agent-based modeling focus on the behavior and decision making of individuals (Bousquet & Page, 2004). Agents operate under rules, which determine the interactions and feedbacks between agents and their environment, and the approach has been used to investigate water resource management problems (Madani & Dinar, 2012; Ng et al., 2011). The approach can also examine societal impacts on the environment and the reactions of humans to environmental or policy change.

Sociohydrology is an evolving field, and among the recommendations for its advance is public participation (Lane, 2014; Srinivasan et al., 2016). Involving stakeholders has many advantages, including improved data collection and promoting buy-in to model results (Mostert, 2018). In addition, direct inclusion of stakeholders' insights and experience in model development increases model realism and real-world relevance. In order to fully represent water use, it is necessary to directly include human behavior. This is an important step in developing tools to better manage water resources and the feedbacks to water users. Attempting to do so through modeling physical processes alone is less likely to produce realistic results, or may produce the right results for the wrong reasons. A variety of models have been developed to represent the interactions between human behavior and the environment. These include farm-level decision making based on economics and resource availability (Foster et al., 2014; Inam et al., 2017), water resource competition between humans and ecosystems (van Emmerik et al., 2014), the system dynamics of smallholder farmers (Pande & Savenije, 2016), and the feedbacks between climate change and societal adaptation (Kuil et al., 2016). Complete behavioral representation is difficult to achieve through a top-down approach, as data and regulations rarely reflect what takes place on the ground, particularly in developing countries where data are scarce and governance is often inadequate or poorly enforced. This is the case in India, where water resource resilience has become one of the country's most important challenges (Amarasinghe et al., 2009; Briscoe & Malik, 2006; Shah, 2016).
India's vulnerability to environmental and socioeconomic changes highlights the necessity of good resource management practices. The introduction of improved irrigation technology, high-yielding drought-resistant seed varieties, and artificial fertilizers allowed Indian agriculture to expand rapidly and turned what was a famine-prone country into one that is now food self-sufficient (Jewitt & Baker, 2007; Singh, 2000). Despite the manifest benefits, however, the green revolution has led to increased strain on the region's water resources (Amarasinghe et al., 2009; Briscoe & Malik, 2006; Macdonald et al., 2016; Shah, 2016). Consequently, an understanding of the drivers and outcomes of change in water use is vital to develop sustainable and realistic management options to help safeguard water resources.

This paper outlines the development of a water resource and farmer livelihood modeling framework developed from the bottom up, which incorporates the behavior of water users. The framework provides a unique tool for identifying and testing potential water management options by incorporating real-world insights from observed farmer behavior informed by field-collected information (O'Keeffe et al., 2016), improving the representation of feedbacks and tipping points between water use and the environment. The model is applied to a number of districts in northern India; however, when local knowledge is collected, it is envisaged that the framework can be applied to a wide variety of locations, realistically representing the actions of water users under changes in anthropogenic and environmental conditions. The following sections outline the data (section 2), model conceptualization and development (section 3), model application (section 4), a description of the results (section 5), and a discussion of the outcomes, including limitations of the model (sections 6 to 7).

Fieldwork and Socioeconomic Data

More than 200 semistructured farmer interviews were carried out by the first author in Uttar Pradesh, a large and diverse state within the Gangetic plains of North India. The interviews were conducted across four districts (Sitapur, Sultanpur, Jalaun, and Hamirpur), which are representative of agricultural and water use practices in the region (ICRISAT-ICAR-IRRI Collaborative Research Project, 2012). The interviews sought to obtain information on water use and constraints as well as the socioeconomic and environmental factors that influence rural livelihoods. A complete description of the methodology and results of the field campaign is provided in O'Keeffe et al. (2016). Collected data include water application rates, irrigation scheduling, and water source, along with information describing cropping practices, particularly during the dry Rabi season (approximately November to March) and the monsoonal Kharif season (June to October). Additional socioeconomic information, such as crop yields and fertilizer costs, was obtained from secondary data sources, including the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT-ICAR-IRRI Collaborative Research Project, 2012) and the Government of India, Department of Fertilizers, Ministry of Chemicals and Fertilizers (2015). Fertilizer application rates were taken from Yadav (2003).
Climate Data

Observed rainfall and temperature data were obtained from the Indian Meteorological Department and the Tropical Rainfall Measuring Mission multisatellite precipitation analysis product, 3B42 version 7, from the National Aeronautics and Space Administration archive (Huffman et al., 2007). While the general circulation models were selected according to their ability to accurately model monsoon conditions in the region, the large spatial heterogeneity in convective rainfall patterns makes projections highly uncertain. To date, there has been little research on the possible effects of changes in climate on groundwater resources (Holman et al., 2012). In order to represent future climate uncertainty, the emission scenarios Representative Concentration Pathway (RCP) 4.5 and RCP 8.5, derived from CMIP5 projections, were chosen (Wayne, 2013). Time series representative of future climate conditions were obtained by perturbing the observed data using the delta change method (see Prudhomme et al., 2010); a sketch of this perturbation is given below. Relative change (for precipitation) and absolute change (for temperature) were calculated between the periods 1971-2005 and 2006-2040. The latter period was chosen as being most relevant for policy. Historical and perturbed values can be seen in Table 1. While considerable uncertainty surrounds Indian rainfall projections, research points toward more frequent extreme events (see Barik et al., 2017; Jena et al., 2015; Johnson et al., 2016; Menon et al.).
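As an illustration, the following is a minimal sketch of the delta change perturbation, assuming monthly change factors; the factors and records shown are placeholders, not the CMIP5-derived values reported in Table 1.

```python
# Minimal sketch of the delta change method: relative change factors for
# precipitation, absolute offsets for temperature, applied month by month.
precip_factor = {m: 1.05 for m in range(1, 13)}   # hypothetical +5% precipitation
temp_delta = {m: 1.8 for m in range(1, 13)}       # hypothetical +1.8 degC warming

def perturb(daily_records):
    """Perturb observed (month, precip_mm, temp_degC) records into a
    future series: precipitation is scaled, temperature is shifted."""
    return [(m, p * precip_factor[m], t + temp_delta[m])
            for m, p, t in daily_records]

observed = [(1, 0.0, 14.2), (6, 12.5, 31.0), (7, 48.0, 29.4)]  # placeholder data
future = perturb(observed)
```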
Groundwater Data

Data describing groundwater levels between 2002 and 2013 in the districts were obtained from the Central Groundwater Board of India (Central Ground Water Board, 2014). No groundwater level information was available prior to 2002 for the study region. While each district has numerous monitoring wells, many were excluded due to poor consistency in data recording. As a result, 14 monitoring wells were used in Sitapur, 44 in Sultanpur, 21 in Hamirpur, and 26 in Jalaun. Despite the poor spatial distribution (1 monitoring well per 410 km² in Sitapur, 1 per 100 km² in Sultanpur, 1 per 150 km² in Jalaun, and 1 per 200 km² in Hamirpur), this information represents the best available observation data for the study area. The regional geology, alluvial aquifers comprising silts, sands, clays, and gravels, suggests less spatial variability in groundwater levels than would be found in hard rock aquifers, increasing confidence in applying these groundwater levels to the study region.

Conceptual and Perceptual Model Development

Perceptual Model

The field observations were analyzed in detail in a previous study (O'Keeffe et al., 2016). Here we synthesize them in a set of dominant observed processes, which together constitute our perceptual model (Beven, 2012):

1. Irrigation scheduling depends on water availability, seed developer guidelines, and local knowledge. Farmers typically follow a set irrigation schedule. However, since access to or availability of water can be an issue, farmers may not always irrigate at the optimum time.
2. Conjunctive use of water sources is widespread. Irrigation canals provide an irregular but important source of water to some farmers and, because of their low cost, are used in preference to groundwater when possible. Proximity to a canal is not always an indication of access.
3. Farmers will continue to irrigate despite increasing prices. While the price of irrigation was found to be a major concern for farmers, it did not have a significant influence on irrigation practices. Farmers' first priority was to provide food for their own families, and they were willing to spend more to achieve this.
4. Canal recharge benefits all. Farmers with land located close to a working canal may benefit from the contribution of canal leakage to aquifer recharge, leading to more stable groundwater levels and lower pumping costs.
5. Water application is not measured. Farmers do not record the volume of water they apply to crops; instead, they observe the approximate depth water reaches within their bunded fields.
6. Irrigation return flow is an important hydrological process. The majority of farmers use flood irrigation, much of which is lost through evaporation or returned to the underlying aquifer.
7. Irrigation time increases with decreasing water tables. Farmers described increasing irrigation costs with decreasing groundwater levels, particularly during the dry, premonsoon season, as lower water levels mean that pumps are required to run for longer in order to abstract the same quantity of water.
8. Farmers' solution to a lack of water: drill deeper wells. The most common solution to declining water tables reported by farmers was to drill deeper wells.

This information was used to develop a conceptual model, which is described in detail in the remainder of section 3 and in Figure 2. The most important feedbacks between the physical and behavioral elements of the framework can be seen in Figure 1.

Conceptual Model

In a next step, we conceptualized our perceptual model as three coupled submodels representing hydrology, crop yield, and farmer livelihood, which are described in detail in the following sections. Throughout, we use indices t and T to index days and years, respectively. Thus, Δt = 1 day and ΔT = 1 year.

Hydrology

A single cell model is employed to simulate the response of water resources to changing socioeconomic and environmental conditions. This class of hydrological model is commonly used in the field of water resources economics, and there is an extensive body of literature describing their application (see de Frutos Cachorro et al., 2014; Gisser & Mercado, 1973; Koundouri, 2004). The soil column is represented in terms of the total available water, TAW (M/L²), which describes the maximum amount of water that is available to plants at field capacity:

TAW = (FC − WP) Z_r,

where FC and WP (M/L³) are, respectively, field capacity and wilting point, and Z_r (L) is the maximum root depth in meters. The proportion of TAW that can easily be extracted from the root zone before the soil moisture deficit impedes plant growth is termed the readily available water,

RAW = p TAW,

where p is the crop-specific depletion factor and the dimensions of RAW are the same as those for TAW. The daily water balance equation, expressed in terms of root zone depletion, D_r (M/L²), is written

D_{r,t} = D_{r,t−1} − (P_t − RO_t) − I_t + ET_{c,t} + R_t,

where ET_c is actual crop evapotranspiration, R is recharge, P is precipitation, RO is surface runoff, and I is irrigation, with

ET_c = K_s K_c ET_0,

where K_c is a crop coefficient, which varies according to crop growth, K_s is a water stress coefficient, and ET_0 is reference evapotranspiration. We use the Hargreaves-Samani equation to estimate ET_0, but other approaches can be used (see Itenfisu et al., 2003; McKenney & Rosenberg, 1993). Crop coefficients are obtained from Allen et al. (1998) and from field work conducted in North India by Choudhury et al. (2013). The water stress coefficient is calculated as

K_s = (TAW − D_r) / (TAW − RAW) when D_r > RAW, and K_s = 1 otherwise.

Spatial and temporal rainfall variability is taken into account by adding a noise component drawn from a normal distribution. A sketch of this daily water balance is given below.
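The following is a minimal sketch of the daily root-zone water balance described above, assuming the standard FAO-56 form of the stress coefficient; all parameter values are illustrative placeholders rather than calibrated values from Table 3.

```python
import random

# Illustrative soil and crop parameters (placeholders, not calibrated values).
FC, WP = 0.30, 0.12            # field capacity and wilting point (m3/m3)
Z_R, P_DEPLETION = 1.0, 0.55   # root depth (m) and depletion factor p
TAW = (FC - WP) * Z_R * 1000   # total available water (mm)
RAW = P_DEPLETION * TAW        # readily available water (mm)
KC, RUNOFF_COEF = 1.15, 0.20   # crop coefficient and runoff coefficient

def daily_step(d_r, rain, irrigation, et0):
    """Advance root-zone depletion d_r (mm) by one day; returns (d_r, recharge)."""
    runoff = RUNOFF_COEF * rain
    # Water stress coefficient: 1 until depletion exceeds RAW, then declining.
    k_s = 1.0 if d_r <= RAW else max(0.0, (TAW - d_r) / (TAW - RAW))
    et_c = k_s * KC * et0
    d_r = d_r - (rain - runoff) - irrigation + et_c
    recharge = max(0.0, -d_r)            # excess above field capacity percolates
    d_r = max(0.0, min(d_r, TAW))        # depletion bounded between 0 and TAW
    return d_r, recharge

d_r = 0.5 * TAW
for day in range(120):
    rain = max(0.0, random.gauss(2.0, 4.0))   # noisy rainfall, as in the model
    d_r, recharge = daily_step(d_r, rain, irrigation=0.0, et0=5.0)
```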
A runoff coefficient is used to partition rainfall into runoff and infiltration. Farmers in the surveyed districts typically use flood irrigation and apply water to their crops at set intervals during the growing season. Thus, farmers are assigned an irrigation volume drawn from a normal distribution with mean and standard deviation derived from field data. To account for spatial and temporal heterogeneity in irrigation timing, the model is programmed to randomly select the day on which irrigation takes place from a normal distribution whose parameters are again based on observations. Recharge from the root zone to the underlying aquifer is assumed to occur when the water content of the root zone exceeds field capacity, that is, when the depletion D_r would otherwise become negative; the excess water is passed to the aquifer as recharge.

Canals in India are typically operated by the Irrigation Department, and while water supply is often unreliable, it is typically free or very cheap (O'Keeffe et al., 2016). Within the model, farmers' access to canals is predetermined and does not change during the simulation. On the other hand, groundwater abstraction through private tube wells, which considerably outnumber all other types of well, is more expensive to the farmer because of the upfront cost of installing the well in addition to the cost of buying and operating the pump. Outside northwest India, where many farmers have access to heavily subsidized electricity, we found that farmers typically rely on diesel pumps with comparatively expensive running costs. Thus, we assume that farmers with access to a canal preferentially use this water source when it is available, otherwise relying on groundwater if they have access to a borehole of sufficient depth. We assume that farmers outside the canal command area only irrigate if they have access to an operational borehole. Lastly, we assume a leaky canal system, which contributes recharge to the aquifer (Macdonald et al., 2016). Consistent with the single cell paradigm, the aquifer is represented as a bathtub with spatially homogeneous hydrogeological characteristics such as groundwater level, aquifer thickness, and specific yield. Drawing these assumptions together, the change in aquifer storage, H (M/L²), is expressed as a balance of recharge, canal leakage, and abstraction, where V (M) is the amount of water held in the canal and l (L⁻²/T) is a leakage coefficient.

Crop Yield

Within the model, crop yield is the principal link between farmer livelihood and agricultural water use. It is calculated using the relationship between crop production and evapotranspiration developed by Doorenbos and Kassam (1979), which can be expressed as

1 − Y_a / Y_x = K_y (1 − Σ_{t=d_s}^{d_h} ET_{c,t} / Σ_{t=d_s}^{d_h} ET_{x,t}),

where Y_x is maximum yield, Y_a is actual yield (both with dimensions M/L²), K_y is the yield response factor, ET_x is maximum evapotranspiration, ET_c is actual crop evapotranspiration, d_s is the sowing day, and d_h is the harvesting day. Y_x is taken from annually reported field information, which implicitly incorporates the biophysical impacts of fertilizer, improvements in seed variety, or crop disease. While other factors limit crop production, such as labor and nutrient availability, farmers in the surveyed districts stated that water availability, in terms of timely access and volume, was the largest constraint on production. A sketch of this yield calculation is given below.
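The following is a minimal sketch of the Doorenbos-Kassam yield calculation, assuming constant daily evapotranspiration over the season; the crop parameters shown are placeholders.

```python
def crop_yield(y_max, k_y, et_c_daily, et_x_daily):
    """Actual yield from the Doorenbos-Kassam water production function:
    1 - Ya/Yx = Ky * (1 - sum(ETc)/sum(ETx)) over the growing season."""
    et_ratio = sum(et_c_daily) / sum(et_x_daily)
    return y_max * (1 - k_y * (1 - et_ratio))

# Hypothetical season: mild water stress keeps ETc below its maximum.
et_x = [5.0] * 120   # unstressed crop evapotranspiration (mm/day)
et_c = [4.2] * 120   # actual crop evapotranspiration under stress (mm/day)
yield_t_ha = crop_yield(y_max=3.5, k_y=1.0, et_c_daily=et_c, et_x_daily=et_x)
print(f"Simulated wheat yield: {yield_t_ha:.2f} t/ha")
```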
Livelihood

The conceptualization of the feedbacks between farmer livelihood and irrigation behavior is fundamental to the model. Farmer livelihood, L, is considered in terms of the difference between farm income, m, and farm expenditure, z:

L = m − z.

Farm income is limited to the amount of money that farmers receive at the market for their crops, expressed as

m = Σ_{c=1}^{n_c} q_c Y_{a,c} A_c,

where n_c is the number of crops grown in an agricultural year and q and A (L²) are the price and area of crop c, respectively. The model explicitly includes expenditure on irrigation and fertilizer. Other items, such as living expenses, education, and loan repayments, are represented implicitly through a single parameter, s, the fraction of income that is saved on an annual basis. We assume that canal irrigation is free, while the cost of groundwater irrigation is a function of the cost of diesel, pump efficiency, and depth to groundwater. The consumption of diesel, V_d (L³/T), required to abstract groundwater from depth h (L) is estimated from empirical data collected by the University of Nebraska (Martin, 2003), with the pump efficiency entering as a divisor. The total cost of groundwater abstraction, m_a, can then be calculated as

m_a = q_d V_d,

where q_d is the unit cost of diesel. At the end of each year, if net farm income (i.e., livelihood) is positive, the farmer saves the proportion s of the difference between income and expenditure. During periods of low income, farmers use their savings as a buffer to sustain production. During this time, irrigation may still take place until a lower groundwater limit is reached. In reality, the shortfall in revenue may be compensated by off-farm activities, loans, and/or scaling back other outgoings, which the model does not explicitly consider. Once the lower limit is reached, irrigation no longer takes place and rainfall becomes the only source of water sustaining crop growth.

The water use options available to each farmer vary in time and space. As highlighted in the perceptual model, farmers who rely on groundwater for some or all of their water supply will often drill deeper wells in order to safeguard their water supply. We conceptualize this behavior by dividing farmers into categories according to the depth of their well. This approach follows Srinivasan et al. (2010), who categorized households in Chennai, India, according to their level of access to the municipal water supply. The number of categories and the number of actors within each category are set by the modeler. At model initialization, all farmers are randomly assigned a category, C_w, and at the end of each year farmers with sufficient savings change categories by paying for a deeper well, where m_w is the cost of installing a new well, assumed to be the same regardless of the depth of the new well, and C_w^max is the category corresponding to the maximum well depth. The cost of installing a new well is subtracted from the farmer's savings.

Behavior

Human behavior forms the backbone of the modeling framework, acting as the control structure that coordinates the operation of the hydrological, crop production, and livelihood components. This is shown graphically in Figure 2, where the behavioral elements driving the modeling framework are identified. Observed farmer behavior is represented in the hydrology model in equation (7) and in the livelihood model in equations (9), (11), (13), and (14). A sketch combining the livelihood components is given below.
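The following sketch combines the livelihood components described above. Because the empirical coefficients of the Martin (2003) diesel relation are not reproduced in the text, a generic "energy proportional to depth times volume" form stands in for it, and all numeric values are hypothetical placeholders.

```python
# Sketch of the annual livelihood update: income from crops, minus
# irrigation (diesel pumping) and fertilizer costs, with savings and
# a well-deepening decision. All coefficients are hypothetical.
DIESEL_PRICE = 60.0        # unit cost of diesel, q_d (currency/litre)
PUMP_EFFICIENCY = 0.35     # pump efficiency (dimensionless)
WELL_COST = 15000.0        # cost of installing a deeper well, m_w
MAX_CATEGORY = 3           # category of the deepest well, C_w^max

def diesel_volume(depth_m, volume_m3, efficiency):
    """Diesel needed to lift a water volume from a given depth; a generic
    'energy proportional to depth x volume' form stands in for the
    empirical relation of Martin (2003)."""
    return 0.002 * depth_m * volume_m3 / efficiency

def annual_update(income, irrigation_cost, fertilizer_cost,
                  savings, well_category, save_fraction=0.3):
    """Apply L = m - z, save a fraction s of a positive net income,
    and upgrade the well category if savings cover a deeper well."""
    net = income - irrigation_cost - fertilizer_cost
    if net > 0:
        savings += save_fraction * net
    if savings >= WELL_COST and well_category < MAX_CATEGORY:
        savings -= WELL_COST
        well_category += 1
    return savings, well_category

pump_cost = DIESEL_PRICE * diesel_volume(depth_m=12.0, volume_m3=900.0,
                                         efficiency=PUMP_EFFICIENCY)
savings, category = annual_update(income=45000.0, irrigation_cost=pump_cost,
                                  fertilizer_cost=6000.0,
                                  savings=8000.0, well_category=1)
```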
Behavioral and Climate Change Scenarios

While many plausible future socioeconomic scenarios exist, including changes in dominant crop types or changes in the cost of energy sources, the scenarios outlined in Table 2 were chosen as plausible present and future versions of the water use environment in North India. These were informed through the relevant literature as well as field work in the study region (Amarasinghe et al., 2016b; Barik et al., 2017; O'Keeffe et al., 2016). An initial baseline, business-as-usual run was completed and compared with the limited observed data available for the study region. Given the strain India's growing population is likely to place on food demand, an increase in irrigation intensity encouraged by government is likely. This is modeled in scenario 2 as an additional irrigation event, which takes place during the dry season. For scenarios 2 and 3, this same change in farmer behavior is modeled under predicted changes in climate. No changes were made to farming practices except the inclusion of an additional irrigation event.

Calibration

Model Initialization and Calibration

Model calibration and output verification require observations, which is a major challenge in data-scarce environments. Relevant socioeconomic data for comparison with model outputs are particularly difficult to obtain, as details of incomes, savings, and expenditure are limited. Model applications in each of the four study districts were manually calibrated using groundwater levels and crop yields, which represented the best available observed data. The conceptual model, which was developed using observations of local conditions, was considered throughout the procedure to ensure that all parameters were realistic. Calibration was performed manually by visually comparing simulated groundwater levels and crop yields against available groundwater level observations. Two parameters were adjusted: the runoff coefficient and the evaporation coefficient. Calibration took place during initial model runs, establishing a base case (Harou et al., 2009). Subsequent model outputs are compared to observed groundwater levels and reported crop yields to evaluate the outcomes of the scenarios relative to the baseline conditions. Initialization values and parameters used during model operation are shown in Table 3.

Results

The following sections describe the model results for each of the scenarios. Output variables include changes in groundwater, crop yield, and farmer income.

Groundwater

To evaluate model operation, modeled groundwater outputs (1971 to 2013) are compared to the best available observed groundwater data (Figure 3). Observed data lie within the range of modeled outputs in all four study districts and largely mirror the trends of reported groundwater levels. The median modeled outputs are used as a baseline for comparison across all other modeled scenarios. Modeled changes in groundwater levels due to predicted climate change are shown in Figure 4. In the northern districts of Sitapur and Sultanpur, groundwater levels are predicted to remain largely unchanged. In the southern district of Jalaun, modeled groundwater levels increase by approximately 5 m over baseline conditions by 2005. Water levels in Hamirpur under RCP 8.5 are expected to fall approximately 5 m while remaining close to baseline conditions under RCP 4.5. As expected, under additional irrigation practices, groundwater levels deplete at an increased rate when compared to the baseline scenario.
This is more pronounced in the southern districts of Jalaun and Hamirpur (Figure 5). In Sitapur, median water levels vary between 2 and 9 mBGL throughout the model run, reaching approximately 5 mBGL by 2005. Median water levels under increased abstraction are 5 to 6 m lower than under current business-as-usual conditions by 2005. Overall, however, water levels appear sustainable, showing an increasing trend post 2002. There is little variation between increased groundwater abstraction under baseline conditions and the same behavior when predicted future climate is taken into account (Figure 5). Sultanpur maintains an extensive canal system, and groundwater levels in the district are predicted to remain largely stable under an increased irrigation scenario. Between 1971 and 2005, the aquifer depletes at approximately 0.14 m/year, ranging from 5 to 10 mBGL. Under increased irrigation and predicted future climate, median modeled groundwater levels are expected to fall by approximately 10 m by 2005 when compared to groundwater levels under current irrigation practices.

Water is also supplied through canals in Jalaun. Despite this, the model suggests declining water levels, falling to approximately 30 mBGL by 2005. Overall, groundwater levels are expected to decline by up to 25 m by the end of the model run, suggesting that additional premonsoon irrigation from groundwater sources is unsustainable in the district. When predicted future climate is accounted for, groundwater levels are expected to be broadly similar under increased abstraction (see Figure 5). Of the four districts studied, water levels in Hamirpur show the steepest decline under increased irrigation. Here water levels fall at approximately 1.3 m/year between 1971 and 2005, a reduction of 45 m when compared to model outputs driven by current practices, suggesting that water resources in Hamirpur are not capable of sustaining increased groundwater abstraction. Modeled outputs suggest that variations in predicted future climate will have little impact on water levels when increased abstraction is encouraged.

Farmer Income

Net farmer income is derived from the revenue generated from growing crops, less the expense of irrigation and fertilizer. The annual prices for the fuel used for irrigation and for fertilizer, along with the market prices for each crop, were obtained from socioeconomic data sets (Government of India, Department of Fertilizers, Ministry of Chemicals and Fertilizers, 2015; ICRISAT-ICAR-IRRI Collaborative Research Project, 2012). The income values discussed are adjusted for inflation, an important factor to consider when assessing how farmer income has changed over the model run period. Inflation was accounted for using consumer price index values (Triami, 2016), adjusting income to 1971 levels and providing a time series in constant rupees. A comparison of farmer income under increased irrigation with and without future climate scenarios reveals little variation in any of the four districts (Figure 6). All outcomes are higher than under business-as-usual baseline conditions. Farmers who grow rice in addition to wheat (Sultanpur and Sitapur) receive higher income from the combined revenue generated by the two crops (Figure 6). In Sitapur, increased irrigation does not result in additional farmer income, as the revenue gain is matched by production costs.
Crop Yield

As expected, the introduction of an additional irrigation event for wheat results in an increase in yield, ranging from 0.2 to 0.6 tonnes/ha across the four districts (Figure 7, where the black line represents recorded annual crop yields [ICRISAT-ICAR-IRRI Collaborative Research Project, 2012]). Under increased irrigation, the model results show that farmers in Sitapur will receive median wheat yields approximately 0.2 tonnes/ha larger than those under baseline conditions, while yield values in Sultanpur are expected to increase by up to 0.5 tonnes/ha. Simulated wheat yield for farmers in Jalaun and Hamirpur also increases, up to 3.2 tonnes/ha in Jalaun, or 0.2 tonnes/ha more than under baseline conditions, and by a median of up to 0.5 tonnes/ha in Hamirpur, resulting in approximately 2.4 tonnes/ha by the end of the model run. As irrigation practices are not changed for rice cultivation, there is little difference in yield, with overall values matching those produced during the baseline run (Figure 7). The increase in crop yield resulting from an additional irrigation event is maintained under future climate scenarios RCP 4.5 and RCP 8.5 (Figure 7). There is only a marginal change in rice yields, which remain similar to baseline model outputs throughout (Figure 7).

Discussion

This paper explores the integration of water user behavior in a sociohydrological modeling framework in order to simulate the feedbacks between anthropogenic and environmental variables. Model development has been informed by interviews conducted with over 200 farmers in Uttar Pradesh, northern India, providing field-level insight into the operation of and challenges behind water use. The model is applied to four districts representative of conditions across the Indo-Gangetic plain and is used to investigate the impacts that increased groundwater abstraction and changes in future climate may have on water resources and farmer livelihood. Our results show that predicted future climate alone may not substantially affect water resources. Nevertheless, climate change may indirectly affect variables outside the modeled environment, such as energy price and availability or the cost of fertilizer, leading to uncertainty and market volatility. It is possible, however, that future socioeconomic factors will lead to additional water abstraction. Results suggest that increasing irrigation prior to the onset of the monsoon, as suggested by Amarasinghe et al. (2016b) and Revelle and Lakshminarayana (1975), is potentially viable in Sitapur and Sultanpur. This is not the case in Jalaun or Hamirpur, however, where an unsustainable depletion of groundwater levels is likely under the same behavior. The variability of results between the study districts highlights the importance of collecting data that are relevant to the inferences made and the potential decisions that may be taken, as actions that are applicable in one location may not work in another despite their relative proximity.

The scenarios and results described highlight the ability of the model to show how changes in anthropogenic or environmental conditions can impact farmer livelihood and water resources. Due to limited data, however, this model is necessarily a simplified representation of reality, which leads to a number of limitations. Groundwater is represented within the model as a single cell where inflows are supplied by rainfall and canal flow. Outflows occur through abstraction, evaporation, and transpiration.
Lateral subsurface groundwater flow into or out of the cell is not taken into account. A single water level is applied to all farmers across the cell, and the model does not account for well interaction. While this approach is less of an issue in unconsolidated alluvial aquifers, such as those found in the Ganges Basin, model uncertainty will increase when the approach is applied to hard rock aquifers. Crop production is determined through the relationship between evapotranspiration and yield (see Doorenbos & Kassam, 1979; Smith & Steduto, 2012). While the model accounts for the impact of water availability on crop production, it does not explicitly account for the biophysical impacts of fertilizer application or improvements in seed variety, except through the reported increase in observed yield, which is used in equation (8).

Representation of socioeconomic conditions was a major challenge during this study. In reality, the way in which farmers save and spend their income is highly variable and depends on a range of factors that are outside the scope of this work. The model assumes that individual farmers will retain savings for investment in their water security and does not take into account the many other options, for example, their children's education or investment in aspects of their farm besides irrigation. It is also assumed that all farmers sell their crops for the same price and that there is indeed a market for their produce. The model does not take into account that a proportion of the crops grown are for personal consumption, a common practice among interview participants. Loans, repayments, supplementary farmer income, and water markets were not directly considered; these elements can lead to changes in farmer behavior including, but not limited to, drilling additional tube wells. Despite some limitations, the framework captures the most important aspects of the farmers' environment and represents an advancement in hydrological modeling by directly including human behavior. The modeling framework is capable of identifying trends and tipping points, providing a useful tool for policy analysis, planning, and resource management. The model is adaptable and can be used as the basis for studies across a wide variety of locations and environments to represent a range of scenarios as well as socioeconomic and biophysical conditions.

Conclusions

This paper describes the development of a modeling framework that directly includes water user behavior through a set of built-in rules. Field-collected insights are used to produce a tool rooted in reality, capable of examining the impacts of changes in environmental and anthropogenic conditions on farmer irrigation behavior. The framework is adaptable and capable of incorporating a wide variety of farmer behavior across a range of socioeconomic and biophysical conditions. The model is applied to four districts in Uttar Pradesh, North India, to investigate the effect of changes in policy and climate on farmers and water resources. Model results highlight that changes in human behavior may have a larger impact on water security and stakeholder livelihood than changes in climate. In addition, increased irrigation under predicted future climate may be possible in Sitapur and Sultanpur. However, in the southern districts of Jalaun and Hamirpur, similar practices are unlikely to be sustainable, as all scenarios involving increased abstraction predict groundwater levels falling to unsustainable levels.
Predicted climate change alone is unlikely to adversely impact water resources, crop yields, or farmer income, although any potential increase in the costs of energy or fertilizer as a result of climate change is not accounted for. Under scenarios in which irrigation is increased, the water levels in all districts show a decline from the baseline, along with an increase in wheat yield. This results in increased income for farmers in Jalaun and Hamirpur but not for Sitapur or Sultanpur, where the production costs outweigh the advantages of additional irrigation. The results show the importance of conjunctive use of groundwater and surface water, and that under certain conditions an increase in groundwater abstraction may be feasible. The modeling framework developed here is necessarily a simplified version of reality. As limited data exist in the study region, parametrization and calibration are difficult. Consequently, the model is not intended to be fully predictive but rather to serve as a tool that can be used to highlight trends and tipping points and to understand the outcomes of stakeholder practices.
Multi-Objective Optimization Algorithm for the Analysis of Diabetes Disease Diagnosis

There is a huge amount of data available in the health industry which is difficult to handle; hence, data mining is necessary to uncover hidden patterns and their relevant features. Recently, many researchers have devoted themselves to the study of using data mining in disease diagnosis. Mining biomedical data is one of the predominant research areas where evolutionary algorithms and clustering techniques are emphasized in diabetes disease diagnosis. Therefore, this research focuses on the application of the evolutionary clustering multi-objective optimization algorithm (ECMO) to analyze the data of patients suffering from diabetes disease. The main objective of this work is to maximize the prediction accuracy of clusters and the computational efficiency, along with minimizing the cost of data clustering. The experimental results show that this application attained maximum accuracy for the Pima Indians Diabetes dataset from the UCI repository. In this way, by analyzing the three objectives, ECMO could achieve the best Pareto fronts.

Keywords: Clustering; Genetic Algorithm; Multi-objective Optimization; ECMO; Diabetes Disease

I. INTRODUCTION

Among numerous diseases, the health department of India has identified diabetes as a leading cause of death domestically. In recent years, it has been observed that this problem is growing at an alarming rate, with massive amounts of patient data [1]. In this view, this work adopts the evolutionary clustering multi-objective optimization algorithm (ECMO), which extends the earlier NL-MOGA, for analyzing diabetes disease datasets [2]. Global optimization tools such as genetic algorithms use validity measures for evaluating clustering accuracy [3]. However, no single validity measure works equally well for different datasets while simultaneously producing high clustering accuracy. Some recent studies have therefore posed the problem of data clustering as a multi-objective optimization problem in which several cluster validity measures are optimized concurrently to obtain trade-off clustering solutions.

Depending on the dataset properties and its inherent clustering structure, different cluster validity measures perform differently [4]. Therefore, it is important to find the validity measures that can simultaneously attain good clustering results. In order to evaluate the quality of the clustering, external measures like the Jaccard index, Minkowski index, Rand index, and so on can be utilized to optimize the multi-objective problem [5]. These measures are used to identify the intra-cluster similarity (compactness) and the inter-cluster separation. In this paper, cluster compactness and separation are evaluated using the Rand index, which measures both the cohesion and the separation of clusters [6]. In its standard pair-counting form, the Rand index is calculated as

R(T, C) = (a + d) / (a + b + c + d),

where T is the true clustering of the selected dataset, C is the clustering result returned by some algorithm, and a, b, c, and d count the pairs of objects grouped consistently or inconsistently between T and C. A value close to +1 indicates a good clustering. Hence, the best cluster accuracy can be calculated using this index; a sketch of the calculation is given below.
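The following is a minimal sketch of the Rand index in its standard pair-counting form; the label vectors are hypothetical placeholders, not the Pima data.

```python
from itertools import combinations

def rand_index(true_labels, cluster_labels):
    """Rand index from pair counts: (a + d) / (a + b + c + d), where
    a = pairs together in both T and C, d = pairs apart in both."""
    a = b = c = d = 0
    for i, j in combinations(range(len(true_labels)), 2):
        same_t = true_labels[i] == true_labels[j]
        same_c = cluster_labels[i] == cluster_labels[j]
        if same_t and same_c:
            a += 1
        elif same_t and not same_c:
            b += 1
        elif not same_t and same_c:
            c += 1
        else:
            d += 1
    return (a + d) / (a + b + c + d)

# Hypothetical true diagnoses (T) versus a clustering result (C).
T = [1, 1, 0, 0, 1, 0]
C = [1, 1, 0, 1, 1, 0]
print(f"Rand index: {rand_index(T, C):.3f}")
```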
Values nearing −1 would thus indicate inaccuracy [7]. In clinical diagnosis, however, inaccuracy manifests as false positive and false negative results. A false positive (inaccurate-positive) result diagnoses disease in a patient who in reality does not have it, while a false negative (inaccurate-negative) result fails to detect disease in a patient who does. In general, false negative results cause a greater impact than false positive results for both doctors and patients. Accordingly, the analysis of medical disease needs to concentrate more on the high-cost state of false negatives than on the lower-cost state of false positives.

Therefore, the ECMO algorithm, which combines data mining technology with a genetic algorithm, is applied to analyze the disease and produce high-accuracy results by optimizing the low-cost and high-cost values. In this light, accuracy and cost are conflicting objectives. Hence, optimal results can be achieved by setting a minimum acceptable accuracy rate. Provided all these conditions are attained, higher accuracy and lower cost values yield better results. The optimum values can be drawn from the Pareto fronts. The rest of the paper is organized as follows. In Section II, a brief review of past studies is presented. In Section III, the methodology of ECMO is discussed in detail. Section IV shows the experimental results obtained from the study. Finally, conclusions and possible research issues are presented in Section V.

II. LITERATURE REVIEW

Sriparna et al. [8] proposed a multi-objective clustering technique to partition data into appropriate clusters. Their work aims to optimize the total compactness of the partitioned clusters, the symmetry of the clusters, and the connectedness of the clusters; the algorithm uses the Silhouette index to measure the validity of the clusters. Hector et al. [9] presented a technique to identify the main folds in large datasets, summarizing the original search space with a Map-Reduce architecture to identify Voronoi regions. Guang et al. [10] described a generate-first-choose-next method using upper bounds, lower bounds, and an inequality-constrained engineering problem based on surrogate models; the algorithm failed to adopt a weighted-sum approach.

Lei et al. [11] devised a clustering-ranking algorithm using a series of reference lines as cluster centroids, with the solutions ranked accordingly. Anibran et al. [12] defined an interactive genetic algorithm based multi-objective approach that could simultaneously find clustering solutions by evaluating the validity measures; the algorithm reduces the fatigue of the decision maker by generating only the important solutions from the current population. A massive clustering based multi-objective genetic algorithm is presented in [13], and the author extended the research with an enhanced K-means genetic algorithm for optimal clustering. The author overcomes the drawback of local optima with a suitable dataset, although the algorithm falls short in computational time. It is inferred that the algorithm produced more than 90% accuracy for real-life datasets. The author also adopted a neighborhood learning strategy for optimizing multi-objective problems; this algorithm used a K-means genetic algorithm to find the compactness of the clusters, and it is noted that the algorithm could produce a minimum index value for most of the datasets. However, there is a need for proper feature selection for better, more optimal solutions [14,15]. Ruby et al. [16] suggested two methods for the ranking of MOPs; these ranking methods were used to prune large datasets of solutions down to a small subset of good solutions.
Edward et al. [17] presented an approach that extracts knowledge about conflicting interests, such as traceability and transparency, to obtain consensus data. Min Han et al. [18] considered mutual information based feature selection to enhance the searching capability over the data. Partha Pratim et al. [19] proposed a high-dimensional feature selection technique that preserves sample similarity using a shared-neighbor distance technique to reduce outliers with minimum computational complexity.

III. METHODOLOGY

This section addresses the issues specified in Section II by applying the evolutionary clustering algorithm (ECMO) to MOPs. First, ECMO generates a uniform set of objects as the population. Then, the population is treated with three main procedures until the termination condition is satisfied: the criterion learning algorithm (CLA), the knowledge acquisition algorithm (KAA), and the optimal cluster-ranking algorithm (RA). The ultimate goal of CLA is to perform a global search based on the discovered criteria; knowledge is then acquired through constant learning of dominance, while RA refines the process by grouping the most relevant data with the help of a ranking strategy.

A. Evolutionary Clustering Algorithm for Multi-objective Optimization

This research inherits ECMO, which handles data by adopting the criterion learning algorithm. The criterion for a particular objective is designed based on cluster location. Neighborhood data such as the closest neighbor, farthest neighbor, and indirect neighbor are identified using the knowledge acquisition algorithm. Based on the dominance of individuals, the data can then be grouped and ranked using the best knowledge ranking algorithm. The optimal Pareto fronts are achieved using a balancing Pareto front algorithm that is capable of finding the best features of the particular dataset. Therefore, the fitness function for diabetes disease diagnosis using ECMO maximizes the cluster accuracy while minimizing the numbers of false negatives and false positives (equation (4)). Hence, by adopting the rules of the knowledge acquisition algorithm, true negative and true positive objects can be identified, and maximum cluster accuracy can be achieved through the best knowledge ranking algorithm of ECMO.

IV. EXPERIMENTAL STUDIES

To evaluate the performance and efficacy of the proposed ECMO algorithm, experiments with the unsupervised genetic algorithm are discussed in this section.

A. Data Set and Experimental Setting

The algorithm is tested on the Pima Indians Diabetes dataset taken from the UCI repository [20]. There are 768 records, of which 268 cases are with diabetes and 500 are without; 376 records contain missing values. The dataset contains 8 attributes plus one class attribute. Table I contains the information about the dataset used for the analysis. The algorithm was implemented in version 7.6 and executed on a Pentium with a 2.99 GHz CPU and 2 GB RAM, running Microsoft Windows XP.
B. Testing Datasets and Performance Metrics

The experiment on the dataset was conducted with 90% of the data for training and 10% for testing, over 20 independent runs. The foremost aim of a cluster validity index is to validate a clustering solution; such an index is useful in comparing cluster performance. We adopted the Rand index (RI) to compare the performance of the algorithm on the selected diabetes dataset. The cluster accuracy and the inaccurate-positive and inaccurate-negative rates for predicting diabetes disease are shown in Table II. It is inferred from the results that the average cluster accuracy, determined using the Rand index metric, is 98.48%. The Pareto fronts are presented in Fig. 1, which shows the best cluster accuracies produced by the selected objectives; blue indicates healthy objects, whereas pink and yellow indicate inaccurate-negative and inaccurate-positive objects, respectively. The evaluation metrics obtained by the ECMO algorithm are recorded in Table III. Fig. 2 shows the best Pareto fronts obtained by the selected class variables for a single run; the solutions selected from the Pareto fronts lie mostly in the knee regions.

For cluster prediction, the algorithm is able to produce accurate cluster classification with low inaccurate-positive and inaccurate-negative results. Table IV shows the impact of ECMO on these error rates. ECMO performs 20 independent iterations on the diabetes dataset during its clustering process. Notably, ECMO forms clusters with good convergence and diversity, as shown in Fig. 1, and Fig. 2 shows that ECMO can produce Pareto-optimal solutions for the selected objectives. Table III indicates that the Rand index of the proposed algorithm compares favourably with the other algorithms, with few exceptions; since an RI value of 1 indicates a perfect clustering, it is evident that ECMO generates good convergence and diversity. The experimental results substantiate that ECMO can identify appropriate feature sets using the criteria and produces better clusters by utilizing the knowledge procured from the neighbors. The algorithm adopts neighborhood learning from previous work, and the NLMOGA procedure is extended to determine the closest, farthest, and indirect neighbors. Based on the outcomes of CLA and KAA, excellent clusters were ranked, being more compact and less diverse. From Table IV it is inferred that the algorithm selected a minimum of five and a maximum of eight attributes as features for the objective function; it was also noted that the algorithm produced a maximum accuracy of 99.92% at the 5th iteration. The total numbers of false negatives and false positives were very small. The ECMO algorithm therefore produced high cluster accuracy at minimum computation time: it maximized cluster accuracy for the healthy class of the diabetes dataset while minimizing the inaccurate-positive and inaccurate-negative results in minimal CPU running time, which can reduce the cost substantially.
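Since the Rand index is the main validity measure used above, the following sketch computes it from two labelings using the standard definition (this is not the paper's own code):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of object pairs on which two clusterings agree
    (both together or both apart); 1.0 means identical partitions."""
    agree = 0
    pairs = list(combinations(range(len(labels_a)), 2))
    for i, j in pairs:
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b
    return agree / len(pairs)

print(rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: same partition, relabeled
print(rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # lower: partitions disagree
```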
V. CONCLUSION

This research applies the evolutionary clustering multi-objective algorithm (ECMO) to the diagnosis of diabetes, analyzing the Pima Indian Diabetes dataset from the UCI repository. In this work, the best features of the dataset were identified using the feature-selection step (CL) of the criterion learning algorithm. The inaccurate-positive and inaccurate-negative neighbors were identified using the knowledge acquisition algorithm; the algorithm is thus able to recognize suitable healthy and sick objects according to their similar and dissimilar properties on the selected features. ECMO shifts the objects' positions according to their relative proximity. The experimental results recorded optimal solutions with good Pareto fronts and high accuracy in clustering the healthy class, and the algorithm achieves good cluster accuracy in identifying the inaccurate-positive and inaccurate-negative results, thereby reliably satisfying the considered objectives. The algorithm can also predict an appropriate number of clusters for each of the three objectives. Much further work is needed to test the approach more extensively, to investigate the utility of different and additional objectives, to hybridize ECMO with the multi-objective particle swarm optimization technique for higher effectiveness, efficiency, and consistency, and to extend it to heterogeneous data.

After handling the missing values using the mutation operator, the testing of the data starts from the training samples. 1) TEST: 90% training data (353 cases) and 10% test data (39 cases) of the 392 complete records; in this test phase, 353 training cases and 39 test cases are considered. During each run, ECMO selects different features from the original attributes and the clustering accuracy is recorded; the experiment was repeated 20 times and the results are recorded in Table II.

TABLE II. AVERAGE CLUSTER ACCURACY FOR 20 RUNS
TABLE III. PERFORMANCE METRICS OBTAINED BY RAND INDEX (performance of ECMO on healthy, inaccurate-positive, and inaccurate-negative results for the diagnosis of diabetes disease)
TABLE IV. COMPARISON OF FALSE NEGATIVE AND FALSE POSITIVE RESULTS USING ECMO
2017-05-03T19:37:22.579Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "17909fcd431f2e5cba3b26cbc9a30eadb32f355c", "oa_license": "CCBY", "oa_url": "http://thesai.org/Downloads/Volume7No1/Paper_66-Multi_Objective_Optimization_Algorithm_to_the_Analyses.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "17909fcd431f2e5cba3b26cbc9a30eadb32f355c", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Computer Science" ] }
203568013
pes2o/s2orc
v3-fos-license
Is there any relation between connective tissue growth factor and scar tissue in vesicoureteral reflux?

Vesicoureteral reflux (VUR) is the most common uropathy in childhood and leads to an increased frequency of urinary tract infection (UTI) and renal scarring. Connective tissue growth factor (CTGF) plays an important role in the development of glomerular and tubulointerstitial fibrosis in progressive kidney diseases. The aim of this study was to investigate the relation between urinary CTGF and renal damage resulting from VUR. This cross-sectional study included 70 patients with VUR and 62 healthy sex- and age-matched children. Urinary creatinine and CTGF (uCTGF) concentrations were analysed in all cases, and the CTGF-to-creatinine ratio was calculated. The records of the patients' radiologic evaluations, including ultrasound, voiding cystourethrography, and 99m-technetium dimercaptosuccinic acid (DMSA) scintigraphy, were obtained retrospectively. The patient group was further divided into two groups according to the presence of renal cortical scarring on the DMSA scan. The study thus consisted of three groups: Group 1 (control), 62 children; Group 2 (VUR-positive, scar-negative), 24 patients; Group 3 (VUR-positive, scar-positive), 46 patients. The medians of uCTGF and of the uCTGF-to-creatinine ratio differed significantly among the three groups (p <0.001). Pairwise comparisons revealed that Group 1 had a significantly lower uCTGF level and uCTGF/creatinine ratio than Groups 2 and 3 (p <0.001 and p=0.002, respectively), whereas there was no statistically significant difference between Groups 2 and 3 (p=0.052). uCTGF is significantly increased in children with VUR, independent of the presence of renal scarring. Increased uCTGF, even in the absence of renal scarring, could be interpreted as the development and progression of glomerular and tubulointerstitial fibrosis in vesicoureteral reflux. Further experimental and clinical investigations are required to fully elucidate the mechanism of CTGF in vesicoureteral reflux.

Vesicoureteral reflux (VUR) is the backward flow of urine from the bladder into the ureters and sometimes into the renal pelvis and calyces, due to a defect of the ureterovesical junction. Primary vesicoureteral reflux is common among congenital urinary tract abnormalities. The development of renal parenchymal scarring (RPS) is associated with VUR: while renal parenchymal scarring is detected in 30-60% of children diagnosed with VUR for the first time, end-stage renal disease can be observed in 5-12% of them. [1][2][3][4][5][6][7][8] The incidence of renal parenchymal scarring, also called reflux nephropathy, has been reported as 32% in Turkish children with chronic renal insufficiency. It has been shown that VUR-induced renal parenchymal scarring increases the risk of developing hypertension and focal segmental glomerulosclerosis, and that bilateral VUR increases the risk of developing progressive renal failure. [7][8][9][10][11] Although a vast amount of information is available about the diagnosis and treatment of VUR, questions remain regarding how reflux leads to infection and renal damage. Furthermore, imaging is important in diagnosis and follow-up. The standard imaging modalities are renal ultrasound (USG), voiding cystourethrography (VCUG), and renal scintigraphy. Technetium-99m dimercaptosuccinic acid (DMSA) is a nuclear agent that most effectively demonstrates the renal cortical tissue and the functional difference between the two kidneys.
Noninvasive tests, in addition to such invasive and expensive imaging modalities, are also needed for diagnosis and follow-up. Renal fibrosis is the final common pathway for many kidney diseases that can progress to ESRD. As a consequence of inflammation and damage, humoral factors are secreted by infiltrating renal cells that stimulate the production of extracellular matrix molecules, resulting in the disruption of the normal function and integrity of the renal tissue. Connective tissue growth factor (CTGF), transforming growth factor-β1 (TGF-β1), platelet-derived growth factor (PDGF), neutrophil gelatinase-associated lipocalin (NGAL), kidney injury molecule-1 (KIM-1), fibroblast growth factor (FGF), and bone morphogenetic protein (BMP7) are the most important mediators of fibrogenesis. 12 For example, TGF-β1 is the most potent fibrogenic factor in renal diseases and the best indicator of renal damage.

CTGF is a member of the CCN family of secreted cysteine-rich regulatory proteins. It stimulates renal fibroblast proliferation and extracellular matrix (ECM) synthesis. Three different cell types, namely interstitial fibroblasts, mesenchymal cells, and epithelial cells, have been shown to express CTGF mRNA. CTGF-positive cells are primarily myofibroblasts in the tubulointerstitial region, and CTGF is synthesized together with α-smooth muscle actin (αSMA). CTGF is the major mediator of fibrogenesis independent of TGF-β1. 13 In vitro studies have shown that CTGF participates in matrix synthesis and fibrosis. CTGF mRNA expression has been observed to be up-regulated in many diseases, such as diabetic nephropathy and cardiomyopathy, fibrotic skin diseases, systemic sclerosis, biliary atresia, liver fibrosis, and idiopathic pulmonary fibrosis, as well as in non-diabetic acute or progressive glomerular and tubulointerstitial lesions. It has also been shown that CTGF plays a key role in the development and progression of diabetic renal fibrosis and that urinary CTGF levels are associated with the stage of diabetic nephropathy. 14 Urinary CTGF levels are elevated especially in diabetic nephropathy. 15 The purpose of this prospective study was to investigate the possibility of early detection of the relationship between urinary CTGF level and renal parenchymal scarring (RPS), which can develop secondary to reflux nephropathy, without the need for other invasive tests.

Material and Methods

This study was conducted with 70 patients with vesicoureteral reflux and 62 healthy volunteers admitted to the pediatric nephrology outpatient clinic of a research and training hospital between January 2014 and December 2015. There were 49 girls and 21 boys in the patient group and 38 girls and 24 boys in the control group. The patients were divided into 3 groups: Group 1 (control group) had no VUR or renal parenchymal scarring (62 children, mean age: 6.07±2.99 years); Group 2 (VUR only) had VUR but no renal parenchymal scarring (24 children, mean age: 4.61±3.96 years); Group 3 (VUR with renal parenchymal scarring) had both VUR and renal parenchymal scarring (46 children, mean age: 5.99±3.69 years). The renal USG, VCUG, and DMSA reports obtained during the patients' routine examinations were evaluated retrospectively. The diagnosis of VUR was made from the VCUG results and staged between 1 and 5 in accordance with the International Study Classification (International Reflux Study Committee, 1981).
16 VUR stages were summed if VUR was bilateral (cumulative VUR score, CVS). The patients were divided into three groups according to the cumulative VUR score as follows: mild VUR (CVS = 1-2), moderate VUR (CVS = 3-6), and severe VUR (CVS ≥ 7). Renal scarring was diagnosed with DMSA, performed 3-6 months after urinary tract infection (UTI). According to the DMSA results, renal scarring was scored as follows: 0 = normal; grade 1 = one lesion (mild scar); grade 2 = two lesions (moderate scar); grade 3 = diffuse renal scarring together with renal parenchymal damage (severe scar). 17 Importantly, patients were excluded if they had a history of pyelonephritis during the period between the DMSA scan and urine specimen collection; it was therefore assumed that the DMSA stage had not changed by the time of urine collection. Children with a history of urinary tract infection, glomerulonephritis, urinary tract stones, major anomalies, or chronic disease were not included in the study. The study protocol was approved by the Institutional Ethics Committee, and parental consent was obtained for each case after detailed information about the aims of the study had been provided.

Urine samples were collected from the patients for CTGF analysis. Patients were asked to provide a midstream urine sample first thing in the morning on an empty stomach. The urine sample, transferred into sterile containers, was centrifuged at 3,000 rpm for 5 minutes, and the supernatant was stored at -80°C until the time of analysis. After all samples had been collected, they were analysed with an ELISA (enzyme-linked immunosorbent assay) based on biotin double-antibody sandwich technology for human CTGF (Catalog No: PHG0286, Out Licensing, Life Technologies, 5791 Van Allen Way, Carlsbad, California 92008). Urine creatinine was measured using the modified Jaffe method on an automated analyzer (AU 5830, Beckman Coulter, USA), and urine protein was measured on a Cobas 6000 modular analyzer (c501 module, Roche, USA). Results were expressed as pg/ml and, relative to creatinine, as pg/mg creatinine (urinary protein: normal <15 mg/dl; trace 15-30 mg/dl; positive >30 mg/dl).

Statistical analysis

Statistical analysis was performed using NCSS 2007 (Number Cruncher Statistical System, Utah, USA). Descriptive statistical methods (mean, standard deviation) were used to summarize the data. The Kruskal-Wallis test was used for intergroup comparisons, and Dunn's multiple comparison test for subgroup comparisons. The Mann-Whitney U test was performed to compare two independent groups, and the chi-square and Fisher's exact tests were used to compare qualitative data. The areas under the ROC curves for urinary CTGF/creatinine (pg/mg × 10³) and CTGF (pg/ml) were calculated, together with the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and likelihood ratio (LR+).

Results

There was a statistically significant difference between the groups in urinary creatinine and CTGF levels and in the urinary protein/creatinine and CTGF/creatinine ratios (Table III). The mean urinary creatinine level was significantly lower in the VUR group than in the control group (p=0.003).
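A minimal sketch of the statistical workflow described above (Kruskal-Wallis across the three groups, followed by ROC analysis of the uCTGF/creatinine ratio) using SciPy and scikit-learn; the arrays below are placeholders, not the study data.

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
# Placeholder uCTGF/creatinine ratios for control, VUR-only, VUR+scar groups
control = rng.lognormal(mean=1.0, sigma=0.4, size=62)
vur_only = rng.lognormal(mean=1.4, sigma=0.4, size=24)
vur_scar = rng.lognormal(mean=1.5, sigma=0.4, size=46)

h, p = kruskal(control, vur_only, vur_scar)   # three-group comparison
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# ROC: can the ratio separate VUR+scar patients from controls?
y_true = np.r_[np.zeros(len(control)), np.ones(len(vur_scar))]
scores = np.r_[control, vur_scar]
print("AUC =", roc_auc_score(y_true, scores))
fpr, tpr, thresholds = roc_curve(y_true, scores)
best = np.argmax(tpr - fpr)                    # Youden index picks a cutoff
print("cutoff:", thresholds[best], "sens:", tpr[best], "spec:", 1 - fpr[best])
```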
The mean urinary CTGF level and the mean urinary protein/creatinine and CTGF/creatinine ratios were significantly lower in the control group than in the VUR-with-scar and VUR-without-scar groups. There were no significant differences between the VUR-with-scar and VUR-without-scar groups in these values.

Predictivity of the urinary CTGF/creatinine ratio and urinary CTGF level in the VUR+scar and VUR groups compared to the control group

The areas under the ROC curve for the urinary CTGF/creatinine ratio and the urinary CTGF level in the differential diagnosis of VUR with scar versus the control group were 0.646±0.054 (0.539-0.753) and 0.722±0.048 (0.627-0.817), respectively. For VUR without scar versus the control group they were 0.768±0.065 (0.640-0.895) and 0.737±0.064 (0.612-…), respectively. The sensitivity, specificity, positive and negative predictive values, and likelihood ratios of the uCTGF/creatinine ratio and uCTGF levels in the differential diagnosis are shown in Table IV.

Discussion

Vesicoureteral reflux (VUR) is among the most common congenital urological anomalies in children. Its incidence is about 1% in newborns but as high as 30-45% in children with urinary tract infection (UTI). 18 It has been associated with an increased risk of UTI and renal scarring and is diagnosed mostly after UTI. In a cohort of pediatric patients with UTI, 68% of whom were infants, VUR was diagnosed in 33% of cases. 19 Reflux nephropathy (RN) is defined as the formation of renal parenchymal scarring, usually associated with UTI, in patients with VUR. However, renal parenchymal scarring can also be observed with UTI in the absence of VUR, or with VUR in the absence of UTI. Children with VUR are more likely to develop pyelonephritis and renal scarring than those without VUR, and children with VUR of grade III or higher are more likely to develop scarring than children with lower grades of VUR. 6 The risk of renal scarring involving more than 25% of the renal parenchyma is significantly higher in patients with grade III-IV VUR (40%) than in those with grade I-II VUR (14%) or no VUR (6%). 20 VUR can therefore cause anxiety in both parents and physicians, because long-term follow-up and treatment are necessary to deal with its complications. The use of noninvasive tests and methods for the diagnosis and follow-up of the disease would bring convenience to families as well as physicians.

Connective tissue growth factor (CTGF) has an important role in embryogenesis, angiogenesis, wound healing, and tissue repair, especially mesangial repair after kidney injury. Urinary CTGF levels (like TGF-β) may be useful in monitoring renal diseases. Studies on CTGF have been conducted mainly in adults with diabetes. In a study conducted in 2003, Gilbert showed a relationship between uCTGF and the severity of diabetic nephropathy and emphasized the importance of this indicator; in the context of its known profibrotic effects, these findings suggest that CTGF contributes to the chronic tubulointerstitial fibrosis accompanying proteinuric renal diseases. 14 In our study, the urinary CTGF level and the urinary CTGF/creatinine ratio were found to be increased in patients with VUR compared to the control group. However, urinary CTGF levels provided no additional discrimination in the presence of RPS.
In human studies, urinary CTGF expression has been reported to correlate positively with the severity of diabetic nephropathy and to predict the progression of microalbuminuria to proteinuria. 21 This suggests that proteinuria and the urinary CTGF level may guide follow-up with respect to the development of RPS. Proteinuria is one of the complications of RN and is more common in adult than in pediatric patients with VUR. Microalbuminuria has been found to be associated with renal scarring in 51% of pediatric patients in the early stages of glomerular injury, before progressive renal damage and renal failure develop. 22 In our study, proteinuria levels were lower in the control group than in the VUR-with-scar and VUR-without-scar groups. This supports the view that proteinuria begins before RPS develops and may be a good indicator of disease progression at follow-up. In experimental diabetic nephropathy, overexpression of CTGF in the glomeruli and tubulointerstitium increased glomerulosclerosis, tubulointerstitial fibrosis, and proteinuria. [25][26][27] Urinary CTGF levels normalized in these patients, consistent with the improvement of tubular dysfunction under antiproteinuric measures. 29 These findings suggest that a fibrotic process persists even in the absence of RPS in patients with VUR. However, the lack of a significant difference between the VUR and VUR-with-RPS groups may mean that the uCTGF level is not a sufficient marker for the pre-diagnosis of RPS, or is not sensitive enough to reveal a small scar not yet detectable on DMSA. Furthermore, it is not known whether increased uCTGF, one of the factors responsible for mesangial repair after renal injury, prevents the progression of kidney disease. Since there is no other study on the relationship between the presence of VUR and uCTGF levels, no comparison could be made to explain the possible link between this marker and RN-related fibrosis. In conclusion, our study suggests that this noninvasive test may be useful in monitoring RPS associated with VUR, but multicenter studies in which a greater number of patients can be followed are needed for standardization purposes.
2019-09-28T13:02:39.409Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "60cfde694c2c5ad3a90f90c64dfeedb04185ae26", "oa_license": null, "oa_url": "http://www.turkishjournalpediatrics.org/pdf.php?&id=1940", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f9778d750de87af616d37af437b100d4675dd0b6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15629972
pes2o/s2orc
v3-fos-license
A Novel Flexible Model for the Extraction of Features from Brain Signals in the Time-Frequency Domain

Electrophysiological signals such as the EEG, MEG, or LFPs have been extensively studied over the last decades, and elaborate signal processing algorithms have been developed for their analysis. Many of these methods are based on time-frequency decomposition to account for the signals' spectral properties while maintaining their temporal dynamics. However, the data typically exhibit intra- and interindividual variability. Existing algorithms often do not take into account this variability, for instance by using fixed frequency bands. This shortcoming has inspired us to develop a new robust and flexible method for time-frequency analysis and signal feature extraction using the novel smooth natural Gaussian extension (snaGe) model. The model is nonlinear, and its parameters are interpretable. We propose an algorithm to derive initial parameters based on dynamic programming for nonlinear fitting and describe an iterative refinement scheme to robustly fit high-order models. We further present distance functions to be able to compare different instances of our model. The method's functionality and robustness are demonstrated using simulated as well as real data. The snaGe model is a general tool allowing for a wide range of applications in biomedical data analysis.

Introduction

Electrophysiological brain signals are widely studied to gain insight into the inner function of the brain. The electroencephalogram (EEG), as an example, has been analyzed for decades and is particularly popular because of its noninvasiveness, wide availability, relatively small cost, and excellent temporal resolution, which enables capturing the fast neural dynamics. Because it is known that electrophysiological brain signals exhibit important spectral characteristics, frequency transforms are often applied. However, since in general brain signals do not possess the statistical property of stationarity, time-frequency transforms are of special interest. Such methods are able to represent a given signal jointly in the time-frequency domain, called its time-frequency representation (TFR). Thereby the signal's spectral components can be analyzed in relation to its temporal dynamics. In a wide sense, TFRs can be interpreted as images, in general containing complex pixel intensities.

An important feature of biological signals, and particularly brain signals, is their inter- and intraindividual variability. That is, under fixed experimental conditions, the obtained signals exhibit heterogeneity not only between groups of subjects, but also between subjects within the same experimental group and even within the same subject between multiple experimental trials. Existing signal analysis techniques either do not take this issue into account adequately or usually treat it by defining frequency bands of interest rather than single frequencies, and similarly time intervals instead of sharp instants. However, this strategy requires a priori knowledge about the variability to appropriately set the interval widths, and it is imprecise because it blindly includes all information contained in that time/frequency region. A good example is the time-frequency coherence analysis of two given input signals, for instance by means of the cross short-time Fourier transform [1], the cross Wigner-Ville distribution [2], or the wavelet coherence [1,3]. All of these methods relate both signals at fixed time/frequency (or scale) locations.
Thus, if signal A exhibits the same neural activation as signal B, but signal A's pattern is shifted slightly in frequency, none of the abovementioned techniques will be able to find the strong similarity of A and B. Although coherence estimation and other rigid strategies have been successfully applied "for more than 30 years" [4], this issue has inspired us to develop a general flexible method of pattern analysis and corresponding feature extraction in electrophysiological TFR data. By abstracting from the TFR images and working with the representation of a TFR pattern, numerous applications in biomedical signal processing emerge. TFR patterns quantify neural activity and therefore extract useful features for subsequent analyses. The model proposed in this work goes even further by offering interpretable parameters. TFR patterns reduce dimensionality by representing neural activity in a wide spectrotemporal region by comparably few quantities. Pattern-based outlier detection has the potential to become a useful tool for data quality assurance. In ongoing studies we are employing the presented method, for instance, to estimate functional brain connectivity by means of a pattern-based approach.

In the following, we present the developed neuro-inspired interpretable model, which is able to capture general time-frequency patterns. We use solely EEG data for demonstrations here, but our method is applicable to general electrophysiological signals or even to other signals showing similar behavior. Section 2 is devoted to developing our idea by extending the multivariate Gaussian model. Algorithms for robustly fitting the novel model to time-frequency representations are presented. A strategy for finding an appropriate model order is given, and distance functions are defined which quantify (dis-)similarity of two given models. These methods are tested in Section 3, where real as well as simulated data are used to demonstrate our technique's functionality and robustness.

Methods

While our technique is not restricted to specific time-frequency distributions, we employ the smoothed pseudo Wigner-Ville distribution [5] in this work. This is a quadratic transform estimating signal power in the time-frequency domain, whereby all quantities in this work are real numbers. The transform generates quite smooth TFRs, which means that neighboring pixel values are correlated. Although only positive values can be interpreted as signal power, the Wigner-Ville distribution in general also introduces negative values [6]. These data properties will be taken into account by our method. We will refer to time-frequency representations as mappings TFR: ℝ² → ℝ which estimate signal power for each point in the time-frequency domain.

Of the numerous ways to quantify TFR patterns, we choose to fit a parametric surface to the data. Because of the spatial correlation inherent in the data, traditional regression assumptions about independence of observations do not hold here [7]. This absence of strong gradients in the TFR images also invalidates most image feature extraction techniques, which are often based on edges and texture [8]. Our model, however, is especially designed for spatially correlated data; furthermore, its parameters are interpretable. These quantities are useful features which embody important information about the underlying signal and thereby considerably reduce data dimensionality. Using our method, feature extraction can be fully automated, and no training data are necessary.
Nevertheless, a priori information can be incorporated fairly easily. In the following, we propose an extension of the well-known Gaussian model for TFR analysis.

2.1. The Gaussian Model. The Gaussian model for multivariate data is defined by

g(x) = o + a · exp(−(x − μ)ᵀ Σ⁻¹ (x − μ)),  (1)

with o being a constant additive offset, a the amplitude relative to the offset, μ the constant n-dimensional mean vector, and Σ denoting a symmetric positive definite matrix. Positive definiteness ensures that the argument to the exponential function is always negative; additionally, we know that exp(z) is bounded by zero and one for negative z. Therefore, the exponential factor scales the final amplitude between 0 and a relative to the offset o. The term (x − μ)ᵀ Σ⁻¹ (x − μ) is also known as the squared Mahalanobis distance of x with respect to μ and Σ. Because in our context this function represents arbitrary data, in contrast to statistical distributions, μ will also be called the position vector, Σ is the spread matrix, and its entries are denoted spread parameters.

Gaussian models are quite robust in various ways. Firstly, the model will be shaped like a peak for all possible parameter values by imposing the constraint that Σ (and thus also Σ⁻¹) is symmetric positive definite; thereby, the model will never be able to completely "degenerate". Because the model is not flexible enough to fit small local variations of an expected pattern, the Gaussian model is relatively insensitive to local data outliers and is also unsusceptible to overfitting. An additional aspect of robustness is that extreme peak deformations are directly reflected in extreme parameter values. Thereby degenerated models can be easily detected or may even be prevented by imposing parameter constraints.

2.1.1. Interpretability. The Gaussian model is well suited to extract bivariate peaks from brain signals' TFR data, reflecting short intervals of neural excitement in a specific frequency range. An instance of the above-described surface, in the bivariate case, is fully identified by its parameter vector

θ = (o, a, μ₁, μ₂, σ₁₁, σ₁₂, σ₂₂).  (2)

The absolute peak height o + a, the peak position (μ₁, μ₂)ᵀ, and the peak orientation can be derived from the parameter vector. Further relevant quantities are the temporal peak onset, peak offset, and peak duration (as the difference of the previous two).

2.2. Extending the Gaussian Model: The snaGe Model. As already mentioned, the Gaussian model's robustness comes at the cost of inflexibility. While some local effects in TFR data
Now the idea is to not use only one, but peak points ( ∈ ℝ , … , interpolated by a smooth ( -dimensional curve of peaks. A way to think of the surface in Figure 1 modeled by snaGe is to "shape" it by sliding an -variate Gaussian model along the curve, its peak point being connected to the curve and thus varying in height ( ) and position (vector ). ereby complex smoothly "bent" patterns of data with varying amplitude (dependent variable) can be captured. e term snaGe is inspired by these snake-like forms. Analogously to the Gaussian model, an spread matrix determines the model's shape, which is a surface for . e tradeoff between robustness and �exibility can be controlled by choosing the number of peak points . Using many peak points will allow for good �ts to complex patterns but will also increase the danger of over�tting. �hoosing a small yields a robust model, but its ability to capture complex patterns will be limited. By setting , snaGe reduces to the traditional Gaussian model as a special case. Regarding the standard Gaussian model, each function value ( ( is fully determined by the Mahalanobis distance of the point to the unique mean point . But since the snaGe model offers in�nitely many mean points, the question arises how to calculate the surface values. is issue will be addressed in the next section, where a formal de�nition is given. ������ �or�a� �e�nition� Let the "number of peak points" be denoted by ∈ , . Let , ℝ denote a smooth -dimensional curve of "means, " and let , ℝ be a smooth one-dimensional curve of "amplitudes" along . Let further the "offset" ∈ ℝ, and let Σ ∈ ℝ be a symmetric positive de�nite matrix. �e�ne F 1: Example of an instance of the snaGe model, in the bivariate case. A surface plot is shown as well as its two-dimensional projection (colored contours). three-dimensional points were smoothly interpolated to yield a "curve of peaks" (curve connecting the circles). is curve's 2d projection is ( , the black line. A surface is determined by the spread parameters (dashed 2d ellipses), controlling the shape of the exponential �attening to both sides of the curve. Note that ( is a family of traditional Gaussian models, parameterized by the curve parameter . In order to construct a function which is independent of , we de�ne * ( arg max ∈ , e snaGe model is then given by e function * ( de�nes that traditional Gaussian model which assigns to the largest absolute amplitude relative to the offset among all members of the Gaussian family ( . is is necessary to cope with positive as well as negative ( . In TFR analysis, ( can be restricted to only positive values (see Section 2.3.1), in which case (5) in fact simpli�es to For ( as well as ( the max( function is necessary in case ( is a "near self-intersecting" curve, in the sense that ‖ ( − ( ‖ is small, but | ( − ( | is large. Figure 2 illustrates such an exemplary scenario. We assume that the diagonal entries of the spread matrix Σ, that is, the spread along each dimension, are sufficient to control a TFR pattern's "width. " erefore, we �x offdiagonal entries to zero for the sake of robustness. ereby, the Mahalanobis distance in (3) reduces to the weighted Euclidean distance. e two curves and are yet to be de�ned in terms of discrete parameter values. 
For good parameter interpretability, we choose to form both curves by interpolating P points p⁽ʲ⁾ ∈ ℝⁿ⁺¹, j = 1, …, P, by B-splines of degree ≤ 3, which yields the curve

c(t) = (m(t), a(t)), with c(t_j) = p⁽ʲ⁾, j = 1, …, P.  (7)

By combining cubic B-splines with the Gaussian shape, we obtain a sufficiently smooth model which inherits both the splines' flexibility and the Gaussian standard model's robustness. Our model further inherits the B-splines' local control property; that is, varying a p⁽ʲ⁾ will affect the model only in the p⁽ʲ⁾'s vicinity. Additionally, the degree of flexibility can be adapted to the data at hand by varying P from 1 (single Gaussian peak) to arbitrary flexibility for large P. An instance of the bivariate snaGe model with P points (or of order P) is fully represented by the parameter vector of length 3 + 3P:

θ = (o, σ₁₁, σ₂₂, p⁽¹⁾, …, p⁽ᴾ⁾).  (8)

While the offset and the spread parameters are mainly responsible for the prediction of data values, the curve interpolating the p⁽ʲ⁾ is directly interpretable, as it models the main path of peaks in the data. In the next section we show how to fit the model to data in a robust manner.

2.3. Fitting the Model to TFR Data. Given a time-frequency representation TFR(x), x ∈ ℝ², we aim to find a parameter vector θ so that the respective model fits the data "best", in the sense that it minimizes a cost function. We use the sum of squared differences of the data and the modeled surface:

SSE(θ) = Σ_x (TFR(x) − s_θ(x))².  (9)

The parameters are implicitly represented by s_θ in the above formula. This quantity is also called the "sum of squares due to error", which is zero if the model perfectly fits the data. SSE is a nonlinear function of θ. Using squared differences pronounces outliers, but these are not expected to occur frequently in our smooth TFR data.

In order to find a locally minimum solution of SSE, a nonlinear least squares algorithm implemented in the MATLAB Optimization Toolbox, lsqnonlin [10], is employed. Given an initial parameter vector θ₀ and the cost function SSE, this optimizer produces a sequence of models; the iteration hopefully converges to a θ* with minimum cost, that is, best resemblance between model and data. We attempt to provide advantageous starting conditions for the optimizer by preprocessing the TFR and by obtaining an initial parameter vector θ₀ which is expected to be close to the optimum with respect to SSE. Moreover, an iterative refinement scheme is proposed to be able to robustly fit models of high order. The process of fitting is outlined in the following subsections.

2.3.1. TFR Preprocessing. TFR data are badly scaled, showing differences of several orders of magnitude in values of time, frequency, and signal power, which affects optimization performance [11]. To address this problem, lsqnonlin offers a way to take into account typical values for each dimension for gradient estimation. Also concerning this issue, any Euclidean distance operating on TFR data in our algorithms is weighted appropriately. Furthermore, smooth objective functions are desirable, so that the low-order Taylor approximations used during optimization resemble the cost function in a relatively large neighborhood around the current point. To this end, the TFR images are smoothed and subsampled, which has the additional benefit of faster cost function evaluations. Finally, since negative values in the time-frequency domain are not interpretable, they are usually set to zero.
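A Python sketch mirroring the model (6) and the cost function (9), with the curve parameter t discretized and lsqnonlin replaced by SciPy's least_squares. The spline handling is simplified (one spline through the P peak points, P ≥ 2 assumed) and all names are ours, not the authors'.

```python
import numpy as np
from scipy.interpolate import make_interp_spline
from scipy.optimize import least_squares

def snage_surface(theta, X, P, n_t=200):
    """Evaluate a bivariate snaGe model on points X of shape (N, 2).

    theta = (o, s11, s22, p1_t, p1_f, p1_a, ..., pP_t, pP_f, pP_a), cf. (8).
    The curve of peaks is a B-spline through the P points; the surface
    value is the maximum over the discretized Gaussian family, cf. (6).
    """
    o, s11, s22 = theta[0], theta[1], theta[2]
    pts = np.asarray(theta[3:]).reshape(P, 3)      # (time, freq, amplitude)
    k = min(3, P - 1)                              # spline degree <= 3
    curve = make_interp_spline(np.linspace(0, 1, P), pts, k=k)
    c = curve(np.linspace(0, 1, n_t))              # discretized curve, (n_t, 3)
    m, a = c[:, :2], c[:, 2]
    # weighted squared distance of every data point to every curve point
    d2 = ((X[:, None, 0] - m[None, :, 0]) ** 2 / s11
          + (X[:, None, 1] - m[None, :, 1]) ** 2 / s22)
    return o + np.max(a[None, :] * np.exp(-d2), axis=1)

def fit_snage(tfr_vals, X, theta0, P):
    """Least-squares fit of the snaGe parameters, cf. (9)."""
    res = least_squares(lambda th: snage_surface(th, X, P) - tfr_vals, theta0)
    return res.x
```

Note that this sketch implements the positive-amplitude simplification (6), not the general maximum-magnitude rule (5); theta0 would come from the optimal-path initialization of the next subsection.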
Finally, since negative values in the time-frequency domain are not interpretable, they are usually set to zero. F 3: Illustration of an optimal horizontal image path (red line) found by a dynamic programming algorithm. e sum of signal power values along the red line is higher or equal to that of any other horizontal path. e trajectory is smoothed (black) for the robust extraction of the initial peak points' time-frequency coordinates. e background TFR image is computed from simulated data, whose three consecutive peaks of activity are correctly connected by the path. of the cost function's (unknown) global minimum, that is, a vector whose cost is already low. To this end, the constant offset and the spread parameters are initialized to , 11 ( max − min )/5 and 22 ( max − min )/5 if the underlying TFR's time and frequency axes are bounded by min , max and min , max , respectively. One may choose any sensible values alike, but the spreads should not be initialized too small in order to obtain a generalizable model. Yet, more thought has to be put in choosing the number and coordinates of the peak points ( ) . In the following we propose a way to compute initial estimates of the time, and frequency coordinates of all ( ) directly from the data. Refine A reasonable approximation to the unknown optimal curve of peaks can be found by tracing a path through the TFR image (TFR img) from le to right, which runs through areas of high pixel intensity. More precisely, the sum of all pixel intensities along this path should be as high as possible. is is an optimization problem in turn, yet its solution can be computed in quadratic time complexity (provided that the path's slope is bounded) by a dynamic programming algorithm, see [12]. Because a global optimum is guaranteed to be found, this strategy is insensitive to local outliers and noise. To this end, a similar approach as described in [13] is employed. Following the notation therein, we de�ne our energy function to be equal to the TFR values themselves, that is, ( ) . A horizontal path * (called seam in [13]) which maximizes (this is in contrast to [13], where minimum energy seams are computed) this simple cost function is found by dynamic programming. See Figure 3 for an example. Additionally, the paths are constrained by imposing an upper bound on their slopes. is value depends on the time-frequency resolution here and once more represents a compromise between robustness and �exibility. Given the found path * and the desired model order, evenly spaced samples are subsequently drawn from a smooth approximating curve to obtain estimates of the �rst two coordinates of the ( ) , 1 . We choose to empirically set the ( ) s' last components, interpretable as amplitudes relative to the initial constant offset , to max( (TFR) ) − max( (TFR) ). Once a parametric representation of the data is available, its accuracy can be improved in a step-wise manner, as is presented in the following section. ������ ��e�a�i�e �e�nemen�� As already stated, the number of points ( ) controls the model's robustness which complements its ability to resemble complex patterns. erefore, the demand for a near-optimal initial parameter vector increases with the model order . Employing the optimal image path method described in the previous section yields a "reasonable" estimation, but sampling equidistant points ( ) , 1 from the resulting curve is a simpli�cation. 
In fact, it can be observed that the optimizer tends to concentrate the ( ) in time-frequency regions of high signal variability. For low model order , this shortcoming of our initial parameter estimation algorithm can be compensated easily by the optimization algorithm, but it may become a problem for increasingly �exible models. For this reason we propose an iterative scheme. (1) Find initial parameters by means of an optimal path (see Section 2.3.2) for a �rst, robust model of low order . Let . (2) Fit the model to the data to obtain optimal parameters * . (3) Construct the optimal curve of peaks ( ) by interpolation of the peak points (see Section 2.2.1). (4) Obtain the 1 1 peak points for a re�ned model by uniformly sampling (with respect to the spline's sites ) the curve computed in the previous step. e curve found in steps 2 and 3 will exhibit smaller gradient magnitude, that is, traversal speed, in areas of high signal variability than in other regions. We aim at maintaining the curve-de�ning points' optimum distribution found by the �tting algorithm and at enhancing the model's �exibility mainly in these areas. It turns out that simply by uniformly sampling the �tted curve (step 4) we obtain a new interpolated curve which retains these properties. An application of this algorithm is demonstrated in Section 3.1, where the resulting sequence of nested models is evaluated. Model Distance. In this section we propose two functions for calculating the distance between two models ( 1 , ( 2 regarding (dis-)similarity of shape. Distance measures are necessary, for instance, to quantify how well the data exhibit an expected pattern. We will also employ these functions to assess our model's robustness. Distance functions which are based solely on the curve of peaks ( were found to be quite effective. Other possibilities include parameter vector distances and pixelwise differences of signal power of the models' generated data. By comparison, curve-based distance functions have the advantage of being able to interrelate models of different orders. Additionally, they are not in�uenced by the less informative parameters (offset and the entries of Σ). A popular distance measure for parametric curves is the Fréchet distance [14]. In the continuous case, the Fréchet distance of two parametric curves 1 ( and 2 ( is de�ned by _ max Here, ( and ( are monotone reparameterizations of the two curves, and ( denotes (weighted) Euclidean distance. In words, we search for those reparameterizations which make the curves the most similar with respect to maximum point-wise Euclidean distance along the curves. is maximum for these reparameterizations is returned as the two curves' continuous Fréchet distance. In practice, the discrete Fréchet distance is frequently applied, whose computation is based on dynamic programming once more [15]. In the discrete case, an additional distance function _sum ( 1 , 2 can be obtained by replacing the max function with a sum over . at way, _sum represents an average distance, being less prone to outliers in the curves. Real Data. We demonstrate the work�ow to determine the appropriate model order by �tting a TFR of real EEG data in this section. Typically we determine the necessary model complexity by �tting data with good signal to noise ratio (SNR) in order to prevent the overestimation of . For example, one possibility to achieve sufficient data quality is to average several TFRs which are expected to show similar patterns. 
Results

3.1. Real Data. We demonstrate the workflow to determine the appropriate model order by fitting a TFR of real EEG data in this section. Typically we determine the necessary model complexity by fitting data with a good signal-to-noise ratio (SNR) in order to prevent the overestimation of P. For example, one possibility to achieve sufficient data quality is to average several TFRs which are expected to show similar patterns. The averaged TFR of real EEG data shown in Figure 5 will guide the following explanations. The depicted brain signals, located in the lower frequency bands, were recorded from the temporal brain region during a face recognition experiment. These data exhibit a pattern of activity which is too complex to be captured by a traditional Gaussian model.

Figure 5: TFR of real EEG data which is fitted by iteratively refined models. Multiple local peaks are visible which are connected by a path of increased activity; this forms a complex pattern.

The iterative scheme described in Section 2.3.3 is employed to fit models of increasing flexibility to the high-quality data. An optimal path (see Section 2.3.2) estimates the initial parameters for the first, least flexible model of order P₀. A minimum value of P₀ = 3 is necessary to model bent patterns. Since the appropriate P is still unknown, a sufficiently large number P_max = 7 is chosen for the refinement. At each refinement stage the respective model is evaluated, and in the end the most suitable

P* ∈ {P₀, …, P_max}, P₀ ≤ P* ≤ P_max,  (11)

is chosen as the model order for future fittings on lower-quality data. Model evaluation is realized by three measures: the cost function value (SSE, see Section 2.3), the coefficient of determination R², and its adjusted version R²_adj [16]. Although the use of quantities based on the coefficient of determination is discouraged for nonlinear models [17], they are applied here nonetheless for two reasons: they are found to perform well for our purposes, and the proposed alternatives (AIC [18] and BIC [19]) are not easily applicable here. This is because the assumption of normally distributed residuals often does not hold, which is supported by a highly significant Shapiro-Wilk test [20] (p on the order of 10⁻³) for this experiment. Figures 6, 7, and 8 illustrate the results.

The results show that for the data at hand a model of order P* = 4 is sufficient to capture the variability. Having determined the maximum model complexity on high-quality data, such a model can now be fitted to the rest of the data. If the TFRs are not expected to vary substantially, as when fitting a model to signals from several nearby sensors, a previous fit may serve as the initial model. However, if, for instance, multiple data segments of the same sensor are to be fitted, the TFRs' patterns may vary strongly; in this case, initial parameters should be chosen depending on the data by using the method of optimal paths described in Section 2.3.2. Since in this example P* = 4 is a quite moderate number, the iterative refinement may also be skipped. However, in general we would start with P₀ = 3 or P₀ = 4 and refine up to the determined P*, as proposed in Section 2.3.3.

3.2. Synthetic Data. We want to assess our model's robustness by simulating data and measuring how strongly the model is affected by additive Gaussian noise. To this end, artificial data are created in the time domain, and their TFRs are computed, to which our model will be fitted.

Description of the Simulated Data. We created a signal consisting of three consecutive oscillations, representing an alpha-theta-alpha EEG pattern at 10 Hz/4 Hz/10 Hz, respectively, over a time span of 2.5 seconds; the simulated sample rate is 250 Hz. A plot is shown in Figure 9. These data are quite challenging for our model because three distinct peaks emerge in the TFR, which could be more appropriately modeled by a mixture of independent Gaussian peaks.
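The simulated alpha-theta-alpha signal can be reproduced along these lines (the stated parameters; the segment boundaries at roughly equal thirds are our assumption):

```python
import numpy as np

fs = 250                        # sampling rate [Hz]
t = np.arange(0, 2.5, 1 / fs)   # 2.5 s time axis
freqs = [10.0, 4.0, 10.0]       # alpha-theta-alpha pattern [Hz]
segments = np.array_split(np.arange(t.size), 3)  # ~equal thirds (assumed)

signal = np.zeros_like(t)
for idx, f in zip(segments, freqs):
    signal[idx] = np.sin(2 * np.pi * f * t[idx])
```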
However, we want to demonstrate the flexibility of the snaGe model, which should also be able to cope with patterns of this form. For the following experiments we chose to start the fitting with a model of sufficient order to account for the pattern's complexity and to perform one refinement step. Initial parameters are estimated by finding optimal paths, which means that no a priori information about the known optimal model is passed to the fitting procedure other than the number of peak points to use. We define the optimal model by fitting the noise-free simulation in the same way. The distance measures from Section 2.4 are used to determine how well the simulated pattern is found.

Noise Experiment. In this experiment we added Gaussian noise, appropriately filtered with respect to the sampling frequency, to the simulated data in the time domain. Signals exhibiting signal-to-noise ratios of −15 dB up to +10 dB were generated in steps of 2.5 dB. At each SNR, ten distinct noise realizations were created to obtain representative results. This noise, independent in the time domain, produces correlated noise in the time-frequency domain due to smoothing; therefore, the pattern shown in Figure 9 will be distorted. This experiment serves to assess how strongly our algorithm is affected by pattern variability, that is, to investigate its robustness. Small pattern distortions should ideally alter the optimal model only slightly, reflecting its robustness and avoidance of overfitting. We further want to find out to what degree our model is able to find the simulated pattern at all.

We note here that adding noise increases the TFRs' maximum amplitudes exponentially, which strongly affects the comparability of different models. Without normalization, one would observe an exponentially decreasing distance for increasing signal-to-noise ratio; but this would merely reflect the decreasing data amplitudes and contain no information about the quality of fit. However, normalizing to maximum data values is not an appropriate option either, because for negative SNRs this would keep the noise constant while exponentially shrinking the pattern's pixel intensities. Even if the optimal model were perfectly recovered from the noisy simulation, high distances would arise. Only if signal power is excluded from model distance estimation are the returned values useful representatives of how well the pattern was found. The snaGe's robustness to noise with respect to the pattern's power is therefore not regarded here. This is done by setting the third dimension of the path of peaks to zero during Fréchet distance computation.

Figure 10 visualizes the results. Both curves of mean distance decrease consistently with improving data quality. Convergence to the optimal model seems to require high signal-to-noise ratios: at 7.5 dB, the distance measures' variances fall off, reflecting the point of reliable pattern extraction. Apparently, the noise and interferences introduced in this experiment considerably impair the fitting process. In Figure 11, this issue is investigated by example: at the positive SNR of 5 dB, where distance variances across the noise realizations are still high, the fit which exhibits the largest distance is plotted. The pattern was in fact found, but in a different way than was expected, which leads to high Fréchet distances. Nevertheless, this example shows the impact that different kinds of noise may have on the fitting process.
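Adding noise at a prescribed SNR, as in this experiment, can be sketched as follows (white Gaussian noise scaled to the target ratio; the band-limiting filter step is omitted for brevity):

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=np.random.default_rng()):
    """Return signal plus white Gaussian noise scaled so that
    10*log10(P_signal / P_noise) equals snr_db."""
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    noise = rng.normal(scale=np.sqrt(p_noise), size=signal.shape)
    return signal + noise

# ten realizations per SNR, from -15 dB to +10 dB in 2.5 dB steps
test_signal = np.sin(2 * np.pi * 10 * np.arange(625) / 250)
for snr in np.arange(-15, 10 + 2.5, 2.5):
    noisy = [add_noise_at_snr(test_signal, snr) for _ in range(10)]
```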
To get a better feel for the average ability to fit the pattern under the influence of noise, see Figure 12. At each noise level, the ten fitted models are averaged by computing the mean parameter vector. Shown is a sequence of mean models which progressively look more similar to the true pattern. In fact, the average fitting capability, concerning both the positioning of the peak points in the time-frequency domain and the estimation of surface values, is better than expected after having studied Figure 10. Apparently, although the mean distances are still decreasing at negative signal-to-noise ratios, they are already small enough for successful pattern extraction on average. An example is the subplot corresponding to SNR = 0 dB in Figure 12, which already clearly resembles the simulated pattern. This experiment shows that interferences between the desired signal and additive noise affect the fitting process quite strongly in the worst case. Positive signal-to-noise ratios of at least 7.5 dB are found to be necessary for reliable pattern extraction in this investigation. However, successful data modeling is also possible at lower SNRs, as is seen in the average case.

Discussion

The snaGe model is especially suited for time-frequency representations of electrophysiological signals because of their (expected) nonnegativity, their smoothness, and their patterns following a path of peaks. However, our robust model is able to cope with data which do not exactly meet these requirements. In order to retain robustness, we imposed several restrictions on our model, such as neglecting off-diagonal spread parameters and holding the spread matrix constant over the curve of peaks. An interesting question remains how the model's flexibility and robustness would be affected if these constraints were dropped; little effort would be necessary to include the stated extensions.

As is typical for nonlinear optimization problems, the choice of initial parameters is crucial to obtain satisfying results. Therefore, a priori knowledge about the optimal model can be incorporated by starting the optimization with a model which was previously fitted to similar data. Moreover, an algorithm based on an optimal path was developed to estimate initial model parameters directly from the data. Its robustness stems from the guarantee of finding the globally optimal path. However, this method is limited to positive peak polarity, since it tries to maximize the path's average amplitude. Additionally, the technique will not be able to find initial models which exhibit multiple contemporaneous components; in such a case, the nonlinear optimization algorithm, which was found to work well, must compensate. Further strategies for the estimation of initial parameters would be desirable. In particular, the extraction of the curve interpolation points p⁽ʲ⁾ from the optimal path possesses potential for improvement.

An open question is how we should deal with the spatial correlation of both the dependent variable and the residuals in a statistical inferential context. Further work is necessary to facilitate statistical testing, for instance, to assess the null hypothesis that an expected pattern is not contained in the data.

Figure 11: Fitted model (white curve) at SNR = 5 dB, which has the largest distance d_sum to the target pattern across all ten noise instances. The noisy TFR is shown in the background. The signal was successfully extracted, yet a high distance results due to a different connection of the three peaks compared to the pattern.
Concerning the presented measures of model distance, the Fréchet distances were found to be very useful for assessing model similarity in our experiments involving simulated noise. Their distinct advantage is their independence of model order and their disregard of the less interpretable parameters. On the other hand, spuriously high distances could be observed when in fact the pattern was found. This can be attributed to the fact that the simulated data exhibit three independent peaks, which violates the snaGe model's assumption of a connected path of peaks. Therefore, a combination of Fréchet values and a pixel-wise distance function based on the models' generated data seems advantageous.

When applied to noisy time signals, the snaGe model adapts too well to the corresponding smooth time-frequency representations. Because the optimized cost function does not take into account information about the expected pattern, the model simply tries to capture the TFR data as accurately as possible. Data preprocessing and TFR interference suppression are therefore extremely important. Adding penalty terms to the cost function and/or providing explicit initial parameters are ways to point the optimizer in the right direction. However, even without specifying a priori knowledge, the model was able to find the simulated pattern for low signal-to-noise ratios in the average case.

Figure 12: Mean snaGe models per simulated SNR. Models were averaged across the ten noise realizations by direct parameter vector averaging. The curve of peaks (white line) is shown, as well as the models' predicted data. Compare with Figure 9.

Conclusion

The analysis of time-frequency representations of electrophysiological signals calls for flexible methods accounting for inter- and intraindividual data variability. We present the flexible, robust, and interpretable model snaGe, which extends the established Gaussian model. Its ability to extract 3D features from time-frequency representations of electrophysiological data is demonstrated. However, the model applies to general multivariate data which exhibit similar behavior. In this work, several techniques to improve the model fitting performance are described. We show how to estimate start parameters directly from the data. An iterative scheme to refine optimized models is proposed so that high-order models can be robustly fitted. Experiments with real as well as simulated data demonstrate the snaGe model's robustness and flexibility. Under the influence of severe noise, the developed technique is best suited for patterns which are too complex to be appropriately captured by a Gaussian model, but still simple enough to facilitate robust fits. To summarize, due to its robustness and flexibility, the snaGe model possesses the potential to become a beneficial tool for practical EEG/MEG analysis, including functional brain connectivity analysis, outlier detection, time-frequency denoising, and feature extraction.
Stochastic Methods for Quantum Scattering

Quantum scattering at zero energy is studied with stochastic methods. A path integral representation for the scattering cross section is developed. It is demonstrated that Monte Carlo simulation can be used to compare effective potentials, which are frequently used in multiple scattering, with the exact result.

Introduction

Multiple scattering off nuclei is in general a complicated many-body problem, as target and projectile degrees of freedom are strongly coupled. The standard method for treating multiple scattering problems is the construction of an effective one-body optical model potential by eliminating the target degrees of freedom. Optical potential calculations have been widely and very successfully used in the past [1]. Despite their phenomenological success, there are severe shortcomings of these models. One example is the spectrum of kaonic atoms, where the shifts of the lowest level require a repulsive real part of the optical potential, in contrast to the results of conventional fits [2]. Another problem, which is conceptually even more severe, is the absence of reliable methods for calculating inclusive cross sections, to which optical potential models cannot be applied at all, as they are based on a restriction of the target Hilbert space. Because of these problems, alternative methods have to be studied. For calculating ground state properties of a many-body system beyond perturbative or mean field approximations, stochastic methods are well established. Starting from a path integral expression for the density matrix, an algorithm of Metropolis type or a Langevin simulation is used for path sampling [3]. The advantage of these approaches is that they provide results which are in principle exact and can be used to develop a better analytical understanding of the physical system under investigation.

Path integrals and scattering observables

We start from a Hamiltonian which can be decomposed into an internal target Hamiltonian H_int, a projectile kinetic energy, and a projectile-target interaction V. In case the projectile has no bound state, the ground state of the system is the zero projectile momentum scattering wave function Ψ_{0,k=0}(x, q). In the low temperature limit, the density matrix of the system is dominated by this state, eq. (1) [4]; here q denotes the target and x the projectile degrees of freedom, and E_0 is the target ground state energy. In pure bound state problems, convergence is controlled by the energy gap between the first excited state and the ground state. Here, the ground state of the system lies at the edge of a continuum. This fact manifests itself in the β^{3/2} factor in front of eq. (1). The slow convergence, as compared with bound state problems, requires rather long times β, which makes it necessary to choose observables and path sampling techniques carefully. From the left-hand side of (1), a path integral expression can be derived [5] in the standard way. Path sampling methods, however, do not yield the path integral directly, but give only paths sampled according to the normalized functional (2). This difficulty can be solved by measuring the functional (3), in which the interaction between projectile and target is removed from the numerator and replaced by an effective interaction U that acts only on the projectile degrees of freedom, as in the conventional treatment of multiple scattering physics. The advantage here is that the stochastic process can be used to test the quality of the effective potential U.
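For illustration of the kind of path sampling meant here, the following is a minimal sketch, not the paper's actual code: a forward Euler (Euler-Maruyama) Langevin update for a discretized one-dimensional Euclidean path under a potential V. The action discretization, the 1D toy potential, and all numerical parameters are illustrative assumptions.

```python
import numpy as np

def langevin_sweep(x, dtau, eps, V_prime, rng):
    """One forward Euler Langevin update of a discretized path x[0..N-1]
    (periodic boundary conditions, for simplicity).

    The drift is minus the gradient of the discretized Euclidean action
    S = sum_i [ (x[i+1]-x[i])^2 / (2*dtau) + dtau * V(x[i]) ], so
    dS/dx_i = (2*x_i - x_{i+1} - x_{i-1}) / dtau + dtau * V'(x_i).
    """
    grad = (2 * x - np.roll(x, -1) - np.roll(x, 1)) / dtau + dtau * V_prime(x)
    return x - eps * grad + np.sqrt(2 * eps) * rng.standard_normal(x.shape)

# Illustrative use: sample paths under a Gaussian well V(x) = V0 * exp(-x^2).
rng = np.random.default_rng(0)
V0 = -0.3
V_prime = lambda x: V0 * (-2.0 * x) * np.exp(-x ** 2)  # dV/dx
x = np.zeros(200)           # discretized path; beta = N * dtau
for sweep in range(10_000):  # equilibration, then measurement sweeps
    x = langevin_sweep(x, dtau=0.5, eps=0.01, V_prime=V_prime, rng=rng)
```

Functionals such as (3) would then be accumulated over the sampled paths; the quadratic growth of the autocorrelation time with β mentioned below is the practical cost of this simple scheme.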
The expectation value of O in P in the limit β → ∞ is given by the expression in eq. (4), where Φ_0 is the target ground state and ψ_{k=0} is the projectile scattering wave function for the potential U.

Example: Potential Scattering

To demonstrate the feasibility of this type of calculation, I discuss potential scattering. In this case, numerical integration of the Schrödinger equation provides exact results. Scattering off a Gaussian potential V_g is considered. As reference potential, a square well V_w is used, which is constant inside a finite radius and vanishes outside it; V_w plays the role of the effective potential U in (3). The parameters of the square well are fitted to reproduce the first two nonvanishing moments of the Gaussian potential. Fig. 1 shows the ratio of the cross sections of the two potentials, sr := σ_{V_w}/σ_{V_g}, as a function of V_0. The line is the exact result; the data points are obtained from a stochastic calculation at β = 100. Path sampling was performed with a simple forward Euler scheme Langevin algorithm [6]. 2.5×10^5 paths were used to measure O after an equilibration run of 2.5×10^4 updates. In the range of V_0 where the reference potential is already a good guess, the stochastic calculation reproduces the exact result within 2%. Where this is no longer true, results become worse. For V_0 = −0.5 the stochastic result deviates by about 17%. Note that at this value of V_0 the cross section is already σ = 274, because V_g develops its first bound state at V_0 = −0.66. There are two ways to improve the results in this region. One possibility is to increase β; this would require much longer calculation times, as the autocorrelation time of the Langevin algorithm increases like β^2. The other possibility is to improve the reference potential. By adjusting the parameters of the potential successively, one can obtain results from the stochastic calculation which do not deviate by more than a few percent from the exact result, although the same simulation parameters as for the first calculation were used.

Discussion

A new method for calculating elastic cross sections at zero projectile momentum was presented. The crucial point is that this method relies on the comparison of the full problem with a reference problem. This makes it possible to study questions related to the construction of effective potentials in nuclear multiple scattering by computer simulations, which seems to be the natural way for a nonperturbative treatment of many-body problems. The method can be extended to nonzero momentum by exploiting information contained in the β dependence of observables. Work in this direction, as well as multiple scattering calculations, is currently in progress. A severe shortcoming of this method is that at this stage it is not possible to calculate inelastic or inclusive cross sections. Development of stochastic methods for these problems seems promising, as scattering observables will not depend strongly on individual nuclear states due to summation over final states. The simple structure of experimental data, e.g. energy loss spectra [1], strongly supports this conjecture.
Single-molecule observations of RNA-RNA kissing interactions in a DNA nanostructure

RNA molecules uniquely form a complex through specific hairpin loops, called a kissing complex. The kissing complex is widely investigated and used for the construction of RNA nanostructures. Molecular switches have also been created by combining a kissing loop and a ligand-binding aptamer to control the interactions of RNA molecules. In this study, we incorporated two kinds of RNA molecules into a DNA origami structure and used atomic force microscopy to observe their ligand-responsive interactions at the single-molecule level. We used a designed RNA aptamer called GTPswitch, which has a guanosine triphosphate (GTP) responsive domain and can bind to the target RNA hairpin named Aptakiss in the presence of GTP. We observed shape changes of the DNA/RNA strands in the DNA origami, induced by the GTPswitch, into two different shapes in the absence and presence of GTP, respectively. We also found that the switching function in the nanospace could be improved by using a cover strand over the kissing loop of the GTPswitch or by deleting one base from this kissing loop. These newly designed ligand-responsive aptamers can be used for the controlled assembly of various DNA and RNA nanostructures.

Introduction

Structural diversity of RNA is one of the important properties of RNA molecules, which exhibit unique functions such as specific complex formation and catalysis [13-15]. A ligand-responsive kissing aptamer, called a guanosine triphosphate switch (GTPswitch), has been recently reported [16]. This GTPswitch has a GTP-binding domain and a kissing domain that binds to a target RNA loop (Aptakiss) in the presence of GTP [16]. Herein, we tried to visualize this unique ligand-responsive switching interaction between the GTPswitch and its counterpart Aptakiss at single-molecule resolution. Direct observation of interactions between biomolecules by using atomic force microscopy (AFM) is one of the practical methods for characterizing the properties of complex formation [17]. This single-molecule observation system should be suitable for observing the interactions of kissing aptamers and characterizing their properties [20-22]. The GTPswitch was generated based on a KG51 RNA kissing hairpin, which can bind to the Aptakiss only in the presence of GTP [16]. We elongated the 5′ end of each RNA molecule and hybridized these molecules to the supporting DNA strands. These constructs were incorporated into the DNA frame through four ssDNA linkers (Fig. 1b). When the GTPswitch binds to the Aptakiss upon the addition of GTP, the configuration change of the supporting DNA strands from the unbound "double-loop" to the "X-shape" should be observed in the DNA frame (Fig. 1c). We investigated the ligand-responsive binding of the GTPswitch to the Aptakiss by observing the structural changes in the supporting DNA strands in the DNA frame, and examined the experimental conditions to improve the switching function.

Preparation of RNA molecules

A template dsDNA containing a T7 promoter was used to prepare RNA by in vitro transcription. The sequences are shown in the ESI (Fig.
S1 and Table S1†). Transcription was performed in a solution containing 0.5 µM template dsDNA, 40 mM Tris-HCl (pH 8.0), 10 mM DTT, 23 mM MgCl2, 2 mM spermidine, 4.0 mM NTPs, and 2.5 U µL−1 T7 RNA polymerase (Takara Bio, Kusatsu, Japan) at 37 °C for 20 h. The transcribed RNA was purified by polyacrylamide gel extraction. The gel piece containing the target RNA was cut out from the gel and crushed. RNA was then extracted from the crushed gel pieces using elution buffer (0.3 M NaOAc buffer pH 5.2, 10 mM EDTA). The eluted RNA was collected by ethanol precipitation. The products were confirmed by gel electrophoresis.

Preparation of the DNA frame and incorporation of RNA molecules

The DNA frame and the DNA strands containing RNA molecules were prepared separately. The DNA frame and the two DNA strands were then annealed together. The DNA frame was prepared as described previously [22b]. Briefly, a sample solution containing 25 nM M13mp18, 125 nM staple strands (5 eq.), 10 mM Tris-HCl (pH 7.6), and 10 mM MgCl2 was annealed from 75 °C to 15 °C at a rate of −1.0 °C min−1 [19]. For the preparation of DNA strands containing aptamers, sample solutions containing 0.83 μM Aptakiss (or KG51, or GTPswitch), 0.17 μM supporting DNA strands (AC96 and AC32, or BD96 and BD32; see ESI Fig. S1†), 10 mM Tris-HCl (pH 7.6), and 10 mM MgCl2 were annealed from 75 °C to 15 °C at a rate of −1.0 °C min−1. After the first annealing, 8.0 µL of DNA frame solution, 6.0 µL of Aptakiss solution, and 6.0 µL of KG51 (or GTPswitch) solution were mixed and then annealed from 40 °C to 15 °C at a rate of −1.0 °C min−1. For this second annealing, the solution contained 10 nM DNA frame and 50 nM DNA strands containing aptamers. The DNA frames carrying the target RNA/DNA hybrid strands were purified by a gel filtration column (Sephacryl 400, GE Healthcare, Uppsala, Sweden).

AFM imaging of the kissing interaction

AFM images were obtained using a Dimension FastScan (Bruker AXS, Madison, WI) with a BL-AC40TS-CS cantilever (Olympus, Tokyo, Japan). Purified samples were diluted ten times using observation buffer. The observation buffer for Aptakiss-KG51 contained 10 mM Tris-HCl pH 7.6 and 10 mM MgCl2; for Aptakiss-GTPswitch (with cover strand), 10 mM Tris-HCl pH 7.0 and 10 mM MgCl2 (1 mM GTP or ATP); for the Aptakiss-GTPswitch mutant, 10 mM MOPS-KOH pH 6.5, 10 mM MgCl2, and 50 mM KCl (1 mM GTP or ATP). The diluted solution (10 µL) was adsorbed onto a mica plate for 5 min at room temperature and then washed three times using the same observation buffer to remove unadsorbed DNA strands and DNA frames. Scanning was performed in the same buffer solution in tapping mode.

Results and discussion

Assembly of the target RNA molecules in the DNA frame

We used a DNA frame to evaluate the ligand-dependent activity of the GTPswitch at the single-molecule level. The DNA frame has a cavity (approximately 40 nm × 40 nm), in which four connectors are introduced to anchor the DNA strands. A pair of kissing RNA hairpins was placed in the cavity by incorporation into individual supporting DNA strands (DNA strands AC and BD), which were tethered between the specific connectors. Each strand comprised three parts: long ssDNA (AC96 or BD96), short ssDNA (AC32 or BD32), and RNA that carried the designated sequence at its 3′ end (Fig.
1b). Here, we prepared three RNA/DNA hybrid strands: AC-Aptakiss, BD-KG51, and BD-GTPswitch. A pair of strands (AC-Aptakiss and BD-KG51 or BD-GTPswitch) was incorporated into the DNA frame through ssDNA linkers in the strands (Fig. 1b and S1†). Each linker a′, b′, c′, and d′ was connected to the corresponding connector a, b, c, and d, respectively. The binding of the RNA loops (kissing complex formation) was identified by configuration changes of the supporting DNA strands from the double-loop (unkissing) to the X-shape (kissing) (Fig. 1c). The difference between these structures was resolved in direct AFM imaging and quantified by statistical analysis of the AFM images (Fig. S2†).

Observation of the interactions between KG51 and the GTPswitch in the DNA frame

First, we examined the interaction of a pair of kissing RNA hairpins, Aptakiss and KG51, in the DNA frame (Fig. 2a). KG51 is known to be capable of binding to the Aptakiss without the need for any additional cofactors and was used as an appropriate positive control for evaluating the data from this observation system. Quantitative analysis of micrographs of this sample revealed a high percentage of the X-shaped structure (84.9%), suggesting that the KG51-Aptakiss system worked well in the DNA frame.

We next substituted KG51 with the GTPswitch to examine the ligand-dependent binding of the GTPswitch to the Aptakiss. Contrary to our expectation from the previous bulk experiment, 75.1% of the DNA frames were observed to have the X-shaped structure even in the absence of GTP; this percentage was only ∼10% lower than that obtained for KG51 (Fig. 2c and Table 1). This unexpectedly high binding might have occurred because of an interaction between the bases in the kissing loop of the Aptakiss and the complementary bases of the unfolded free aptaswitch that was not previously detected in solution [16]. We note that the two RNA sequences are located in relatively close positions in the DNA frame. The distance between the Aptakiss and the GTPswitch was estimated to be ∼10 nm. Assuming that the motion of each aptamer is limited to a sphere with a diameter of 10 nm, the hypothetical concentration can be estimated to be ∼1 mM. Although the molecular movement is constrained by fixation to the nanocavity, the molecules should behave as if they exist at a high concentration. This proximity effect may result in the ligand-independent binding.

Observation of the switching function of the GTPswitch with a cover strand

To improve the ligand dependency of the GTPswitch in the DNA frame, we used a cover strand. Such a strategy was recently demonstrated to be successful and improved the specificity of an aptamer to adenosine [16]. In our case, the chosen cover strand binds to the kissing loop of the GTPswitch and extends to part of the central loop that is the GTP binding site of the aptamer (Fig. 3a). This cover strand can be displaced by the addition of GTP, which allows the GTPswitch to bind to the Aptakiss in a ligand-dependent manner. The cover strand was hybridized to the GTPswitch by annealing, and the covered GTPswitch strand was then incorporated into the DNA frame together with the strand carrying the Aptakiss. In the absence of GTP, the percentage of the X-shaped structure in the AFM images was calculated as 64.4% (Fig.
3b). This value reflected a ∼20% decrease in the percentage of the X-shape and is much lower than the values mentioned above for KG51 and the GTPswitch. The results indicate that the cover strand reduced the ligand-independent binding between the GTPswitch and the Aptakiss. Next, we examined the effect of introducing the cover strand-bound GTPswitch to the Aptakiss in the presence of GTP (Fig. 3c). Adenosine triphosphate (ATP) was also used to investigate the ligand specificity of the GTPswitch (Fig. 3d). From the AFM images, 82.5% of the cover strand-bound GTPswitch was in the X-shape in the presence of GTP; this value is similar to that for KG51 (84.9%). By contrast, in the presence of ATP, only 66.5% was in the X-shape; this value is close to that in the absence of GTP (64.4%). These results indicate that GTP could selectively induce binding between the GTPswitch and the Aptakiss, whereas ATP could not (Fig. 3e and Table 2). The data are represented as the mean ± S.D. of triplicate experiments (n = 3). Moreover, we observed the binding in the presence of 0.1 and 0.5 mM GTP and ATP. In the case of ATP, there was no change in the proportion. On the other hand, we found that at 0.1 mM GTP the proportion of the X-shape decreased by ∼5% from the proportion at 0.5 and 1.0 mM GTP (Fig. 3f).

Observation of GTP switching in a mutant GTPswitch

When the Aptakiss and the GTPswitch were placed in the DNA frame, the two RNA hairpins bound easily because of the close packing in the nanospace. We tried to reduce the interaction of the RNA hairpins by using a mutant GTPswitch, in which one G was deleted from the kissing loop of the GTPswitch (Fig. 4a). After assembling the Aptakiss and the mutant GTPswitch strands, we used AFM to observe the formation of the X-shape (Fig. 4b-d). In the absence of GTP, the X-shape formation between the Aptakiss and the mutant GTPswitch was 44.0%, which indicated a significant suppression of the interaction compared with the X-shape formed with the usual GTPswitch (75.1%). To examine the switching ability, formation of the X-shape was observed in the presence of GTP. The percentage in the X-shape was observed to be 65.2%, which indicated a 21% increase in the binding of the GTPswitch and the Aptakiss when GTP was added. To confirm the ligand selectivity, we added ATP instead of GTP. In the presence of ATP, the percentage of the X-shape decreased by 19% to 46.4% compared with that observed in the presence of GTP (Fig. 4e and Table 3). These results indicate that the mutant GTPswitch preserved the switching ability and ligand selectivity, and that adjusting the association and dissociation of the kissing interaction of the RNA hairpins by deleting a nucleotide in the kissing domain was successful without losing the switching ability.

Conclusions

We performed single-molecule observations of kissing complexes in the nanocavity of the DNA origami frame. Intriguingly, in the closely spaced condition, the GTPswitch could bind to the Aptakiss even in the absence of GTP, in contrast to previous work [13,16].
This GTP-independent binding could be suppressed by the addition of a cover strand against the kissing loop of the GTPswitch, after which the GTP-dependent binding of the GTPswitch and Aptakiss was observed. The mutant GTPswitch also worked to control the kissing interaction and exhibited preserved switching ability and ligand selectivity. Although further optimization of the switching response is required, these findings support the potential applications of ligand-responsive kissing aptamers for dynamic systems that can be organized on DNA origami nanostructures. We believe that ligand-responsive kissing aptamers will enable us to regulate more global changes in nucleic acid nanostructures, such as programmed oligomerization into prescribed patterns.

Fig. 1 Single-molecule observation system for investigation of the interaction of kissing RNA aptamers using a DNA frame. (a) RNA aptamers used in this study: Aptakiss and its counterparts, the KG51 aptamer and the GTPswitch. The GTPswitch can bind to the Aptakiss in the presence of GTP. (b) Schematic representation of aptamers and DNA strands incorporated into the DNA frame. (c) Incorporation of the Aptakiss into the a-c site and KG51 or the GTPswitch into the b-d site in the DNA frame. When the GTPswitch is incorporated into the DNA frame, GTP should induce a configuration change from the double-loop to the X-shape.

Fig. 2 Observations of the interactions between the Aptakiss and its counterpart, either KG51 or the GTPswitch, in the DNA frame. (a) AFM images of the DNA frames with the Aptakiss and KG51. (b) AFM images of the DNA frames with the Aptakiss and the GTPswitch. Red and blue arrows indicate the double-loop and X-shape, respectively. Green rectangles represent an unidentified DNA frame. (c) Formation of the X-shape and double-loop in the DNA frame. Red and blue bars represent the percentages of double-loop and X-shape formation, respectively.

Fig. 3 Observation of the interaction between the Aptakiss and the GTPswitch in the presence of a cover strand. (a) The cover strand for the GTPswitch used to prevent interaction with the counterpart Aptakiss. (b) AFM image of the DNA frames with the Aptakiss and the GTPswitch. Red and blue arrows indicate the double-loop and X-shape, respectively. Green rectangles represent unidentified DNA frames. (c) AFM image of the DNA frames with the Aptakiss and the GTPswitch in the presence of GTP. (d) AFM image of the DNA frames with the Aptakiss and the GTPswitch in the presence of ATP. (e) Formation of the X-shape and double-loop in the DNA frame. Red and blue bars represent the percentages of the double-loop and X-shape, respectively. (f) Proportion of X-shape formation in the DNA frame at various concentrations of GTP and ATP.
Fig. 4 Observation of the interaction between the Aptakiss and the mutant GTPswitch in the DNA frame. (a) One G was deleted from the kissing loop of the GTPswitch to suppress the interaction with the counterpart Aptakiss. (b) AFM image of the DNA frames with the Aptakiss and the mutant GTPswitch. Red and blue arrows indicate the double-loop and X-shape, respectively. Green rectangles represent unidentified DNA frames. (c) AFM image of the DNA frames with the Aptakiss and the mutant GTPswitch in the presence of GTP. (d) AFM image of the DNA frames with the Aptakiss and the mutant GTPswitch in the presence of ATP. (e) Formation of the X-shape and double-loop in the DNA frame. Red and blue bars represent the percentages of double-loop and X-shape formation, respectively.

Table 1 Summary of the X-shape formation using Aptakiss and GTPswitch in the absence and presence of ligands.
Table 2 Summary of the X-shape formation using Aptakiss and GTPswitch with a cover strand in the absence and presence of ligands.
Table 3 Summary of the X-shape formation using Aptakiss and the GTPswitch mutant in the absence and presence of ligands.
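The percentages and mean ± S.D. values reported in Tables 1-3 follow from simple counting over the classified frames; the sketch below, with made-up counts, shows the assumed computation (the paper does not publish its analysis script).

```python
import statistics

def x_shape_stats(replicates):
    """Percentage of X-shaped frames per replicate, plus mean and SD.

    replicates: list of (n_x_shape, n_double_loop) tuples, one per
    independent AFM experiment (here n = 3, as in the paper).
    """
    pcts = [100.0 * x / (x + d) for x, d in replicates]
    return statistics.mean(pcts), statistics.stdev(pcts)

# Hypothetical counts for one condition (not the paper's raw data):
mean_pct, sd_pct = x_shape_stats([(85, 15), (82, 18), (88, 12)])
print(f"X-shape: {mean_pct:.1f} +/- {sd_pct:.1f} %")
```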
Lexical bundles in learner and expert academic writing

Lexical bundles (LBs) have been described as the 'building blocks of discourse'; in addition to being highly frequent in writing and reducing processing time for readers and writers, they also perform important functions in language. LB choice, however, can vary according to genre, discipline, and different sections of the same text, which poses a challenge for novice L2 writers. This paper explores the use of LBs in a learner corpus of bachelor dissertations written in English by Spanish L1 students in linguistics and medicine, and compares it with published research articles in the same disciplines. By focusing on the introduction and conclusion sections, we identify the most frequent 3-, 4- and 5-word bundles in the corpora, to later study their types, structures, and functions. The results show differences in the use of LBs across disciplines, genres and sections, suggesting pedagogical implications for the inclusion of LBs in the L2 writing curriculum.

Introduction

Over the last few decades, numerous corpus analyses have brought to the fore the fact that language is highly patterned (Hunston, 2002; Römer, 2010; Sinclair, 2005). Sequences such as additional information or is one of the main, especially common in particular registers, are 'ready to use' chunks, "stored and retrieved whole[s] from memory at the time of use" (Wray, 2002, p. 9) rather than generated item-by-item. These pre-fabricated units have been shown to facilitate production for authors and also save processing effort for readers and listeners (Nattinger & DeCarrico, 1992). Lexical bundles (henceforth LBs) were first identified by Biber and colleagues and have been defined as "the most frequently recurring sequence of words" (Biber & Barbieri, 2007, p.
264), as well as "important building blocks of discourse" (p. 270). The identification of LBs in corpus studies has been primarily based on corpus-driven approaches of frequency and range, following the pioneering lexical bundle approach developed by Biber, Conrad, and Reppen (1999). In order to qualify as a lexical bundle, a sequence needs to occur at least 20 or 40 times per million words (Biber & Barbieri, 2007; Chen & Baker, 2010; Cortes, 2004). Range of dispersion (i.e. the number of texts in which the bundle appears) is normally set at 3 or 5 texts, or 10% of the texts in the corpus (Hyland, 2008). This criterion is used to guard "against idiosyncratic uses by individual speakers or authors" (Biber & Barbieri, 2007, p. 268).

The present study aims to further the understanding of phraseology in learner writing by exploring the use of LBs in the introduction and conclusion sections of bachelor dissertations (BDs) written in English by Spanish L1 university students in linguistics and medicine. In order to compare the frequency, structure, and function of these bundles, an expert corpus of research articles (RAs) in the same disciplines is used as the reference corpus. The comparisons are made from both a quantitative point of view (applying a corpus-driven approach to identify bundles in the learner and the expert corpus) and a qualitative one (classifying the bundles structurally and functionally in both corpora). This study hopes to contribute to the body of research on phraseology in academic writing, and to serve as a useful pedagogical resource for L2 learners of English who are trying to accommodate to the conventions of these specific disciplines.

One recurrent finding is that English L2 writers' use of LBs does not always approximate the use by expert or native writers in terms of frequency, form, and function. For example, the master's and PhD candidates' writings explored in Hyland (2008) seemed to contain more impersonal clusters (i.e. avoiding stance), and more clusters in general, compared to RA writers. The author suggests that less proficient writers rely on word combinations more often than expert writers. This finding contrasts with Durrant and Mathews-Aydınlı's (2011) study, in which student essays showed a lower production of formulas compared to RAs; differences regarding functional moves were also found. The authors suggest that the lack of attention paid to different genres and disciplines in academic writing education may account for these differences. Another interesting finding in the literature in relation to our study is English L1 students' greater and more varied use of LBs, especially in structures such as unattended this, existential there, hedging, and negations, as compared to that of L2 university students, whose texts contained features characteristic of learner writing, such as anticipatory it, which, coupled with some informal lexical choices (e.g. it is easy to), pointed at register difficulties (see Ädel & Erman, 2012). In terms of functionality, L1 writers used stance more frequently than L2 writers. Interestingly, stance is one of the functions that differed the most among RA writers of the different languages (Spanish L1, English L2, and English L1) and disciplines studied in Pérez-Llantada (2014) and in Sheldon (2018): English L2 writers were found to transfer some of their L1 (Spanish) rhetorical practices into their L2 writing, which made their texts less interactional.
In order to investigate the use of LBs by Spanish L1 undergraduate learners writing in English in two different disciplines (i.e. linguistics and medicine) and sections (i.e. introduction and conclusion) in comparison with their expert-writer counterparts, three research questions were established in this study:

1. What are the most common lexical bundles in the introduction and conclusion sections of L2 learners' BDs in linguistics and medicine?
2. How are these lexical bundles used in terms of structure and function?
3. To what extent does the use of lexical bundles approximate or differ from published RAs in the same discipline?

Data collection

In order to carry out a quantitative and qualitative analysis of LBs in academic writing, two corpora were compiled: (1) a learner corpus of BDs in linguistics and medicine written in English by Spanish L1 undergraduates in their last year of studies, and (2) an expert corpus of RAs in the same disciplines published in English-medium, peer-reviewed academic journals. The introduction and the conclusion sections of each text were extracted and saved as raw .txt files for their separate analysis. Table 1 describes the number of texts, tokens, types, and paragraphs per genre, discipline, and section.

Extraction, filtering, and classification of lexical bundles

In the present study, a corpus-driven approach was adopted in order to retrieve LBs from the corpora; i.e. no previous assumptions were made with respect to the LBs' form or function, and no pre-defined list of bundles was used. The 'cluster n-gram' function in AntConc (Anthony, 2018) was used to extract LBs from the introduction and conclusion sections of the corpora. In terms of length, even though the 4-word scope is the most researched length in LB studies (Ädel & Erman, 2012), other studies suggest that many recurrent word combinations come in as 3-word bundles (Simpson-Vlach & Ellis, 2010); as a result, we decided to adopt a more inclusive approach and explore 3-, 4- and 5-word bundles in the texts. As for frequency, given the relatively small size of the corpora, the frequency cut-off was set at a minimum of 20 times per million words. In addition, a dispersion range of three texts, which represent three different writers, was set; the selection of these cut-off criteria was based on previous corpus studies (Ädel & Erman, 2012; Biber & Barbieri, 2007; Chen & Baker, 2010). It is important to note that when a bundle appears on only one of the lists, it does not mean that this specific bundle was not used at all by writers in the other subcorpora; as Ädel and Erman aptly put it, "it simply means that the frequency and dispersion criteria were not met in the other group's material" (2012, p. 85).

With regards to the grammatical structure of LBs, we initially followed Biber et al.'s (1999, pp. 1014-1024) classification, which distinguishes 12 structural categories for LBs in academic prose. After revising this and the taxonomy they provide for conversation, we present a taxonomy of 15 categories with four broad structural groups: 'noun phrase-based', 'prepositional phrase-based', 'verbal phrase-based', and 'other' bundles, following Chen and Baker (2010, p. 34), which can best integrate the LBs found in our data. The NP-based bundles include noun phrases, with or without post-modifier fragments (e.g. the risk of, the most prevalent). PP-based bundles refer to those starting with a preposition plus a noun-phrase fragment (e.g.
of this paper, in addition to).

For the functional classification, on the other hand, we followed previous taxonomies (Biber, Conrad, & Cortes, 2004; Cortes, 2004; Hyland, 2008) and classified all bundles into three main categories and their subcategories:

1) Research-oriented, also called referential in other models: LBs in this category help writers to situate, contextualize, and describe their research. There are four main subcategories: 1) location (e.g. at the beginning, at the university), 2) procedure (e.g. the use of the, the purpose of), 3) quantification (e.g. a part of, one of the most), and 4) description (e.g. the size of the, the nature of the).

2) Text-oriented, also called discourse organizers: these LBs are concerned with the structure of the text and the interrelations established between the ideas presented. There are four main subcategories: 1) transitions (e.g. on the other hand, in contrast to the), 2) resultative (e.g. as a result, due to the fact that), 3) structuring (e.g. in the next section, in this study), and 4) framing (e.g. with respect to, in the case of).

3) Participant-oriented: LBs in this category show writers' attitudes towards the ideational content and address readers directly or indirectly. It comprises two main subcategories: 1) stance (e.g. may be due to, are likely to), and 2) engagement (e.g. as can be seen, it should be noted).

This functional classification was complex not only because the categorization involves subjectivity, but also because some LBs can perform more than one function (Liu, 2012). A concordance analysis was performed in order to see the extended context of certain bundles that seemed multifunctional. For example, the basis of is a 3-word bundle that can act as a research-oriented descriptive bundle, as in (1):

(1) Findings from such a study can form the basis of learner-relevant form-focused instruction. (LIN_RA01_I)

But when this sequence is part of the 4-word bundle on the basis of, it can mark a text-oriented resultative relationship, as in (2):

(2) Other linguistic accounts differentiate the two forms on the basis of information status, particularly in terms of topic. (LIN_RA15_I)

For those cases in which the authors could not agree on the categorization, even after analyzing the extended context, previous literature that included examples of LBs and their functional categories was consulted (Cortes, 2004; Hyland, 2008; Pérez-Llantada, 2014). These structural and functional classifications allowed us to better understand the use of LBs in the corpora studied.

Results and discussion

The results of the analysis of LBs are reported as follows. First, the most frequent LBs in the introduction and conclusion sections of BDs and RAs in medicine and linguistics are explored. Convergent bundles (i.e. those bundles that appear on more than one list) are then presented. Finally, a second and more qualitative analysis of the structures and functions of the bundles is presented, exploring the similarities and differences found in the corpora.

Frequency and convergence of lexical bundles in the corpus

There are a total of 218 different bundles in the corpus as a whole (for the full list, see Appendix 1), with a total frequency of 1,151 hits, which represents around 4.5% of the tokens in the corpus. The most frequent bundle is the use of, with a raw frequency of 85 counts, which equals more than 1000 times per million words (pmw) in our corpus.
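Since the extraction pipeline described above is fully frequency-driven, its cut-off logic is easy to reproduce. The sketch below is a hypothetical re-implementation (the study actually used AntConc); corpus loading and tokenization details are assumptions.

```python
from collections import Counter, defaultdict

def extract_bundles(texts, n_sizes=(3, 4, 5), min_pmw=20, min_texts=3):
    """Frequency- and dispersion-filtered n-grams (lexical bundles).

    texts: list of token lists, one per text in the (sub)corpus.
    Returns bundles occurring at least `min_pmw` times per million words
    and appearing in at least `min_texts` different texts.
    """
    freq, spread = Counter(), defaultdict(set)
    total_tokens = sum(len(t) for t in texts)
    for doc_id, tokens in enumerate(texts):
        for n in n_sizes:
            for i in range(len(tokens) - n + 1):
                gram = tuple(tokens[i:i + n])
                freq[gram] += 1
                spread[gram].add(doc_id)
    return {
        " ".join(g): c
        for g, c in freq.items()
        if c / total_tokens * 1_000_000 >= min_pmw and len(spread[g]) >= min_texts
    }
```

As a rough sanity check of the normalization, if the corpus as a whole holds roughly 85,000 tokens (an assumption consistent with the 1,151 bundle hits amounting to about 4.5% of tokens, at three to five tokens per hit), then 85 raw hits normalize to about 1,000 pmw, matching the figure reported for the use of.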
Moreover, the use of appears in all genres and disciplines explored in this study, so it can be regarded as a core or convergent bundle, following Pérez-Llantada's (2014) nomenclature. It is noteworthy that the use of appears in the conclusion sections of the corpora 50 out of 85 times, clearly indicating a preference for the last sections of a text. RAs in linguistics (37) and in medicine (21) are the genres that contain more hits of the use of, very often paired with other nouns (questions, tools, English, other alternatives, somatic stem cells). This bundle seems to help writers to display results, as in (3), or limitations, as in (4).

(3) Trends for the social science fields indicate a reduction in the use of these informal features. (LIN_RA04_C)
(4) Another limitation was the use of asymptomatic microembolic signals as a surrogate marker. (MED_RA02_C)

The second most frequent bundle in the corpus is in order to, with a raw frequency of 62 counts, i.e. about 750 pmw. In contrast to the use of, this bundle appeared more often in the introduction sections of the texts; in particular, 39 out of 62 times. Taking into account the total number of words in each corpus, BDs in linguistics show a predominant use of this bundle (22 raw hits), followed by RAs in linguistics (24), BDs in medicine (12), and medical RAs (6). Different procedure verbs such as address, determine, provide, show, solve, facilitate, and gain are used after this bundle. In order to can help writers to emphasize the study's main objective or justification, as in (5) and (6) respectively.

(5) This study aims to analyse comprehension and production of false friends in students of English in a C1 level classroom in order to explore the influence of their mother tongue (L1) on a second language (L2). (LIN_BD10_I)

The third most frequent bundle is yet another core bundle present in all subcorpora: as well as (43 hits). As well as appears more frequently in the introduction sections (24 times), and rather than just adding new information, this bundle helps writers to focalize and frame the ideas presented, as in (7) and (8):

(7) FN is a dimeric glycoprotein that is found in plasma as well as in the extracellular matrix (ECM) of various tissues. (MED_RA03_I)
(8) Conclusions will be drawn to justify the analyzed usages of discursive strategies as well as the historical and social consequences that can derive from them. (LIN_BD02_I)

The use of, in order to, and as well as are also included on lists of 'formulas worth teaching' (ranking 29, 4 and 5 respectively), which underlines their pedagogic relevance. In terms of length, 3-word bundles were the most frequent in the corpus (85.7% of the total bundles), while 4- and, especially, 5-word bundles were scarcely used (10.2% and 3.9% respectively). This finding was similarly reported in previous studies, such as Biber et al.'s (1999, p. 994), who found that 3-word bundles were much more frequent in academic prose (over 60,000 times pmw) than 4-word bundles (which occur over 5,000 pmw). If we look at each subcorpus separately, we will find some interesting patterns.
As can be seen in Table 2, BDs in medicine and linguistics have produced almost the same quantity of LBs in the introduction and conclusion sections (conclusions were a bit shorter in this genre compared to the introduction, which partially explains why they contain half the amount of LBs as introductions); this seems to point at a shared quantitative feature in the use of LBs between texts of two different disciplines that belong to the same genre. Relative to the total number of tokens in the introduction and conclusion sections, articles in linguistics contain almost three times more LBs than medical articles.

Table 2. Raw counts of 3-, 4- and 5-word bundles per subcorpus: 24/5/4 and 30/2/1 in the two BD subcorpora, 125/17/5 in the linguistics RAs, and 38/2/0 in the medicine RAs (all values are raw counts).

This finding has been supported by previous literature on LBs in academic writing across disciplines (Hyland, 2008; Liu, 2012) and points towards a disciplinary difference: research suggests that soft-knowledge disciplines very often emphasize interpretative language in order to present persuasive arguments, compared to hard-knowledge disciplines, which tend to be more impersonal in their methods and discussions. The linguistic items that allow writers to achieve this objective are, more often than not, part of recurrent word combinations (e.g. it is important to, has the potential to, it can be argued that, are likely to, seems to be, it should be, needs to be), which can explain the prominent LB occurrences in linguistics RAs.

Hyland (2008) reported that less mature writers had used LBs more often. This finding contrasts with our results, but only for one of the two disciplines: BDs in medicine do contain more LBs than RAs in the same discipline (3.3 vs. 1.6 bundles on average per text); particular characteristics of the BD genre with regard to its audience (for example, that of being an academic final assignment in which students need to show and convince their supervisors, as a superior entity, that they have acquired certain knowledge) contrast with published RAs, in which authors present information to peers of more or less the same expertise, and could account for this quantitative difference.

Adopting another perspective, the comparison of all LB lists has yielded an inventory of 35 shared bundles. Some of these bundles are shared between the introduction and conclusion sections of the same subcorpus, but some are also shared between genres (BDs, RAs) and disciplines (linguistics, medicine), and some of them appear on all lists. Such an inventory "might indicate that the writers have memorized these language sequences and routinized them in their writing practices". Table 3 shows the convergent bundles in the corpora.

If we look at specific bundles, as previously mentioned, the use of (85 hits), in order to (62), and as well as (43) are core bundles shared across all corpora in our study. Hyland (2008, p. 12) found a total of 5 core bundles across four disciplines (on the other hand, as well as the, in the case of, at the same time, and the results of the), which is somewhat similar to our results. In terms of bundles that appear in both the introduction and conclusion sections of BDs and RAs, there are a total of 23 different bundles, 19 of which appear in the introduction and conclusion sections of RAs in linguistics; these items can be a useful resource for L2 writers of academic English. Convergent bundles not only vary in their grammatical structure but also in the discourse functions they perform, as we will see in the next section.
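The convergence analysis itself reduces to set intersections over the per-subcorpus bundle lists; a minimal sketch follows, with illustrative toy lists rather than the study's actual data.

```python
# Hypothetical per-subcorpus bundle lists (sets of bundle strings); in the
# study there would be eight: {BD, RA} x {linguistics, medicine} x {intro, concl}.
lists = {
    "BD_lin_intro": {"in order to", "the use of", "as well as", "this paper will"},
    "BD_med_intro": {"in order to", "the use of", "as well as", "the risk of"},
    "RA_lin_concl": {"in order to", "the use of", "as well as", "on the other hand"},
    "RA_med_concl": {"in order to", "the use of", "as well as", "is associated with"},
}

# Core bundles: present on every list (cf. the use of, in order to, as well as).
core = set.intersection(*lists.values())

# Convergent bundles: present on more than one list.
convergent = {b for b in set.union(*lists.values())
              if sum(b in s for s in lists.values()) > 1}

print(sorted(core))        # shared by all subcorpora
print(sorted(convergent))  # shared by at least two lists
```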
Structures and functions of lexical bundles in the corpus

Table 4 below shows the frequency of LBs per structure across genres and disciplines, taking the four broad groups and the 15 structural categories into consideration, and provides one illustrative example for each category. An important caveat for understanding the discussion of the findings that follows is that the frequencies given refer to the types of bundles used and not to the number of times each bundle type was used (raw frequency). As can be seen, there is a clear prevalence of NP-based bundles over the rest of the structural categories in all corpora. This prevalence is especially evident in the expert corpus, in both linguistics and medicine (both with a total frequency of more than 40%), over the second most common group of structures, the VP-based bundles. The PP-based categories rank in third position in all four subcorpora. It is worth looking at specific rather than general structural categories to obtain a more realistic and clarifying picture of the findings.

Of all 15 categories, the most common structure overall is the noun phrase with of-phrase, representing in all cases more than 30% of all categories, with the highest frequency in the medicine RAs (35%). In particular, we found a total of 78 bundles with this structure, with a raw frequency of 375; that is, LBs belonging to this category account for 32% of the total frequency of LBs in the corpus as a whole. Earlier corpus descriptions indicate that as much as 70% of the most common bundles usually consist of a noun phrase with an of-phrase fragment. The prevalence of this structure has also been found in previous studies on LBs (Chen & Baker, 2010; Hyland, 2008; Liu, 2012). As could be expected given its high raw frequency, the use of is the most frequent bundle in this category (62 hits), with a higher presence in medicine RAs (21 hits). Other common examples are one of the (13 hits), the analysis of (the) (11 hits), and the risk of (11 hits); examples (9), (10), (11) and (12) illustrate these uses.

The second most common structure is the other prepositional phrase, that is, bundles introduced by a preposition, excluding those with an embedded of-phrase; common LBs in this category are of this paper, according to, in this study, and of the most. We noted above that LBs tend to be incomplete structural units; when they can be used as potentially complete units, these tend to act as discourse signaling devices (Biber et al., 1999, p. 999).

We have already mentioned particular examples of bundles which are especially recurrent in our corpus. One instance is in order to, which we consider a to-clause fragment (rather than a prepositional-phrase pattern; cf. Pérez-Llantada, 2014, for instance), and which partly explains the relatively high frequency of the (verb/adjective +) to-clause structural pattern in all subcorpora. In addition, our data show two further common structures of bundles in specific subcorpora. One of them is the passive verb (+ prepositional phrase), with a higher use in the medicine RAs, exemplified by bundles such as is associated with, have been proposed, and can be used to, which, interestingly, are all found in the conclusion sections of these texts. The impersonal nature of the passive construction seems to fit well with the medicine discipline, in which writers allegedly attempt to hide authorial interpretation more than their linguistics counterparts. This finding supports disciplinary differences in structural categories reported in Hyland (2008, p. 11).
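The broad structural grouping described above can be approximated mechanically. The toy classifier below is our illustration of the decision logic only (the study's classification was manual), and the word lists are deliberately minimal assumptions; bundles treated as exceptions in the text, such as in order to being a to-clause fragment, would need to be handled separately.

```python
PREPOSITIONS = {"in", "on", "at", "of", "to", "with", "for", "from", "as", "by"}
VERB_STARTS = {"is", "are", "be", "can", "may", "have", "has", "it", "there"}

def broad_structure(bundle):
    """Rough NP/PP/VP/other grouping of a lexical bundle (toy heuristic)."""
    words = bundle.lower().split()
    if words[0] in PREPOSITIONS:
        return "PP-based"              # e.g. "of this paper", "on the other hand"
    if words[0] in VERB_STARTS:
        return "VP-based"              # e.g. "is associated with", "it is important to"
    if "of" in words:
        return "NP-based (of-phrase)"  # e.g. "the use of", "the risk of"
    return "NP-based / other"          # e.g. "the most prevalent"

for b in ["the use of", "on the other hand", "is associated with", "the most prevalent"]:
    print(b, "->", broad_structure(b))
```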
The other structural category that shows a higher frequency than in the other corresponding subcorpora is the noun phrase + verb phrase in BDs in linguistics. Examples of these bundles are paper aims to, this paper will focus on, and this study has. We may hypothesize that this higher use is due to the emphasis placed on these non-agent text subjects in the teaching of academic discourse to university students.

A general tendency emerging from the figures represented in Table 4 is the wide structural range of bundles in the linguistics RAs, illustrated by bundles such as has the potential to and play an important role in. Compared with this wide range of bundles, BDs in linguistics exhibit a less illustrative choice, with seven structural categories not represented, which can be explained by the less proficient writing skills of these authors. In the medicine corpora overall, however, the choice of bundles is definitely less varied. Curiously enough, medicine RAs show a much lesser degree of variation and representativeness in the use of LB structures, even though they belong to the same genre as their linguistics counterparts. It is difficult to say why this might be, but disciplinary variation and the topic of the linguistics articles itself (language) could account for the discrepancies found.

The analysis of LBs according to discourse function has also revealed interesting insights. Table 5 provides an overview of the LB functions across genres and disciplines. As can be seen, bundles with text-oriented functions are prevalent over the other two types in general. The second most common type of bundles are those performing research-oriented functions. The comparison between these two functional categories, however, provides an interesting disciplinary distinction: whereas in linguistics there is a significant difference in frequency between the text-oriented and research-oriented functions in both learners and experts, and a particularly high use of text-oriented bundles (over 50%) in BDs, in medicine, on the other hand, the figures are closer between these two functions, and in medicine BDs they are exactly the same. This is (partly) in line with Hyland (2008, p. 14), who found a greater use of bundles with a referential function in the hard sciences compared with the soft-knowledge fields (i.e. linguistics), providing the former with "a greater real-world, laboratory-focused sense to writing", and thus emphasizing the empirical over the interpretative, as seen above. The more evident prevalence of text-oriented bundles in linguistics would also agree with this picture.

The low presence of participant-oriented bundles, especially in the learner corpus, echoes findings in other studies that have noted an avoidance of stance bundles in learners in comparison with English L1 authors (see Hyland, 2008, p. 19; Pérez-Llantada, 2014, p. 91; Sheldon, 2018, p. 34). Pérez-Llantada (2014) notes that Spanish-speaking learner writers in English avoid personal markers to a greater extent than the corresponding expert writers of academic discourse. Our results also point to a lack of confidence on the part of the linguistics learners to express their stance and subjectivity. To turn now to a more detailed analysis, Table 6 below presents the figures of bundle types for the specific discourse functions included in each of the broad functional categories just mentioned. As with the discussion of the structure of bundles, a first thing to note is the greater and richer variety of functions in the linguistics RAs, with all ten categories represented in the table, in comparison with the other three subcorpora.
Concentrating on the most important functional category, that of text-oriented bundles, we see a clear preference for the structuring type in linguistics, and especially in linguistics BDs. Although the expert writers in medicine also exhibit an important use of this category, their learner counterparts, by contrast, make no use at all of these bundles, clearly preferring bundles with a resultative/inferential function instead, as will be discussed below. Structuring bundles, having an identifying and focusing meaning, allow writers to draw the reader's attention to a particular idea in the text, and to intensify the force of their arguments. Linguistics experts have used structuring bundles in their conclusions more often. These bundles frequently take NP-based structures, as in (13) and (14), or VP-based structures (aim of this paper is, this paper will focus on, there is a, and that they are), as in (15). The word aim, as noun or verb, is a recurrent one in bundles with this function.

(13) The aim of the present paper is to study the preference for the use of one-word verbs to multi-word verbs. (LIN_BD09_I)

As just mentioned, resultative bundles are fairly common (21.2%) in medical BDs, by comparison with the other three subcorpora (with less than half this frequency), and, by contrast, no instance of the structuring function was found. Interestingly, these writers have placed almost all their resultative bundles in the conclusion sections, as illustrated in (16) and (17). Other common bundles with this function are the conclusion that, as a result of, and due to the fact that.

(16) (…) call for the involvement of mental health professionals in the Emergency Room in order to offer a more complete evaluation of patients once medically stabilized.

The emphasis on some arguments with respect to others may have a genre-specific explanation; academic writing instruction may emphasize this writing strategy over others.

In research-oriented bundles, the second most important functional category, an interesting tendency arises: whereas the medicine data overall favor bundles contributing to the description of research objects, especially in RAs, linguistics favors the procedural bundles. This is not entirely surprising, considering the nature and object of study of each of these academic texts. Thus, whereas in medicine the description of the 'real-world' problem (medical conditions, clinical studies, etc.) is of great importance to the studies, in linguistics texts it is important to show the procedures of the research methods and demonstrate a certain ability in explaining how the research has been conducted. Both functions, i.e. description and procedure, are overwhelmingly often expressed by an NP-based bundle and very frequently by the noun phrase with of-phrase. Common bundles of description from the medicine texts are the prevalence of, the presence of, the risk of, and, from the VP-based pattern, it is a/the. To express procedure, the most commonly used bundle is, by far, the use of. Other common bundles expressing procedure are (the) analysis of (the), the role of, the ways that, and, from the VP-based group of bundles, can be used to. Description and procedure bundles are exemplified in (18) and (19).

The final category, participant-oriented bundles, mostly covers stance markers expressing opinion rather than facts, which may indicate degree of probability and epistemic meaning, on the one hand, or be part of the so-called 'other stance markers' (see Cortes, 2004, p.
The former type, the most common one, tends to be expressed by a recurrent set of structural categories. It should be borne in mind, however, that stance can also be expressed in other ways than 3-, 4- and 5-word bundles, and that our study refers only to stance expressed in these sequences. Interestingly, stance is more common in the conclusion sections of the BD genre, whereas RAs contain more bundles of this type in their introduction sections: persuading readers from the very beginning through evidential and epistemic bundles seems to characterize more confident writing. Finally, engagement is almost non-existent in our corpus, with only one bundle, namely our understanding of, used in the conclusion section of RAs in linguistics.
Conclusion
This paper has analyzed the use of LBs in the introduction and conclusion sections of learner and expert academic writing in linguistics and medicine. The quantitative and qualitative analysis performed in order to explore the frequency, structures and functions of LBs has yielded interesting results: LBs are very useful devices for the construction of discourse, but they behave in dissimilar ways in different disciplines and genres. Regarding frequency, of the 218 bundles retrieved, 3-word bundles were more frequent in all subcorpora; of these, the use of, in order to, and as well as stand out as the most popular LBs. BDs in linguistics and medicine produced a similar quantity of LBs in both sections, whereas RAs vastly differ in their frequency of use of LBs, which points towards a disciplinary difference. When comparing the learner and the expert corpus, on average, BDs in medicine contained more LBs than RAs in the same discipline, and the opposite tendency was found for linguistics BDs, which contained fewer LBs than their expert counterparts. In addition, a list of 35 convergent bundles was found, which can be a pedagogically useful resource for general academic writing. This quantitative analysis was complemented by qualitative analyses of structure and function which, after manual classification and revision of concordance lines, provided a more comprehensive picture of LB usage. In terms of structure, both learner and expert writers favored NP-based bundles; the structure noun phrase with of-phrase was by far the most frequent one in all corpora. BDs and RAs also agreed on the second most common LB structure: other prepositional phrase, which allowed writers to include frequent discourse signaling devices in their texts. The main difference, however, lies in the greater structural variation of the LBs used by experts in linguistics; LBs in medical RAs, and in the learner genre, were definitely less varied. Finally, with regard to function, LBs performing text-oriented functions were the most prevalent in all subcorpora. The second group, LBs with research-oriented functions, was more popular among medicine expert writers, who seem to emphasize the empirical over the interpretative. The last function, participant-oriented, was the least represented one; this low frequency is especially marked in BDs in linguistics, which points towards a case of underuse. Additionally, while learners placed stance markers mostly in the last section of their texts, expert writers showed a preference for the use of stance in their introduction sections.
Placement of LBs in particular sections of a text is yet another important feature that depicts writers' academic literacy. On the other hand, the lack of structuring bundles in medical BDs and their recurrent use of resultative bundles also call for explicit pedagogical attention. Disciplinary differences were also found regarding the prevalence of descriptive bundles in medicine, and of procedural bundles in linguistics; disciplinary conventions and the object of study of each of these texts could account for the discrepancies found.
The present study has some limitations worthy of mention. The first one is a methodological limitation: in order to extract sequences of words automatically, our retrieval method only included LBs that were fixed in nature; that is, our lists do not include variable bundles or bundles with open slots (e.g. in section (…), up to (…) %, to a (…) extent). This method therefore does not capture LBs in their entirety. Including this type of permutation (e.g. using the ConcGram function in WordSmith Tools) could have helped to show a more comprehensive picture of LBs in academic writing (see O'Donnell et al., 2012). Another methodological limitation has to do with the fact that the learner corpus had not been error-tagged, which could have somewhat affected the number of LBs extracted (i.e. if there were typos in particular words that were part of LBs, the software did not retrieve them). All texts included in the learner corpus, however, were successful BDs evaluated by their supervisors and the evaluating committee, so they are unlikely to contain numerous typos. Using a larger learner corpus would also have made the findings more representative. In addition, our analysis has looked at the use of LBs in the introduction and conclusion sections of academic texts, as these sections tend to be the most conventional ones in these particular genres. Analyzing LB positions, not only with regard to sections but also with regard to paragraphs or sentences, would be interesting (see Römer, 2010). Finally, when comparing our findings across previous studies that utilized corpora of different lengths and breadths, it was
Clinical Outcomes of Breast-Conserving Surgery with Synchronous 50-kV X-ray Intraoperative Partial Breast Irradiation in Patients Aged 64 Years or Older with Low-Risk Breast Cancer
Background: Breast-conserving surgery with synchronous 50-kV X-ray intraoperative radiation therapy (TARGIT-IORT) is a convenient form of partial breast irradiation; however, the existing literature supports a wide range of local control rates. Objectives: We investigated the treatment effectiveness and toxic effects of TARGIT-IORT in a patient cohort aged 64 years or older with low-risk breast cancer. Design: Retrospective analysis. Methods: Patients who received breast-conserving surgery with synchronous TARGIT-IORT at a single institution from 2016 to 2019 were reviewed. Additional whole breast irradiation was recommended at the discretion of the treating radiation oncologist. Baseline patient demographics and treatment details were recorded. Acute and chronic toxicities, measured using the Common Terminology Criteria for Adverse Events version 3.0 or 4.0, and breast cosmetic outcomes, assessed using the Harvard Cosmesis score, were recorded. Locoregional recurrence, distant metastasis, and overall survival were recorded, and 5-year rates were estimated using the Kaplan-Meier method. Results: 61 patients were included with a median follow-up of 3.5 years and median age of 72 years. Eight (13%) patients received additional whole breast irradiation, and fifty-four (89%) received adjuvant hormone therapy. There were no local, regional, or distant recurrences. One patient died of complications from COVID-19 infection. Grade 2+ acute and chronic toxicities were observed in 6 (12%) and 7 (14%) patients, respectively. One patient experienced a grade 3 acute toxicity. Cosmetic outcome was “excellent” or “good” in 45 (92%) patients. Conclusions: Breast TARGIT-IORT was well tolerated and conferred excellent disease control in this cohort of patients with low-risk breast cancer. While continued follow-up is required, TARGIT-IORT may be an appropriate treatment option for this population.
Introduction
In appropriately selected patients undergoing breast-conserving surgery, accelerated partial breast irradiation (APBI) constitutes a standard adjuvant treatment option.3,4 APBI has the possible benefit of reducing the radiation dose delivered to critical organs at risk, including the heart, lung, and total breast volume. Intraoperative radiation therapy (IORT) is a form of APBI in which the surgical cavity is focally targeted, typically as a single fraction at the time of breast-conserving surgery while the patient is under general anesthesia.7 TARGIT-A was a multicenter prospective, randomized trial that compared low energy X-ray IORT delivered with an Intrabeam device (TARGIT-IORT) to adjuvant whole breast irradiation.8
The study included 2298 patients aged 45 years or older with invasive ductal carcinoma measuring 3.5 cm or less. Depending on the center, randomization occurred either before the initial surgery (prepathology) or after the initial surgery (postpathology). In the former case, patients randomized to TARGIT-IORT received the single fraction during the same operative procedure as the breast-conserving surgery. Patients with prespecified adverse tumor features on final pathology underwent additional whole breast irradiation with standard techniques. Patients in the postpathology stratum had a second procedure to deliver TARGIT-IORT once final pathology from the breast-conserving surgery was available. For all patients, the 5-year risk for local recurrence was 1.3% in the whole breast irradiation arm versus 3.3% in the TARGIT-IORT arm, which met the prespecified criteria for noninferiority. When limiting the analysis to patients in the prepathology stratum, the 5-year risk for local recurrence was 1.1% with whole breast irradiation versus 2.1% with TARGIT-IORT. The updated results from TARGIT-A supported its initial findings.9
Although adjuvant radiation treatment may be de-escalated with APBI in appropriate patients, it may also be omitted altogether in older patients with small, hormone receptor positive tumors who undergo adjuvant endocrine therapy (ET).10,11 The CALGB 9343 and PRIME II prospective randomized clinical trials demonstrated that omission of radiation therapy after breast-conserving surgery led to worse local control but no difference in overall survival in patients older than 70 years and 65 years, respectively, with low-risk breast cancer. Some patients, however, remain motivated to receive adjuvant therapy as a means of avoiding recurrent disease, which may require salvage surgery. One retrospective study demonstrated that more than 74% of patients aged 65 years or older chose IORT when presented as an option for adjuvant treatment after breast-conserving surgery, demonstrating the convenience of the option.12 At our institution, we have typically offered TARGIT-IORT as monotherapy for patients aged 64 years or older who are suitable for APBI and who generally meet criteria for omission of adjuvant radiation therapy. In this study, we report mature effectiveness and toxicity outcomes for these patients.
Patients and treatment technique
This was an institutional-review-board approved retrospective review of all patients undergoing breast TARGIT-IORT at a single institution from September 2016 to December 2019. TARGIT-IORT was offered to patients based on a multidisciplinary discussion between the breast surgeon and radiation oncologist. The technique of IORT used was modeled off that described by Vaidya et al.13,14
All patients underwent TARGIT-IORT during the same operation as their breast conservation surgery. Applicator size was decided and agreed on by the breast surgeon and radiation oncologist and ranged from 1.5 to 5 cm. Following breast-conserving surgery, the skin flaps were everted and wet gauze was used to keep the skin surface away. A tungsten shield was used to prevent backscatter. After assessing for adequate skin spacing, a dose of 20 Gy prescribed to the surface was delivered in a single fraction using a 50-kVp X-ray Intrabeam device. Additional whole breast with or without regional nodal irradiation was recommended at the discretion of the treating radiation oncologist based on the presence of unexpected adverse features on final pathology, with the TARGIT-IORT treatment acting as the surgical cavity boost.
Follow-up and evaluation of outcomes
Patients had follow-up visits every 3 to 6 months for the first year after treatment and annually in subsequent years. Annual mammography was performed after treatment, with additional breast ultrasonography and magnetic resonance imaging (MRI) at the discretion of the treating breast surgeon and radiation oncologist. Local, regional, and distant recurrence events were recorded. Toxic effects were documented by the treating radiation oncologist at each follow-up visit using the Common Terminology Criteria for Adverse Events (CTCAE) version 3.0 until February 2018, at which time CTCAE version 4.0 was used. Cosmetic outcome was also assessed at each radiation oncology follow-up visit, using a 4-point scale of Excellent, Good, Fair, or Poor per the Harvard Cosmesis score.15
Statistical analyses
Overall survival was estimated with the Kaplan-Meier method. All statistical analyses were performed using GraphPad Prism version 9.3.1 (GraphPad Software, San Diego, CA, USA).
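As an aside for readers working outside GraphPad Prism, a minimal sketch of a Kaplan-Meier overall-survival estimate in Python might look as follows; the lifelines package is one common open-source option, and the toy data and column names below are illustrative assumptions rather than the study's dataset.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Illustrative toy data (not the study's dataset): follow-up time in
# years and an event flag (1 = death from any cause, 0 = censored).
df = pd.DataFrame({
    "followup_years": [3.5, 2.3, 4.2, 1.5, 3.0, 4.0],
    "death":          [0,   0,   0,   1,   0,   0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["followup_years"],
        event_observed=df["death"],
        label="Overall survival")

print(kmf.survival_function_)   # step-wise survival estimates over time
print(kmf.predict(3.0))         # estimated survival probability at 3 years
```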
Results
A total of 61 patients were included in this analysis with a median follow-up of 3.5 years (interquartile range 2.3-4.2 years). Baseline patient characteristics are outlined in Table 1. The median age was 72 years. Fifty-eight (95%) patients had invasive ductal carcinoma, 2 (3%) patients had pure DCIS, and 1 (2%) patient had invasive lobular carcinoma. Of patients with invasive disease, 6 (10%) had AJCC pathologic stage IIA disease, while the remaining patients had stage IA disease. Sixty (98%) patients were estrogen receptor positive, and 3 (5%) were HER-2 positive. Fifty-six (92%) patients underwent sentinel lymph node sampling, and 2 (3%) patients were found to have lymph node positive disease. Seven (11%) patients had positive surgical margins, and each of these patients underwent re-excision with subsequent negative margins. Eight (13%) patients underwent additional whole breast irradiation after IORT, with 1 (2%) patient also undergoing regional nodal irradiation due to a positive sentinel lymph node. Three (5%) patients underwent adjuvant chemotherapy. Fifty-four (89%) patients initiated ET with a median duration of treatment of 2.5 years. Fifteen (28%) patients who had initiated ET discontinued it prior to completing the recommended treatment duration due to side effects. Fifty-three (90%) patients with invasive disease met inclusion criteria for either the CALGB 9343 or PRIME II clinical trials, and therefore would have been candidates for omission of adjuvant radiation therapy. Thirty-seven (61%) patients were "suitable" candidates for APBI per the ASTRO consensus update. There were no local, regional, or distant recurrences. No patients died of breast cancer, and 1 patient died 18 months after IORT due to complications of COVID-19 infection. The Kaplan-Meier estimate of survival is shown in Figure 1. The breakdown of acute and chronic treatment toxicity is shown in Tables 2 and 3. Acute toxicity was defined as occurring during radiation treatment or at the first follow-up visit 3 months after completing radiation treatment. Chronic toxicity was defined as occurring more than 3 months out from treatment completion. Forty-nine (80%) patients had at least 1 follow-up visit with a radiation oncologist with toxicity recorded.
Six (12%) patients experienced at least 1 grade 2+ acute toxicity, and 1 (2%) patient experienced a grade 3 acute toxicity, which was grade 3 breast pain reported at the first post-IORT follow-up visit. The breast pain had completely resolved in this patient at subsequent follow-up. Three (6%) patients had grade 2 breast edema, 2 (4%) patients had grade 2 breast hyperpigmentation, and 1 (2%) patient had grade 2 radiation dermatitis. There were no cases of acute grade 2+ fatigue or breast hypopigmentation. Seven (14%) patients experienced at least 1 grade 2 chronic toxicity, and no patient experienced any grade 3+ chronic toxicities. Five (10%) patients had grade 2 breast volume reduction, 2 (4%) patients had grade 2 fibrosis, 1 (2%) patient had grade 2 nipple deformity, and 1 (2%) patient had grade 2 telangiectasia. There was no grade 2+ fat necrosis and no accounts of arm lymphedema, myositis, rib fracture, or pneumonitis. Cosmetic outcomes were available for 49 patients and rated as "excellent" or "good" in 45 (92%) patients and "fair" in 4 (8%) patients. There were no patients with a "poor" cosmetic outcome. Toxicity data were available for 6 of the 8 patients who received additional whole breast irradiation after IORT, and none of these patients experienced any grade 2+ acute or chronic toxicities. Cosmetic outcome in this subset was "excellent" in 3 (50%) patients and "good" in 3 (50%) patients.
Discussion
In our cohort of patients aged 64 years or older with low-risk disease, we found that breast-conserving surgery with synchronous TARGIT-IORT resulted in low rates of acute and chronic toxicity, in line with the prior published data.16,17 We also observed no recurrences. Multiple studies have assessed the effectiveness of breast TARGIT-IORT with low energy photons. Although TARGIT-A had shown noninferior 5-year local recurrence and noninferior 10-year local recurrence free survival, the 5-year rate of local recurrence was numerically higher with IORT, and the median follow-up for determining risk of local recurrence was only 5 years. Other institutions have published retrospective breast IORT outcomes, including the Cleveland Clinic, which reported a 2% local recurrence rate in a cohort of 201 patients at median 1.9-year follow-up.18 Rabin Medical Center in Israel reported no local recurrences in 158 patients at a mean of 2.5-year follow-up.19 Chowdhry et al20 retrospectively reviewed 110 patients with median follow-up of 2.5 years and found a 5-year risk of local failure of 3.7%. Falco et al21 found a 1% local failure rate at median 74-month follow-up in 199 patients above the age of 60 years; however, 48.7% of these patients received additional whole breast irradiation.
TARGIT-R was a multi-institutional retrospective registry intended to provide "real-world" clinical practice outcomes with TARGIT-IORT performed in North America.22 This study showed an elevated 5-year rate of ipsilateral breast tumor recurrence (IBTR) of 8% in patients receiving primary IORT without additional whole breast irradiation. Published prospective and retrospective studies on low energy X-ray IORT are summarized in Table 4 (abbreviations: IBTR, ipsilateral breast tumor recurrence; IORT, intraoperative radiation therapy; N/R, not reported; WBRT, whole breast radiation therapy).
It is hypothesized that the difference in local recurrence rate with breast IORT on TARGIT-R compared with other published data may be at least partially explained by differences in patient populations and tumor aggressiveness. However, the patients included in the primary IORT arm of TARGIT-R appeared to have disease characteristics similar to our patient population. Their patient cohort had a median age of 68 years, a median tumor size of 1 cm, and 94% of their patients were estrogen receptor positive. Interestingly, they observed higher local recurrence rates in older patients, with patients aged 71 to 80 years having an 11% IBTR rate and patients aged >80 years having a 17% IBTR rate. TARGIT-R observed ET compliance to be an independent predictor of IBTR, with noncompliant patients having a 3.67-fold increased risk of IBTR. Our patient cohort had ET compliance similar to TARGIT-R, with 36% of patients either never initiating ET or discontinuing it early; however, no recurrences were observed at an albeit shorter median follow-up of 3.5 years. The elevated IBTR rate observed on TARGIT-R in patients older than 70 years with seemingly low-risk tumors does not appear to be fully explained by ET compliance. The question of TARGIT-IORT efficacy in elderly patients with low-risk tumors was further assessed in the TARGIT-E single-arm prospective multicenter study, including 474 patients aged ⩾70 with T1, node-negative disease.23 With median follow-up of 3.25 years, the actuarial local relapse-free survival after 5 years was 98.5%, although ET compliance was not reported. This low rate of local recurrence is in concordance with our study.
Other possible explanations for the higher recurrence rate with breast IORT in TARGIT-R include operator experience and the ability to achieve tight conformality between the applicator and the surgical cavity, which is necessary for appropriate dose distribution. Without the aid of intraoperative imaging, operator experience becomes critical in ensuring optimal positioning of the applicator. The lack of image acquisition also precludes the generation of dose-volume histograms for most organs at risk. The skin may be everted and retracted away from the applicator, and the distance between the two can be approximated with a ruler or ultrasonography and used to ensure dose to the skin is within tolerance. Dose to the underlying lung and cardiac tissues, however, is left unmeasured. It is assumed to be very low, considering the steep attenuation of low energy X-rays across the thickness of the chest wall.24,25 Another potential concern of low energy X-ray IORT is the steep dose fall-off. With a spherical applicator and a dose of 20 Gy prescribed to the surgical cavity surface, the dose falls to 6-7 Gy 1 cm from the applicator surface, depending on the applicator size.26 The premise of TARGIT-IORT rests on the idea that microscopic tumor foci may never progress to clinically significant cancers.28,29 It is important to note that most of the patients included in our study would have been candidates for other convenient forms of adjuvant APBI as well. A prospective, randomized clinical trial out of the University of Florence looked at 520 patients above the age of 40 years with invasive ductal carcinoma or DCIS measuring less than 2.5 cm.4
Patients were randomized to external beam APBI of 30 Gy in 5 fractions versus whole breast irradiation of 50 Gy in 25 fractions plus a 10 Gy boost to the surgical cavity. Accelerated partial breast irradiation resulted in a 10-year risk of IBTR of 3.7% versus 2.5% with whole breast irradiation, a difference that was not statistically significant. This APBI regimen allows for image guidance and the resulting target and organ-at-risk dose-volume analysis, but it does require additional clinic visits for treatments compared with IORT.
Limitations of our study include a relatively small sample size of 61 patients with limited follow-up of 3.5 years. Multiple prospective, randomized studies of breast IORT showed that patients remain at risk for IBTR beyond this time frame. Our clinical outcomes at 3.5-year follow-up are encouraging, though, and in line with the TARGIT-A outcomes.9
Conclusions
Although local recurrence rates with breast IORT vary by patient population and possibly other technical factors, our data show that with an experienced multidisciplinary treatment team, breast IORT is well tolerated and results in a very low risk of recurrence in patients aged 64 years or older with low-risk disease. It is a convenient treatment option in this patient population, particularly for those who live great distances from a radiation treatment center.
Table 4. Summary of breast IORT studies.
Vulnerability and Volition in the Testamentary Law of Undue Influence and Captation
This article examines how contemporary analyses of vulnerability theory are reflected in legal approaches to undue influence and captation in the Canadian common law of wills and estates and in the Civil Code of Québec in the law of succession. Critical theorists point to the risks of assuming that vulnerability lies exclusively with the elderly and persons with disabilities. The equation risks oversimplifying matters, which could compromise the equality and dignity of members of these groups. There is also a risk of overlooking the harm that may be suffered by those who are victims of social or economic oppression. A more nuanced approach posits that vulnerability is a common human trait that cuts across social identities and experiences. Due to prevailing assumptions about vulnerability, this article hypothesizes that challenges to wills based on undue influence and captation will most often occur when the testator is elderly and/or has a disability at the time of execution of the will. Canadian common law and Quebec civil law jurisprudence are examined to assess this hypothesis. This analysis reveals that certain conditions do trigger heightened judicial scrutiny of wills, but that they do not in and of themselves determine legal outcomes. The case law thus suggests a moderate—but tempered—risk that courts will draw presumptions about age and capacity when assessing the presence of undue influence or captation. Perhaps more significant is the absence of challenges to wills involving young and healthy testators. Jurists might therefore wonder whether we are at risk of overlooking some cases of untoward conduct due to the conceptual associations we make between age, incapacity and vulnerability.
INTRODUCTION
Testamentary freedom has long served as the conceptual cornerstone of the law of wills and successions within Western legal traditions; it has been characterized as an estates court's "duty" and "the golden rule, the fundamental principle" of estates law.1 The primacy of testamentary freedom, although restricted by certain modern successions law doctrines,2 persists in contemporary law. While central to the common law tradition,3 it exists also in Quebec civil law, with the Quebec Act of 1774 having preserved in that jurisdiction full freedom of testation.4
1. See, respectively, In re Tyhurst Estate, Deceased, [1932] SCR 713 at 716, 4 DLR 173, and In re Estate of Brown Estate (deceased), [1934] SCR 324 at 330, 2 DLR 588. Beyond relying on testamentary intent as a beacon for the construal of wills, a broad authority is ascribed to testamentary freedom across the law of successions. See Sheena Grattan & Heather Conway, "Testamentary Conditions in Restraint of Religion in the Twenty-First Century: An Anglo-Canadian Perspective" (2005)
While various legal doctrines might limit or counter this principle of testamentary freedom, one of these is undertheorized in the context of wills and successions law. Undue influence in the common law, comparable to captation (sometimes referred to, in its English translation, as "fraudulent capture") in the civil law, refers to situations in which the person who makes a will-the testator-acts according to the wishes, and pursuant to the coercive pressure, of another. In such instances, because the operative force driving the juridical act does not reflect the testator's volition, a court will be justified in voiding the will. Testamentary freedom is not viewed as unduly compromised since, in such scenarios, the law perceives testamentary intent as having been overtaken by the acts of another.
Undue influence and captation can have a critical impact on a will's outcome, and there is no shortage of case law addressing the topic. Yet, while these doctrines have been the subject of judicial analyses, as well as considerable attention in common law doctrinal analyses of contract law,5 scholars have not paid them extensive heed in connection with wills and successions. Similarly, relatively little has been written on the concept of captation by civil law theorists.6
6. … Quebec, as is the case in many civil law systems around the world. Elsewhere, the reserve requirement ensures that particular family members of the deceased benefit from an inheritance regardless of what the will of that person might provide. This requirement is found in France, the Netherlands, Germany, and Italy, for example.
In this article, we set out to examine the concepts of undue influence and captation, respectively, within the Canadian common law of wills and estates and within the law of successions in Quebec civil law.7 While not purporting to undertake a comprehensive analysis of the case law on this topic, the analysis here is intentionally transsystemic, in that it provides a reflection on a juridical issue within the frames of two different legal orders. We explore how social conceptualizations of human vulnerability operate to affect juridical approaches to this area of law. We conclude that dominant social norms associated with vulnerability-notably, norms that associate ageing and disability with a state of being vulnerable-are reflected in the law's appreciation of when and how undue influence and captation arise. While the application of these norms does not have a decisive impact on juridical outcomes, they nonetheless feature consistently through legal analyses of undue influence and captation, notably in cases where these grounds are invoked to contest wills. This article invites reflection on whether a more refined understanding of vulnerability might facilitate the pursuit of twin goals in the law of wills and successions-which must be "delicately balanced"-namely, upholding testamentary freedom and protecting the interests of those whose volition is compromised by exogenous pressures.8 To be precise, then, this article considers how contemporary
analyses of vulnerability theory are reflected in juridical approaches to undue influence and captation. As the analysis here will demonstrate, conventional associations between vulnerability, on the one hand, and seniority and disability, on the other, are to some extent borne out in the law of undue influence and captation. Yet this is principally on account of the fact that nearly all cases in this area involve challenges to wills made by very old and/or infirm testators. By and large, judges have been vigilant in avoiding synonymies between vulnerability, age and disability, looking instead to factors beyond the testator's physical state to consider whether undue influence or captation were present. What is perhaps most intriguing about this area of law, however, are the cases that do not appear. These are the cases that involve wills made by young and healthy testators, which seem rarely-if ever-to be challenged for undue influence or captation. This suggests, then, an assumption that such testators are autonomous and could never have their intentions captured or directed by the will of another. All of this invites us, as jurists, to consider whether and to what extent the law of wills and successions might shift-especially insofar as the doctrines of undue influence and captation are concerned-were we to embrace a refined understanding of vulnerability that resists basic presumptions. Such an approach would instead appreciate that we are each-regardless of age and ability-potentially vulnerable to the coercion and misuse of power and dependencies.
7. Throughout this article, we will refer to this area of law in both traditions together as "the law of wills and successions."
Following a brief introduction to the law of undue influence and captation (Part I), we examine (in Part II) critical approaches to vulnerability developed by legal theorists who underline the risks of equating vulnerability with two particular social classes: the elderly and persons with disabilities. This discussion exposes this equation as a potential oversimplification that might compromise the fundamental equality and dignity rights of these groups. Moreover, conventional vulnerability theory risks neglecting the harm that might attend others who, while not aged or disabled, face other social or economic oppressions. A more textured approach posits that vulnerability is a shared human trait that cuts transversally across social identities and experiences. Accordingly, susceptibility to undue influence and captation might not always be plain and obvious.
Understanding vulnerability as socially transversal or universal provides a foundation on which to develop a critical analysis of the law's engagement with the doctrines of undue influence and captation. Part III thus tests the following hypothesis: given prevailing social understandings of vulnerability as inherent to disability and ageing, will challenges based on undue influence and captation are most likely to occur, and succeed, when the testator was of old age and/or disabled at the time of the will's execution. To test this presumption, Part III examines representative case law from common law Canada and civil law Quebec. As the discussion will show, courts have taken a nuanced approach to undue influence and captation, resisting simple presumptions about vulnerability and susceptibility to coercion based on age or ability. Largely this stems from the centrality of testamentary freedom as a principle in the law of wills and successions. But it appears also to be rooted in the goal of protecting individual autonomy and dignity. Hence, rather than drawing a direct line between perceived personal vulnerability and the inability to make a valid will, judges in both legal traditions have integrated analyses of other factors to assess the presence of undue influence or captation. Two such factors emerge with notable frequency, one in the common law and one in Quebec civil law. Common law cases show a preoccupation with perceived social force or control exerted at the hands of the defendant beneficiary,9 notably through manoeuvres that isolated the testator or rendered the latter dependent. In the civil law, the key consideration is the presence of deceit or fraud that had a determinative impact on testamentary dispositions.
Overall, an analysis of relevant jurisprudence reveals that conditions presumed as giving rise to vulnerability (old age, disability) will trigger heightened judicial scrutiny of testamentary instruments. But by themselves, these conditions do not determine legal outcomes, nor do they even give rise to a legal presumption of undue influence or captation. In other words, even where a testator was very old or disabled, courts will look for clear evidence of undue influence in fact before coming to a conclusion. As mentioned, these facts will normally pertain to social isolation and dependence in the common law and deceit in the civil law. This means that cases that involve either very old or disabled testators are more likely to yield judicial findings of undue influence or captation, yet at the same time, the vast majority of such claims are raised vis-à-vis elderly or disabled testators. As indicated, wills made by young, healthy people are not challenged for undue influence or captation. A question for juridical consideration, then, is whether we miss cases where this untoward conduct has actually occurred because of the conceptual associations we draw between age, disability, and vulnerability. Were we to perceive vulnerability as socially transversal, we might spot situations of undue influence and captation in varied populations, not just those who are aged or disabled. As such, current approaches reflect a risk of underreach. There is also a risk of overbreadth, albeit seemingly diluted by the fact that courts have been scrupulous in their analyses of undue influence and captation. This risk arises from the possibility that current cases potentially capture and invalidate the wills of older or disabled testators who in fact were capable of resisting outside pressures and influences
on their testamentary decisions.
9. We use the term "defendant beneficiary" throughout this article to refer to the party/parties who are the subject of juridical scrutiny in an evaluation of undue influence or captation. Often, this party will be a prime beneficiary in the impugned testamentary instrument. But this is not always the case; undue influence and captation can be carried out by someone who does not benefit directly from the testamentary dispositions in question.
While judicial analyses have, for the most part, been careful about resisting simple presumptions about age and ability, there remains room for building a fuller understanding of vulnerability into legal approaches to undue influence and captation. Some reflections to this end are offered in the article's conclusion, specifically in the context of relationships between parties who have unequal bargaining power. Before launching into the substance of this article, some preliminary work is needed to situate the reader in connection with the law of undue influence and captation. The discussion that follows takes up that task.
I. UNDUE INFLUENCE AND CAPTATION
In the common law, undue influence is understood as arising in situations where testamentary intent was overridden by the influence of another who stands to benefit-directly or indirectly-from subsequently executed testamentary dispositions. As such, a testator's free will is said to have been vitiated by the power that another individual exercised over the testator in relation to the execution of the latter's will. The equitable doctrine of undue influence was thus established to prevent the "unconscientious use of any special capacity or opportunity that may exist or arise of affecting the donor's will or freedom of judgment."10 At the same time, that doctrine's roots in probate law within the common law predated and were distinct from the equitable doctrine that developed in relation to inter vivos transfers.11 To establish undue influence within testamentary contexts in common law, it is not necessary to demonstrate malfeasance or bad faith by the defendant beneficiary, even though the latter is normally perceived as someone who manipulates or coerces a testator with a view to advancing their own interests. Undue influence has thus been associated with the notion of "moral guilt."12 It has also been likened to duress, in that the testator's actions, taken while under another's control, are not aligned with the testator's own intentions or choice.13 As explored below, cases where undue influence is found have typically involved manipulative and self-serving conduct, even if not reaching the threshold for fraud. Rather, it is enough for the party who challenges a will to show that the nature of the relationship between the testator and the defendant beneficiary was such that it was possible for the latter to "dominate" the former's will, and that such domination did in fact occur.14 While the moral implications of undue influence can raise concerns about justice, the path taken to address this challenge within The Restatement (Second) of Torts (Restatement) strikes this author as ill-advised. There, "interference-with-inheritance" is framed as an actionable civil wrong.15
Not surprisingly, this entanglement of tort law and estates law engendered controversy, since it raised to the level of a private law right a would-be beneficiary's interest in an estate, which in fact remains a simple hope or expectation until the testator's death and the probate of the latter's will. The Restatement has also been critiqued as an attempt to create, within tort law, a "rival," "less structured" alternative to the law of probate and restitution, erroneously deploying torts to "[play] the role of equity" where the regimes meant to address the problems in question yield seemingly unsatisfactory or unjust results.16
The question of the applicable standard and burden of proof in the common law of undue influence in testamentary contexts has generated, over many decades, considerable uncertainty,17 some of which was clarified by the Supreme Court's decision in Vout v Hay.18 There, Justice Sopinka for the Court affirmed that the common law's presumption of a will's validity benefits the party who defends it in the face of a judicial challenge ("the propounder"). Even where there is evidence of "suspicious circumstances" that call a will's validity into question, this presumption persists where such circumstances pertain to potential undue influence.19 In other words, there is no presumption of undue influence in testamentary contexts even where suspicious circumstances of undue influence are present.20
17. Writing in 1938, Cecil A Wright stated: "Although superficially simple, problems involved in litigation concerning the establishment of a deceased person's will against attacks of lack of testamentary capacity, fraud and undue influence, are, in the writer's opinion, second to none in difficulty" ("Wills - Testamentary Capacity - Suspicious Circumstances - Burden of Proof", Case Comment, (1938) 16:5 Can Bar Rev 405 at 406). Sopinka J cited this passage in support of the following statement: "The interrelation of suspicious circumstances, testamentary capacity and undue influence has perplexed both the courts and litigants since the leading case of Barry v Butlin […]" (see Vout v Hay, [1995] 2 SCR 876 at para 16, 125 DLR (4th) 431 [Vout]). Barry v Butlin, [1838] 12 ER 1089, 12 WLUK 52 is an 1838 decision rendered by the Judicial Committee of the Privy Council.
18. See Wright, supra note 17.
In contrast to undue influence in the common law of wills and estates, the notion of captation in civil law successions is anchored to the concept of fraud. Specifically, fraudulent acts that directly prompt testamentary decisions will be deemed to have "captured" the testator's free will. Accordingly, they will invalidate the resultant dispositions. The legal elements of captation set a high bar for the party who advances this claim; it will not be enough for the will's challenger to show that a defendant beneficiary curried favour with the testator
or appealed to the latter's affections.21 Instead, the court will call for evidence of deceit and manipulation that induced the testator into a mistaken belief of fact. Moreover, to establish captation, the evidence must show that such deceit and manipulation had a clear and direct effect on the expressions made in the will. Ultimately, the court will be called upon to assess the circumstances under which the testator executed the will, and the reasonableness of the will in light of the testator's life circumstances and family relationships.22
19. In Vout, supra note 17, the Court identified three types of circumstances that might be deemed "suspicious" in testamentary contexts: those related to the context of the will's formation or preparation, those related to the testator's capacity, and those related to the possible presence of fraud or of undue influence. In the former two contexts (will's formation and capacity), evidence of suspicious circumstances "spends" the presumption favouring the will's propounder and the onus then shifts to them to prove that the will was legitimately executed and/or that testamentary capacity existed. The onus does not, however, shift where undue influence is in question; it rests throughout with the party attacking the will's validity (at paras 16-29). See also John E S Poyser, Capacity and Undue Influence, 2nd ed (Toronto: Thomson Reuters, 2019) at 247-48.
20. In Quebec civil law, the Quebec Court of Appeal has refrained from deciding whether such a presumption exists, leaving us to infer, on the basis of principles of civil procedure, that the burden must be shouldered by the party who challenges the will: De
Although the Civil Code of Québec (CcQ) does not define captation, it integrates a reference to the concept in article 761.23 Here, the legislator makes clear that testamentary gifts to persons delivering care in a health and social services facility-who are not the testator's spouse or any other close relative-are void if made during the time that the testator received services in that setting. Similarly, gifts to foster family members are without effect if made by the testator while living with that family. While no similar doctrine exists in the common law, some jurisdictions have passed legislation comparable to article 761 CcQ,24 which arguably gives effect to presumptions about a testator's lack of autonomy when reliant for their care on persons who are not family members. In this way, these legal provisions are animated by the theme discussed in Part I below. Article 761 applies to a narrow set of circumstances (i.e. testamentary acts made by persons living in permanent care facilities). Cases of alleged captation or undue influence, however, arise in a broader range of settings and, as explored in Part II, more nuanced judicial analyses are brought to bear on testamentary acts in these cases.
23. Art 761(1) CcQ: "A legacy made to the owner, a director or an employee of a health or social services establishment who is neither the spouse nor a close relative of the testator is without effect if it was made while the testator was receiving care or services at the establishment"; art 761(2) CcQ: "A legacy made to a member of a foster family while the testator was residing with that family is also without effect."
Ultimately, undue influence and captation are strongly overlapping juridical concepts developed within separate legal traditions. Their principal distinction lies in the fact that the latter is more explicitly anchored to the notion of fraud. Hence, captation requires evidence of pernicious intent and outcome, whereas the same is not true, at least in theory, of the concept of undue influence in the common law of wills and estates. Yet, even in cases where captation is raised before a Quebec court, judicial evaluations may refer to the language and concept of undue influence.25 Likewise, although malevolent intent is not a required element of undue influence in the common law, the conduct and motives of the party said to have engaged in undue influence are typically impugned by the will's challenger on a moral basis, and common law courts will evaluate these claims accordingly.26
II. VULNERABILITY AS SOCIALLY TRANSVERSAL
Social presumptions about ageing and disability, which find reflection in juridical conceptions of undue influence and captation, drive the hypothesis underlying this article, namely, that will challenges based on undue influence and captation will be most likely to occur, and to succeed, in situations involving old or disabled testators. Conventional perceptions of "vulnerability" drive this hypothesis. Whereas some groups (such as the elderly, the disabled, and children) are presumed vulnerable and in need of external sources of protection, adults who have yet to reach "old age" and who appear to be of sound mind and body are understood to be autonomous. Thus, the latter group is far less susceptible than the former to having their legal undertakings challenged on the basis of undue influence or captation.
Over the last decade, some legal theorists have challenged this binary understanding of personal vulnerability and autonomy, and the conditions typically associated with each state.27 Specifically, the presumption of vulnerability tied to ageing and disability has been understood as compromising the fundamental dignity of persons, essentializing these groups, and failing to account for variances in capacities among the aged and disabled. Conventional presumptions of vulnerability are therefore said to be dehumanizing, especially in the case of persons with disabilities, as their capacities are measured against a perception of the "ideal" or "perfect" human who is understood as physically and mentally robust and unencumbered by dependencies. This notion, however, rests on "false ideas"28 about what personhood constitutes. Those who are not fully autonomous and thus fall short of this benchmark can encounter barriers that can undermine dignity, risk social exclusion, or situate them within "a separate category of human existence."29 As recent vulnerability theorists underline, actual human realities do not reflect social suppositions of vulnerability. In fact, each of us can and will experience vulnerability at different moments in time, and in varying contexts. In this way, vulnerability-similar to disability (that is, reduced abilities in contrast to the norm)-should be understood as "inherent" to the human condition rather than as a "negative characteristic."30 The law's oversight of human interactions and agreements, especially in private law, could thus benefit from an understanding of vulnerability as universal and a "fundamental" human condition.31
While vulnerability might be a shared human characteristic, our experiences of vulnerability are not identical, nor do they overlap in time or space. Vulnerability exists on account of a range of circumstances that may heighten or reduce dependencies. Mackenzie's work on this point is instructive, creating a "taxonomy" that illustrates how vulnerability can result from inherent human traits (e.g. because we are embodied and have social and affective needs), from particular contexts (e.g. economic, geopolitical, or personal situations), or be "pathogenic" (e.g. disability that heightens exposure to abuse by others). These three sources of vulnerability can coexist; they may be permanent or endure for just a short period of time.32
While this socially transversal understanding of vulnerability is appealing for the light that it can shine on the myth of full autonomy that only some people enjoy, it is susceptible to (at least) two principal critiques. The first is that the claim that everyone can be vulnerable risks masking degrees of dependence and susceptibility to abuse or harm. A theory of vulnerability as foundational or universal to the human condition is at risk of being both under- and overinclusive, drawing on scarce resources to advance the interests of those who are more than capable of doing so without public supports. Further, understanding vulnerability as transversal could hinder efforts to concentrate on the needs of those who face the greatest risk of having their interests neglected or subverted. It is therefore necessary to avoid an absolutist, all-or-nothing approach to vulnerability-wherein we are either "constantly dependant [...] or not dependent at all."33 This calls for refined understandings of the factors and conditions that might trigger dependence and give rise to vulnerability, as well as an assessment of whether the law can or should extend measures to protect individuals against the harm that may arise on account of these factors. This point leads to the second challenging aspect of understanding vulnerability as inherent to humanity, which is especially important to jurists. While we might acknowledge that each of us-at least transiently-can be or become vulnerable, it is not clearly desirable or possible for legal actors and doctrines to start from the premise that everyone needs protection and that this protection should come in the form of juridical oversight and intervention. While this approach might be lauded as a challenge to neoliberalism,34 it would also engender compelling critiques about paternalistic interference by state authorities in private decisions and transactions.
Scholars engaging with the concept of vulnerability have recognized these challenges to its universalist framings. Much of this work expresses concern over the potential for claims about shared dependencies to transform into justifications for state or private action that would undermine individual autonomy and dignity.35
Thus, some have argued for a more tailored approach that would aim to build resilience as a way to boost individual capacities and resist exploitation and harm. In other words, rather than presuming that a public approach to vulnerability requires intervention in private ordering, an understanding of vulnerability becomes reconcilable with an effort to respect individual dignity and autonomy by focusing on proactive measures that build personal resilience and capacity to withstand opportunism or exploitation by others. In this vein, Mackenzie, Rogers & Dodds have sought to bridge the notion of relational autonomy-recognizing that autonomy can be achieved because of healthy relationships, rather than instead of them-with efforts that privilege the dignity of the individual.36 Thus, rather than justifying "unwarranted paternalistic interventions"37 that override expressions of intent and free will, a more nuanced juridical engagement with the notion of vulnerability could instead strive to deepen our individual capacities to spot and resist situations of vulnerability. As will be explored below, this should not translate into an individual's burden; the responsibility for spotting and resisting vulnerabilities can and should be shared. Where vulnerability is claimed as a basis for intervening in private decision-making or ordering, juridical actors would be called to evaluate whether the evidence presented reflects harm, unmet needs, and/or exploitation rather than drawing on simple presumptions tied to age and ability. In this way, the law's focus would be on building and recognizing resilience and abilities, rather than defaulting to presumptions that too often leave the elderly and persons with disabilities with "a troubling sense of powerlessness, loss of control, or loss of agency."38 Such an approach aligns with Mattson and Katzin's analysis of vulnerability among the elderly, who-in pointing out how ideals of autonomy offset the burden of care responsibilities to private actors-remind us that recognition of shared vulnerability "cultivates the virtues of compassion and cooperation" whereas "full autonomy and self-sufficiency are neither attainable nor desirable."39
How might these recently advanced theories of vulnerability inform analyses of undue influence and captation in the law of wills and successions? Some insight can be drawn from Herring's analysis of vulnerability in the law of contracts.40 Herring skilfully demonstrates how contract law already acknowledges, at least to some extent, our shared susceptibility to vulnerability. He advances this claim through examples of contractual doctrines centred on the protection of the ostensibly weaker bargaining party (e.g. non est factum, duress, and undue influence). At the same time, Herring stresses that these doctrines are exceptions to the general principle of freedom of contract, which itself is "premised on the ideal of the self-sufficient, informed, autonomous businessman who should be free to make his business deals for himself."41 Herring challenges this benchmark that we use to evaluate a contract's validity, noting that "real people, not the people in contract law's imagination, are sentimental and not entirely driven by rationale."42 He thus calls for an alternate approach to contract law that would integrate a duty for parties to recognize and account for each other's vulnerabilities rather than seeking exclusively to promote ideas tied to free will, market, and exchange.43
Like contracts, wills are private juridical acts rooted in an ideal of individual freedom. But, unlike contracts, wills are unilateral instruments and thus depend on the decisions of solely one person, sometimes recorded without anyone else present or having knowledge of the act until long after its execution.44 Herring's analysis, which establishes how vulnerability can and should be taken into account in a more robust way within private ordering, can be extended from contract law to the law of wills and successions. That said, the application of this principle will necessarily take a different form in the law of wills and successions, since-as there is no co-contracting party-legal approaches in this realm cannot depend on expecting that "[parties] look out for the vulnerabilities each other have and share."45
The question of who, in Herring's words, can or should "look out" for a testator's interests will be picked up further in the conclusion to this article. For the moment, though, it is important to consider how we might expect the law of wills and successions to engage with undue influence and captation based on the foregoing discussion about vulnerability.
Despite the critiques of vulnerability set out in more recent literature, we might assume that the law's understanding of this concept remains based on differentiation rather than universality. We would thus expect that courts would be more inclined to make presumptions about an elderly or disabled testator's dependencies and susceptibility to external pressures. This being said, because of the primacy that is placed on the principle of freedom of testation-in a manner that is closely aligned with Herring's characterization of the orthodox approach to freedom of contract-it is also predictable that marshalling evidence of undue influence or captation would be a formidable endeavour for a party challenging a will on this ground. One would reasonably anticipate that, in most cases, wills are likely to be left untouched, without interference by a judge. Hence, we might reasonably expect that-because of conventional ideas about vulnerability-claims for the judicial override of freedom of testation (especially successful claims) are most likely where testators are old and/or disabled, and thus presumed to be dependent and vulnerable. The discussion that ensues reflects the extent to which this expectation bears out in case law developed in the law of wills and successions in common law Canada and Quebec civil law.
44. Many provinces and territories recognize the holograph will, that is, a will that the testator handwrites and signs, without witnesses and without registering it in any public office. See e.g. Succession Law Reform Act, RSO 1990, c S 26, s 6 (Ontario); Wills and Successions Act, SA 2010, c W-12.2, s 16 (Alberta); art 726 CcQ. In British Columbia, however, a handwritten will is invalid unless attested by witnesses (see Wills, Estates and Succession Act, SBC 2009, c 13, s 37).
III. VULNERABILITY AND TESTAMENTARY VOLITION
Considering the foregoing discussion, it is not surprising that the case law on undue influence and captation in testamentary contexts involves testators cast as vulnerable, at least according to dominant social norms. The jurisprudence is consistent in its focus on the testamentary dispositions of persons who are old and/or disabled. Rare are the cases that involve young testators; where they occur, the testator was a person in a weakened state of health at the time of will-making.46
This suggests, then, that a testator's social entourage-family or friends-is unlikely to claim undue influence or captation when that testator was young and of sound mind and body, further reflecting presumptions of who is or is not capable of will-making. Conventional vulnerability theory thus underpins judicial approaches to undue influence and captation; in each case on point, judicial reasoning will turn on an assessment of whether the testator was in fact "vulnerable" at the time of making a will. 47 Just the same, the ostensible presence of a state of vulnerability on the testator's part will not by itself ground a conclusion of undue influence or captation. Rather, courts generally have shown "solicitude" in their analyses, 48 upholding testamentary autonomy unless and until there is compelling evidence demonstrating that the will reflected the intentions and wishes of someone other than the testator. Hence, while a connection is made in this jurisprudence between old age and disability, on the one hand, and the conception of vulnerability, on the other, courts have remained scrupulous in their analyses to determine whether, on the facts, undue influence or captation was present.

46. See e.g. Lamontagne v Lamontagne, 1996 CarswellSask 658 at para 42, 67 ACWS (3d) 417 (SK QB), involving a will made by a testator at age 18. While the court did not conclude undue influence was exerted by his brother, the latter was found to be "in a position to attempt to exert undue influence" given his role as a caregiver to his brother, who was a quadriplegic.

47. Although vulnerability is, today, predominantly a social rather than a legal concept and thus not defined in juridical sources, the concept is central to judicial decision-making throughout the jurisprudence related to undue influence and captation in testamentary contexts. See e.g. in Canadian common law, Ross-Scott v Potvin, 2014 BCSC 435 ("The question is whether Mr Groves was a vulnerable person and in a state of incompetence and unable to resist pressure improperly directed on them by the other spouse" at para 229), and Maronda v Colliton, 2010 ABQB 354 ("I also accept that Mrs Colliton was a vulnerable woman. She was elderly, had recently undergone fairly major surgery, was on medications and was far more dependent than she was used to being" at para 79). In Quebec civil law, see e.g. Commission des droits de la personne (Succession de Poirier) c Bradette Gauthier, 2010 QCTDP 10 ("[L]e testament est préparé dans un contexte où monsieur Poirier est dans un état de vulnérabilité, de dépendance, d'isolement. Il est sous l'emprise des Défendeurs, dans une situation de mise à profit de leur part à son détriment" at para 81), and Filion c Desmarais, 2015 QCCS 338 ("Certains témoignages portent à conclure que Rock Filion, diminué physiquement, s'est isolé et s'est retrouvé en situation de vulnérabilité qui a pu affecter ses choix testamentaires" at para 63).

48. See Morin, "Libéralités et personnes âgées", supra note 8 at 154.
A review of judicial analyses reveals that two particular factors shape judicial outcomes. The first is whether the defendant beneficiary created circumstances giving rise to the testator's social isolation and dependence. The second is whether the defendant beneficiary engaged in deceit to mislead the testator in a manner that affected testamentary decisions. Whereas the common law has focused on the first of these factors (social isolation and dependence), the second factor (deceit) is central to Quebec civil law cases evaluating the presence of captation. The discussion that ensues examines each of these factors in turn.

A. Social Isolation and Dependence

Will challenges on the basis of undue influence are most likely to convince a court where the evidence demonstrates that the testator, at the time of will-making, had few or no social connections beyond the defendant beneficiary. In most such cases, the testator's isolation and reliance on that beneficiary would have occurred by the latter's design.

A case widely cited in the Canadian law of undue influence in the context of wills and estates, Banton, 49 offers a prime example. Here, Justice Cullity found that the testator had fallen subject to the "overwhelming and irresistible" influence of his new wife, who was fifty-seven years his junior. 50 Critical to this conclusion was the judge's finding of measures the defendant beneficiary took to sever the testator's relationships with his children. Justice Cullity found that the testator's "contact with his family virtually ceased" once he began cohabiting with his new spouse, who intercepted calls and visits from his children. 51 The result was to render the testator "a mere puppet" in guardianship proceedings, "easy prey" for a woman whom the court cast as having "designs on his property." 52 In these circumstances, undue influence was found to have overtaken testamentary intent and the will was consequently set aside.

The facts of Tribe are comparable. 53 The testator made a will that benefitted his live-in caregiver, who was four decades his junior. In contrast to Banton, the couple in Tribe did not have a conjugal relationship, but the evidence indicated that they regularly expressed their affection or love for one another. The testator's son successfully contested the will on the basis of undue influence, showing that, although he maintained some connection with his father until the latter's death, the testator was nonetheless "socially isolated." 54 Justice Cohen further found that the defendant beneficiary had operated to render the testator dependent on her, enabling her "domination" over his intentions and financial affairs. This, in turn, drove wealth transfers that the testator had made both inter vivos and through his will, which were to the advantage of the defendant beneficiary. 55

Akin to Banton and Tribe, Re Kozak Estate 56 provides a third example of a court in common law Canada overriding a will following the interventions of a younger woman affecting the estate planning of an older, infirm testator. In Kozak, the court, finding the testator to be "unhealthy" and "naïve" at the time he made the impugned testamentary dispositions, concluded that the defendant beneficiary had duped him into believing that she would one day marry him. 57

49. Banton, supra note 26.
50. Ibid at para 124.
51. Ibid at para 89.
She had further intervened to limit the testator's contact with his family. For Justice Renke, these efforts at social control and isolation reflect "a hallmark of undue influence." 58 Having so concluded, the will was voided.

Not all cases wherein a court concludes that undue influence overrode testamentary intent involve younger women beneficiaries and male testators decades their senior. 59 Re Morash Estate 60 involved a reversal of gender roles, calling upon a court to determine the validity of a will that benefitted the testator's husband and his extended family while disinheriting her only child. Finding that the defendant "dominated" decisions regarding the testator's health and legal affairs during her lifetime such that the latter was fully reliant on him, Justice Hall denied probate. 61 Similarly, the court in Marsh Estate 62 examined the validity of a will made by a senior testatrix benefitting the defendant beneficiary, who had looked after her business affairs. Here, a factual finding was made that the defendant beneficiary's communications to the testator "implicitly, if not expressly, threatened to withdraw his assistance from [the testator] if the Will was not changed" to his advantage. 63 Such communications were deemed to constitute undue influence, particularly given the testator's "minimal contact with other support systems." 64 As such, Marsh further underscores the centrality of social isolation and dependence to judicial analyses.

52. Ibid at para 124. Vout, supra note 17, where the judicial finding of the testator's independence despite his senior age (81 years old) in comparison with the defendant beneficiary, a 29-year-old woman who lived with and carried out chores for them, led to the conclusion that undue influence was not present: "Clarence Hay, on the evidence, was not a befuddled, senile old man whose mind had been captured by Sandra Vout and who, like the testator in Eady v Waring […], was physically and emotionally controlled and isolated by those persons who stood to benefit. In fact, the reverse is true. Clarence Hay was self-reliant and independent, was not easily influenced, lived alone and visited all members of the Hay family regularly, and he was all these things both before and for three years following the execution of the Will" (at para 30). This is the decision of the trial judge that was cited to by Sopinka J for the Supreme Court of Canada, which upheld the trial decision.
53.
While circumstances of social dependence can arise in a number of contexts, they are most likely to occur in the context of relationships marked by a power differential. The Supreme Court of Canada in Geffen thus determined that analyses of undue influence-whether in testamentary or other legal contexts-must always begin by examining the parties' relationship, inquiring "whether the potential for domination inheres in the nature of the relationship itself." 65 This would include relationships that the law characterizes as premised on trust (e.g. parent-child, solicitor-client), as well as those that reveal dependencies and power imbalances pursuant to specific fact-based analyses. While the latter types of relationships "defy easy categorization," 66 the potential for exploiting the weaker party tracks that which exists in formal fiduciary contexts. Although the Court in Geffen suggested that the presence of a relationship of dependence might trigger a presumption of undue influence, it later affirmed in Vout v Hay 67 that such presumption does not arise in testamentary contexts. 68

But even if a state of isolation or dependence does not give rise to a legal presumption of undue influence, it will-when coupled with a state of vulnerability-shape judicial outcomes. This means that circumstances of dependence and vulnerability-particularly in relation to our social environments and relationships-matter more to our ability to make autonomous decisions than our age or physical abilities. This recognition is in line with Justice Wilson's acknowledgement in Geffen that each of us has the potential to be rendered relationally dependent. 69 This line of thought coheres with the argument of recent vulnerability scholars, described above, which positions vulnerability as transversal and inherent to the human condition. Such an understanding of vulnerability, particularly in its intersection with testamentary contexts, calls for a more refined understanding of when and how undue influence might occur. This point is explored later in this discussion, following the analysis of Quebec civil law jurisprudence, to which the discussion now turns.

58. Ibid at para 179.
59. Some legal scholars have written about this phenomenon of relatively young women manipulating, for personal gain, their spouses' testamentary dispositions under the banner of "predatory" relationships or marriages. While beyond the scope of this paper, the gendered implications of such analyses merit critical scrutiny, given their potential to propagate stereotypes about the frailty and vulnerability of seniors, especially senior men, and of younger women using sex as power to serve their own ends. See Dorota Miller, "Elder Exploitation Through Predatory Marriage" (2012) 28:1 Can J Fam L 11; Albert H Oosterhoff, "Predatory Marriages" (2013).
65. Geffen, supra note 14 at 378.
66. Ibid at 378-79 (per Wilson J).
67. Vout, supra note 17.
68. See discussion in Karpinski v Zookewich Estate, 2018 SKCA 56 at paras 28-32.
69. See Geffen, supra note 14 at 377.

B. Deceit

Whereas common law jurisprudence on undue influence has been premised on examinations of whether a testator was isolated or dependent, analyses within Quebec civil law in relation to captation have concentrated on whether a defendant beneficiary's manipulative efforts induced testamentary decision-making. So, while the common law asks whether a defendant beneficiary created circumstances
leading to the testator's isolation or dependence, Quebec civil law is most interested in whether the defendant beneficiary duped or deceived the testator to advance their own interests. Doctrine and jurisprudence in the civil law hence focus the captation inquiry on whether the evidence can establish fraudulent conduct that had a determinative impact on testamentary outcomes. This is aptly summed up by Justice Roy in her decision in Gatti c Barbosa Rodrigues:

Il n'est pas contraire à la loi, en soi, de s'attirer les faveurs d'un testateur. La captation n'entraîne la nullité d'un testament qu'en présence de fraude ou de manoeuvres dolosives. Le Tribunal doit être convaincu de l'existence d'un dol et que ce dol a été déterminant sur la volonté du testateur exprimée dans le testament attaqué. 70

While Quebec civil law pivots on fraud, an analysis of doctrine and jurisprudence indicates that the circumstances which lend themselves to finding captation will often not differ significantly from those that common law courts have assessed when adjudicating undue influence claims. Acts by which the defendant beneficiary renders a testator isolated or socially dependent might be considered fraud leading to captation. 71 Thus, intercepting contact and communication between a testator and family members could amount to captation before a Quebec court. 72 But this will only be the case where that court finds that the conduct was grounded in deception; "schemes," "diversions," or "lies" that actively seek to mislead a testator with a view to influencing testamentary outcomes are a sine qua non of captation in Quebec civil law. 73

As in the common law, the civil law does not ascribe to a simple correlation between old age and disability, on the one hand, and a presumption or conclusion of captation, on the other. 74 Rather, concrete evidence that a testator's volition was "captured" is necessary before a court will be prepared to annul a testamentary act. Just the same, Quebec civil law reflects the view that old age and disability can reduce a testator's ability to resist fraudulent tactics aimed at effecting testamentary outcomes. 75

70. Gatti c Barbosa Rodrigues, 2011 QCCS 6734 at para 186.
71. See e.g. Cadieux, supra note 25. See also Beaulne, Morin & Brière, supra note 4 at 245-46.
72. See Germain Brière, Droit des successions, 3rd ed (Montréal: Wilson & Lafleur, 2002) at 182. This is one example of a "pratique artificieuse" (deceitful practice) that crosses the line between efforts that seek to win a testator's affection or favour and those that constitute a fraudulent act that vitiates free intent.
73. See Lago c Lachaîne, 1996 CanLII 12033 at para 87, [1997] RL 136 (where the Court stated: "La captation est une forme particulière de dol, soit un vice de consentement qui résulte de mensonges, de manoeuvres frauduleuses ou de manigances de la part de bénéficiaires potentiels, utilisant des pressions, des détournements ou de l'influence indue sur le testateur.") See also Hogue (Succession de) c Sigouin, 2014 QCCQ 2936 at para 87 (where Landry J stated: "la 'captation' est synonyme de ruse, de tromperie, de mensonge et/ou de manoeuvres dolosive.")
Specifically, where a testator is frail or elderly, they can be more readily cast as impressionable and pliable. Thus, the will of a sick and elderly testator favouring her "aviseur spirituel"-whose relation to the testator the Court described "comme une cire molle entre les mains d'un homme énergique parlant au nom de la religion"-was invalidated as the product of "artifices" and fraud. 76 This reasoning echoes the Supreme Court's leading decision on captation. In Stoneham, 77 Justice Beetz affirmed the judgment at first instance holding that the defendant beneficiaries engaged in what, in the English translation of the reasons for judgment, was called "undue influence," 78 notably by pressuring a frail testator into making a new will on the premise of deliberate misrepresentations. The Court further affirmed that, when considered individually, a testator's ill-health does not typically suffice to invalidate a will. However, referring to Mignault, 79 Justice Beetz underscored the relevance of deceit to the captation analysis and concluded that, when applied to a person in ill-health, deceit that drives testamentary decisions amounts to captation.

Much more recently, 80 Justice Bachand reasoned that deceit at the hands of a testator's daughters captured the will-maker's intentions, resulting in bequests produced by fraud rather than the testator's own volition. While the court did not refer expressly to manipulation or control by the defendant beneficiaries, these elements were at the core of the decision, which concluded that "deceitful actions" had a direct impact on the testator's decisions. 81 Even more pertinent to this article's thesis, Justice Bachand's reasoning illuminates how disability and old age enhance vulnerability. While this state will not, by itself, yield a finding of captation, it can render a testator "not as impervious to undue influence." 82

75. See Châteauguay Perrault, Les mélanges Bernard Bissonnette (Montréal: University of Montréal, 1963) ("L'âge, l'état de santé, la condition sociale du testateur pourront avoir joué un rôle quant au degré de résistance qu'il pouvait opposer aux manoeuvres dont il était l'objet" at 458-59), as cited with approval in Laroque c Gagnon, supra note 20 at para 97 (Kasirer JA, as he then was, for the Court).
76. Barbeau c Feuiltault, [1908] 17 BR 337 at paras 45-54.
77. Stoneham, supra note 22.
78. Although a case arising from within Quebec civil law, the claim at issue is framed as "undue influence." While the term is used in the English version of the judgment, Beetz J rightly insisted that "undue influence" as understood in this case was not the same as the concept developed at common law: "The Court was referred by both sides to a large number of English decisions or decisions in cases from other provinces, for the reason that the unfettered freedom to devise or bequeath one's property by will comes from English law, and that there are analogies between the concept of undue influence in English law and undue influence (captation) in the civil law. The case at bar does not concern the unfettered freedom to devise any more than it concerns a will in the form derived from the laws of England. Moreover, undue influence applies to gifts inter vivos as it does to wills, and gifts are purely a matter for the civil law. In such circumstances, I not only hesitate to use decisions from other provinces in a civil law matter, I am not in any way bound by a decision of this Court which was cited by counsel for the respondent" (ibid at 204).
Although social isolation is a characteristic that appears to be less central to analyses of captation than it is to common law courts' evaluations of testamentary undue influence, its presence can contribute to a finding of captation. For example, in Lafortune c Bourque, 83 the court found that a defendant beneficiary's efforts to isolate a testator from her family-notably through disparaging the latter in order to alienate the testator-were a reflection of "pure cynicism" and "much bad faith." Similarly, in a case involving a claim based on article 48 of the Quebec Charter, 85 the Court accepted a will challenge in light of evidence of a defendant beneficiary's efforts to bar a testator's contact with his family, which in turn affected his testamentary decisions. 86 Likewise, in Rioux c Babineau, the court distinguished simple and fraudulent captation, finding that only the latter would invalidate a will. In that case, the court found fraudulent captation where a defendant beneficiary isolated the testator and hid his affairs from other family members. 87

The foregoing discussion illuminates how, as in the common law, cases in Quebec civil law in which a testator may be viewed as vulnerable on account of age or disability will not necessarily yield conclusions of captation. However, where testators in such circumstances are subject to fraudulent manoeuvres by a defendant beneficiary, their testamentary acts will be annulled. In this way, much like the jurisprudence that Canadian common law courts have developed, case law in Quebec reflects conventional social interpretations of vulnerability, as testators whose wills are challenged for captation are those who are aged or disabled. Recalling the critical approach to vulnerability theory developed in Part I, we might ask how judicial outcomes might differ if law understood vulnerability as socially transversal rather than as restricted to particular groups. Some thoughts on this question are offered in the conclusion that ensues.
CONCLUSION

This paper explores how social conceptions of vulnerability find reflection within juridical approaches to undue influence and captation in testamentary contexts. Critical interrogations demonstrate the pertinence of understanding that we all-at one point or another-are vulnerable. This universalist approach calls for jurists to move away from preconceptions that some social groups are especially vulnerable, meriting our exclusive focus in relation to the question of who deserves particular vigilance and support. Care must be taken to protect those who face risks of harm while avoiding encroachments on their autonomy and dignity. At the same time, effort is needed to identify circumstances of vulnerability that might not be obvious, that is, in cases where the vulnerable person or group might appear to be a member of an "empowered" social group.

A review of the case law developed by courts in Canada and Quebec demonstrates how social understandings of vulnerability factor into judicial analyses of testamentary challenges based on claims of undue influence or captation. Jurisprudence in both legal traditions reflects social expectations and norms about vulnerability, in that wills challenged on these bases are nearly always made by very old or infirm testators. When faced with facts demonstrating that a will was executed by an ostensibly vulnerable testator, courts will engage in careful analyses to discern the presence of undue influence or captation, and-centring the principle of testamentary freedom-will resist simple presumptions about that testator's susceptibility to the pressures of a self-serving defendant beneficiary. See Commission des droits de la personne (Succession de Poirier) c Bradette Gauthier.

Consider, though, what such cases might look like if the law drew on a more textured understanding of vulnerability in the face of claims that a will is marred by undue influence or captation. Such an approach would be premised on an understanding that every testator, regardless of their age or abilities, will, at some point or another, experience vulnerability and dependence. This should not prompt a presumption that everyone who makes a will is at risk of having their intentions overtaken by the acts of another. Rather, the analysis would call for all actors-both juridical (lawyers, notaries, judges) and social (those who challenge or defend wills)-to resist presumptions about who is or is not vulnerable, focusing instead on facts and evidence revealing whether an impugned will was driven by exogenous pressures amounting to coercion or fraud. In many cases, age and ability will remain relevant, but not always. A testator who presents as wholly self-sufficient may lean heavily, financially, socially, or emotionally, on a third party, resulting in "relational" rather than personal vulnerability. 88 Within families, particularly spousal relationships, the idea that competent persons of full age have equal bargaining power has long been critiqued. 89 We can imagine, therefore, that pressure may be undue or result in deceit in such contexts. Again, the point is not to dilute the law's commitment to testamentary freedom and autonomy, but instead to consider how presumptions about vulnerability may
lead to heightened scrutiny in a way that might compromise the autonomy of some testators, to the legal and social neglect of situations where undue influence or captation might have occurred even though the players concerned were young and abled. Hence, where there is evidence that one person shaped the intentions that ultimately find expression in a testamentary instrument (for example, by being the principal point of contact with a lawyer or notary who drafted the instrument), undue influence or captation can arise, regardless of the testator's age or state of ability.

Aside from thinking about litigation involving will challenges on the basis of undue influence or captation, it is also worth thinking about how law and policy can strive for greater sophistication in their engagement with vulnerability. Here, it is opportune to recall Herring's discussion of vulnerability in contracts, and his proposal for a legal approach that acknowledges universal dependencies and weaknesses, while incorporating a duty on parties to "look out" for one another's shared and individual vulnerabilities. 90 Since a will involves just one legal party (the testator), who could be charged with "looking out" for their interests? Aside from judges called to intervene after the testator's death, the obvious contenders are parties who have a hand in drafting a will, notably lawyers and notaries. 91 While the extension of a legal duty to witnesses to a will's execution would seem inopportune, witnesses are often called upon in probate or homologation proceedings. 92 An enumeration of the precise duties that could be warranted in this context (such as interrogating the testator's actual relationships with named beneficiaries) lies beyond the scope of the present paper, yet some helpful discussion has emerged on the responsibilities of legal professionals in regard to vigilance against undue influence. 93 Some authors have also astutely noted that jurists called upon to oversee the execution of testamentary instruments can play a key role in assessing whether a testator's intentions are being driven by undue influence. In this regard, the latter's vulnerability is not to be presumed, but rather might be indicated by factors such as "age, infirmity, disability, language barriers, or involvement in abusive relationships." 94 This recognition of a broadened array of social conditions that might induce vulnerability to juridical acts driven by pressure rather than by free choice is important to the thesis of this article, namely, that undue influence and captation can occur in a range of circumstances that include, but are not relegated to, those involving old age and/or infirmity.

90. See Hall, supra note 88.
91. Admittedly, this analysis could not extend to cases of holograph wills, where lawyers and witnesses are not present. In those cases, the only possibility for "looking out for" the testator can occur after death, when the will is opened and circumstances are analysed to discern whether the instrument veritably represents testamentary intent.
92.
As the law in this domain advances, a commitment to integrating an understanding of vulnerability as socially transversal promises a richer analysis of undue influence and captation in testamentary contexts. Notably, it calls upon jurists to recognize that even those ordinarily presumed as vulnerable merit legal analyses that presume and further autonomy, 95 while awakening us to the possibility that even seemingly autonomous testators might succumb to strong external pressures. In this way, a more textured approach to vulnerability should facilitate juridical efforts to achieve the "delicate balance" of preserving testamentary freedom while also protecting all those whose interests and volition have been overtaken by undue influence or captation.

10. Matthew Tyson, "An Analysis of the Differences Between the Doctrine of Undue Influence with Respect to Testamentary and Inter Vivos Dispositions" (1997) 5 Aust Prop 38 at 43.
11. See ibid at 55 ("The probate doctrine of undue influence was not a creature of equity and good conscience, but was instead the progeny of the ecclesiastical courts.")
12. See Louise M Mimnagh, "Probate Actions and 'Suspicious Circumstances': A Third Standard of Proof for Allegations Involving Moral Guilt" (2014) 19 Appeal 95.
13. See Tyson, supra note 10 at 70-71. For a thorough discussion of the origin and historic and contemporary jurisdiction of probate courts, see Albert H Oosterhoff, "The Discrete Functions of Courts of Probate and Construction" (2017) 46 Adv Q 316.
14. Geffen v Goodman Estate, [1991] 2 SCR 353 at 377-78, 81 DLR (4th) 211 [Geffen].
15. Restatement (Second) of Torts § 774B (1979): "One who by fraud, duress or other tortious means intentionally prevents another from receiving from a third person an inheritance or gift that he would otherwise have received is subject to a liability to the other for the loss of the inheritance or gift."
16. John C P Goldberg & Robert H Sitkoff, "Torts and Estates: Remedying Wrongful Interference with Inheritance" (2013) 65:2 Stan L Rev 335 at 365, 392.
Art 761 is a specific reflection of the fundamental right that the Quebec Charter of Human Rights and Freedoms, CQLR c C-12, s 48 [Quebec Charter] extends to the elderly and to disabled persons: "Every aged person and every handicapped person has a right to protection against any form of exploitation. Such a person also has a right to the protection and security that must be provided to him by his family or the persons acting in their stead."
24. See also Robert Barton, Lisa M Lukaszewski & Stacie T Lau, "Gifts to Caretakers: Acts of Gratitude or Disguised Malfeasance? New Statutes May Decide for Us" (2015) 29:3 Probate & Property 1 (citing similar legislation in California, Maine, Nevada, and Illinois at 2-3); Bernard v Foley, [2006] 39 Cal 4th 794.
Anisotropic Optical Response of Dense Quark Matter under Rotation: Compact Stars as Cosmic Polarizers

Quantum vortices in the color-flavor locked (CFL) phase of QCD have bosonic degrees of freedom, called the orientational zero modes, localized on them. We show that the orientational zero modes are electromagnetically charged. As a result, a vortex in the CFL phase nontrivially interacts with photons. We show that a lattice of vortices acts as a polarizer of photons with wavelengths larger than some critical length.

Introduction.-The strong interaction, which is one of the fundamental forces in nature, is fully described by quantum chromodynamics (QCD). QCD matter shows a rich variety of phenomena at finite temperatures and/or baryon densities [1], and the determination of the phase diagram has been a topic of considerable interest in high-energy physics. Quark matter is expected to exhibit color superconductivity, triggered by quark-quark pairings, at high baryon densities and low temperatures [2,3]. It has been reported in Ref. [2] that the ground state is the color-flavor locked (CFL) phase at very high densities, in which the three light flavors (up, down and strange) of quarks contribute to the pairing symmetrically. The CFL matter is both a superfluid and a color superconductor because of the spontaneous breaking of the global U(1)_B baryon number symmetry and the local SU(3)_C color symmetry, respectively. It is expected to exist in the cores of dense stars, although observational evidence has been elusive. The purpose of this Letter is to propose a possible observational signal of the CFL matter.

The key ingredients are the topological vortices. These vortices are created under rotation owing to the superfluidity of the CFL matter [4,5]. If the CFL phase is realized in the cores of dense stars, the creation of vortices is inevitable since the stars rotate rapidly. The superfluid vortices discussed in Refs. [4,5] were found to be dynamically unstable, decaying into sets of constituent vortices [6]. The stable ones are the so-called non-Abelian vortices, which are superfluid vortices as well as color magnetic flux tubes [7]. Their properties have been studied using the Ginzburg-Landau theory [6,8-13] or the Bogoliubov-de Gennes equation [14]. Interestingly, there are fermionic and bosonic degrees of freedom localized on a vortex. Non-Abelian vortices are endowed with a novel kind of non-Abelian statistics because of the multiple fermion zero modes trapped inside them [15]. On the other hand, the bosonic degrees of freedom are called the orientational zero modes [6,11,13], which are the Nambu-Goldstone bosons associated with the symmetry breaking inside vortices.

In this Letter, we investigate the electromagnetic properties of non-Abelian vortices in the CFL phase. Although the CFL matter itself is electromagnetically neutral, the orientational zero modes are naturally charged, as is discussed later. The electromagnetic property of vortices can be phenomenologically important as it may lead to some observable effects. As an illustration of such an effect, we show that a lattice of vortices in the CFL phase acts as a polarizer of photons. The rotating CFL matter should be threaded with quantum vortices along the axis of rotation, which results in the formation of a vortex lattice [6,9,10], as in a rotating superfluid. In the present analysis, we neglect the mixing of photons and gluons.
The gauge field A′_µ, which remains massless in the CFL phase, is a mixture of the photon A_µ and a part of the gluon field, A′_µ = A_µ sin ζ + A⁸_µ cos ζ. Here, the mixing angle ζ is determined by the ratio of g and e, the strong and electromagnetic coupling constants. At accessible densities (µ ∼ 1 GeV), the fraction of the photon is given by sin ζ ∼ 0.999, and so the massless field A′_µ consists mostly of the ordinary photon and includes a small amount of the gluon. As a first approximation, we neglect the mixing of the gluon into the massless field.

Orientational zero modes.-The color superconductivity is brought about by the condensation of diquarks. At very high densities, the ground state is believed to be the CFL phase, which is characterized by the spinless and positive parity condensates of the form

⟨(q_i^a)ᵀ C γ₅ q_j^b⟩ ∝ ∆ ε^{abk} ε_{ijk},

where q is the quark field, i, j, k = u, d, s (a, b, c = r, g, b) are the flavor (color) indices, C is the charge conjugation matrix, ∆ is a BCS gap function, and the transpose is employed with respect to the spinor index. The symmetry breaking pattern is, apart from discrete symmetry,

SU(3)_C × SU(3)_F × U(1)_B → SU(3)_{C+F},

where SU(3)_{C+F} denotes simultaneous color-flavor locked rotations. In the core of a vortex this symmetry is further broken, and there appear Nambu-Goldstone (NG) modes confined in the core of the vortex, which parametrize the coset space known as the two-dimensional complex projective space,

CP² ≃ SU(3)_{C+F} / [SU(2) × U(1)].

There exist classically degenerate vortex solutions, characterized by the value of the CP² orientational moduli. We denote the NG modes by a complex three-component vector φ ∈ CP², which satisfies φ†φ = 1. When we neglect the electromagnetic interaction, the low-energy effective theory on a vortex placed along the z axis is shown to be described by a CP² nonlinear sigma model in which the orientational moduli φ are promoted to dynamical fields and C and K_α are numerical constants [11]. Under the color-flavor locked transformation, the CP² fields φ transform as φ → Uφ, with U ∈ SU(3)_{C+F}.

Now, let us consider the electromagnetic fields. The electromagnetic U(1)_EM group is a subgroup of the flavor group SU(3)_F, which is generated by T₈ = (1/√6) diag(−2, 1, 1) in our choice of basis. The electromagnetic interaction is incorporated by gauging the corresponding symmetry. Therefore, the low-energy effective action on the vortex should be modified to the gauged CP² model, in which the derivatives of φ are replaced by covariant derivatives built from the gauge field and the generator T₈.

Photon-vortex scattering.-Here, we investigate the consequence of the charged degrees of freedom on the vortex. The low-energy behavior is described by photons propagating in three-dimensional space and the CP² model localized on the vortex; the effective action is the sum of the Maxwell action and the gauged CP² action on the vortex.

Let us consider the scattering of photons by a vortex. The equation of motion of the gauge fields derived from the effective action takes the form of a Maxwell equation with a source term localized on the vortex and proportional to δ(x⊥) ≡ δ(x)δ(y), the transverse delta function. We consider the situation where a linearly polarized photon is normally incident on the vortex and assume that the electric field of the photon is parallel to the vortex. Then, the problem is z-independent and we can set θ = θ(t), A_t = A_x = A_y = 0, and A_z = A_z(t, x, y). Solving the resulting coupled equations yields the scattering cross section per unit length of the vortex, dσ/dz, which depends on the wavelength λ of the incident photon, on a numerical factor η of order unity, and on the orientation of the vortex through a function f(φ). For a lattice with n_v vortices per unit area and inter-vortex spacing ℓ, so that n_v = 1/ℓ², the attenuation of a beam is governed by the length L ≡ ℓ²/⟨dσ/dz⟩. As the cross section depends on the internal state (the value of φ) of each vortex, we have introduced the averaged scattering cross section ⟨dσ/dz⟩ over the ensemble of the vortices.
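The passage from the cross section of a single vortex to an attenuation length for the whole lattice follows a mean-free-path argument, which the next paragraph makes explicit. The following is a minimal sketch, assuming only dilute, independent scatterers (the same assumption underlying the averaged cross section above):

% beam of intensity I crossing a slab of thickness dx of the vortex lattice
\mathrm{d}I = -\, I \, n_v \left\langle \frac{\mathrm{d}\sigma}{\mathrm{d}z} \right\rangle \mathrm{d}x
\quad \Longrightarrow \quad
\frac{\mathrm{d}I}{\mathrm{d}x} = -\frac{I}{L},
\qquad
L \equiv \frac{1}{n_v \langle \mathrm{d}\sigma/\mathrm{d}z \rangle} = \frac{\ell^{2}}{\langle \mathrm{d}\sigma/\mathrm{d}z \rangle}.

Each vortex removes a beam area ⟨dσ/dz⟩ per unit length along z, and a slab of thickness dx presents n_v dx vortices per unit transverse area; the exponential attenuation derived below follows immediately.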
Let us denote the intensity of the waves at distance x from the surface of the lattice as I(x). In crossing a slab of thickness dx, a beam loses the fraction n_v ⟨dσ/dz⟩ dx of its intensity to scattering. Therefore, the x dependence of I(x) is characterized by the differential equation dI/dx = −I/L. This equation is immediately solved as I(x) = I₀ e^{−x/L}, where I₀ is the initial intensity. Hence, the waves are attenuated with the characteristic length L.

We can obtain a rough estimate of the attenuation length. The total number of vortices can be estimated, as in Ref. [5], in terms of the rotation period P_rot, the baryon chemical potential µ, and the radius R of the CFL matter inside dense stars, with these quantities normalized by their typical values. The inter-vortex spacing ℓ then follows from dividing the cross-sectional area of the core by the number of vortices. The resulting characteristic decay length of the electromagnetic waves falls inversely with the wavelength λ, where we have assumed that the variable φ is randomly distributed in the CP² space. This assumption is natural as there is no particularly favored direction in the CP² space for the case with three massless flavors [18,19]. We have also taken η = 1, µ = 900 MeV and ∆ = 100 MeV, from which the values of C and K₃ are determined accordingly [12]. If we adopt the value of R ∼ 1 km for the radius of the CFL core, the condition that the intensity is significantly decreased within the core is written as L ≤ 1 km. This condition can be rewritten as a lower bound on the wavelength of the photon, λ ≥ λ_c. Therefore, a lattice of vortices serves as a wavelength-dependent filter of photons. It filters out the waves with electric fields parallel to the vortices if the wavelength λ is larger than λ_c. The waves that pass through the lattice are the linearly polarized ones with the direction of their electric fields perpendicular to the vortices, as schematically shown in Fig. 1.

One may wonder why a vortex lattice with mean vortex distance ℓ can serve as a polarizer for photons with wavelengths many orders of magnitude smaller than ℓ. It is true that the probability that a photon is scattered during its propagation over a small distance (∼ ℓ, for example) is small. However, while the photon travels through the lattice, the scattering probability is accumulated and the probability that a photon remains unscattered decreases exponentially. Namely, the small scattering probability is compensated by the large number of vortices through which a photon passes. This is why the vortex mean distance and the wavelength of the attenuated photons can be so different.

[19] The presence of a finite strange quark mass does not change the qualitative feature of the polarizing phenomenon. The strange quark mass gives rise to a potential in the effective model, as discussed in Ref. [12]. When m_s is larger than the typical kinetic energy of the CP² modes, which is given by the temperature T ≤ T_c ∼ 10¹ MeV, and is small enough so that the description by the Ginzburg-Landau theory based on the chiral symmetry is still valid, the orientation of vortices falls into φ₀ᵀ = (0, 1, 0). This assumption is valid for the realistic value of m_s ∼ 10² MeV. The orientation dependence of the cross section is encapsulated in the function f(φ) introduced above. Since f(φ₀) = 1/3 ≠ 0, photons still interact with the vortex in the presence of a finite strange quark mass. Assuming that all the vortices take the orientation φ₀, we can redo the numerical estimates as follows. The decay length of the photon intensity is recalculated to be L ∼ (1.2 × 10⁻¹¹ m²)/λ,
and the condition that the intensity of photons is significantly decreased within a CFL core of order 1 km is given by λ ≥ 1.2 × 10⁻¹⁴ m, which replaces the corresponding estimates for the massless-flavor case discussed above.
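As a quick numerical cross-check of the finite-m_s figures above, the following minimal Python sketch assumes the quoted scaling L(λ) = (1.2 × 10⁻¹¹ m²)/λ and a CFL core radius of 1 km; the sample wavelengths are illustrative choices, not values taken from the analysis.

import numpy as np

COEFF = 1.2e-11   # m^2; finite-m_s decay-length coefficient quoted above
R_CORE = 1.0e3    # m; assumed CFL core radius of order 1 km

def attenuation_length(wavelength_m):
    # Characteristic decay length L of the parallel-polarized intensity.
    return COEFF / wavelength_m

def transmitted_fraction(wavelength_m, path_m=R_CORE):
    # I(x)/I0 = exp(-x/L) for polarization parallel to the vortices.
    return np.exp(-path_m / attenuation_length(wavelength_m))

# Critical wavelength from L(lambda_c) = R_CORE:
lambda_c = COEFF / R_CORE
print(f"lambda_c = {lambda_c:.1e} m")   # 1.2e-14 m, as quoted above

for lam in (1e-15, lambda_c, 1e-13):
    print(f"lambda = {lam:.1e} m -> transmitted fraction {transmitted_fraction(lam):.3e}")

Photons with λ well above λ_c emerge almost fully polarized perpendicular to the vortices, while those with shorter wavelengths pass essentially unattenuated.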
Evolution of Mechanical Properties of Lava Dome Rocks across the 1995-2010 Eruption of Soufrière Hills Volcano, Montserrat

Lava dome collapses pose a hazard to surrounding populations, but equally represent important processes for deciphering the eruptive history of a volcano. Models examining lava dome instability rely on accurate physical and mechanical properties of volcanic rocks. Here we focus on determining the physical and mechanical properties of a suite of temporally-constrained rocks from different phases of the 1995-2010 eruption at Soufrière Hills volcano in Montserrat. We determine the uniaxial compressive strength, tensile strength, density, porosity, permeability, and Young's modulus using laboratory measurements, complemented by Schmidt hammer testing in the field. By viewing a snapshot of each phase, we find the highest tensile and compressive strengths in the samples attributed to Phase 4, corresponding to a lower permeability and an increasing proportion of isolated porosity. Samples from Phase 5 show lower compressive and tensile strengths, corresponding to the highest permeability and porosity of the tested materials. Overall, this demonstrates a reliance of mechanical properties primarily on porosity; however, a shift toward increasing prevalence of pore connectivity in weaker samples, identified by microtextural analysis, demonstrates that here pore connectivity also contributes to the strength and Young's modulus, as well as controlling permeability. The range in UCS strengths is supported by Schmidt hammer field testing. We determine a narrow range in mineralogy across the sample suite, but identify a correlation between increasing crystallinity and increasing strength. We correlate these changes to residence time in the growing lava dome during the eruption, where stronger rocks have undergone more crystallization. In addition, subsequent recrystallization of silica polymorphs from the glass phase may further strengthen the material. We suggest that the variation in physical and mechanical rock properties shown within the Soufrière Hills eruptive products be included in future structural stability models of the remaining over-steepened dome on Montserrat, and that consideration of rock heterogeneity, and its temporal variation if possible, be made in other, similar systems.

INTRODUCTION

Collapse of volcanic flanks and lava domes has been shown to influence subsequent eruptive behavior (e.g., Voight and Elsworth, 2000) and represents a major hazard through generation of pyroclastic flows and debris avalanches. Structural stability modeling is therefore vital in understanding the hazard associated with, and the consequences of, volcanic collapse events. This has been explored through various modeling efforts, including: analog modeling (Vidal and Merle, 2000; Cecchi et al., 2004; Tibaldi et al., 2006; Andrade and van Wyk de Vries, 2010; Nolesini et al., 2013); Limit Equilibrium Methods (LEM; Apuani et al., 2005; Simmons et al., 2005; Borselli et al., 2011; Schaefer et al., 2013; Dondin et al., 2017); Finite Element Modeling (FEM; Voight, 2000; Schaefer et al., 2013); Finite Difference Methods (FDM; Apuani et al., 2005; Le Friant et al., 2006); and Discrete Element Modeling (DEM; Morgan and McGovern, 2005a,b; Husain et al., 2014, 2018; Harnett et al., 2018). Although modeling studies expand our knowledge of mechanisms of volcanic structural instability, they are often limited by the availability of mechanical data for edifice rock properties.
In particular, a recurrent challenge in modeling volcanic failure is representing the spatial and temporal heterogeneity of material (e.g., Schaefer et al., 2015; Heap et al., 2016b). The logistical difficulties in accessing deposits and outcrops during or after an eruption also prevent direct observation and quantification of erupted material. Numerical models are often forced to adopt 'typical' values for the physical and geomechanical properties of the material from the volcano in question, thus increasing the uncertainties associated with any model. As such, it is important to investigate the spatiotemporal evolution of material forming a volcano. Volcanic products are typically very heterogeneous, with varied eruptive conditions leading to large ranges in pore architecture (i.e., connected vs. isolated vesicles vs. fractures) and permeability (Mueller et al., 2005; Heap et al., 2014a, 2018c; Farquharson et al., 2015; Colombier et al., 2017). Experimental investigations into volcanic rock properties have increased in recent years, including compressive and tensile strength, elastic properties, and resultant physical changes induced during deformation (e.g., Lavallée et al., 2007, 2008, 2013; Schaefer et al., 2015; Heap et al., 2016a, 2018a; Lamur et al., 2017; Marmoni et al., 2017; Coats et al., 2018), as well as research into the relationship between activity at dome-building volcanoes and their respective rock properties (e.g., Smith et al., 2009, 2011; Kendrick et al., 2013, 2016; Heap et al., 2015, 2016a; Kushnir et al., 2016; Lavallée et al., 2019). This increase in research has started to show the importance of understanding how the mechanical properties of rock influence the eruptive style at a volcano, for example at Mt. St. Helens, where porosity, and as such strength, was shown to be a determining factor in whether a lava dome or spine was extruded (Heap et al., 2016a).

Geomechanical properties not only influence eruptive style, but also structural stability. For example, although the interior of a lava dome is subjected to moderate confining pressures, outer talus slopes are often unconfined. This complex stress field influences the development of tensile and shear fractures. Although the mechanical behavior of materials in compressive stress fields has received most of the attention by the rock physics community in recent decades (e.g., Paterson and Wong, 2005), there is more investigation to be done into the tensile strength of volcanic materials, whose structural stability is commonly challenged by tensile stresses due to a lack of confinement and high pore pressures (Kilburn, 2018). The tensile strength of rocks is found to be ∼8% of the compressive strength (Jaeger et al., 2009; Perras and Diederichs, 2014), and can be as low as ∼4% (Zorn et al., 2018). As such, rock failure (even under compressive shear stress) generally follows the nucleation, propagation, and coalescence of tensile fractures (with the exception of supershear rupture; e.g., Das, 2015). We therefore investigate tensile strength and its ratio to compressive strength, and its relationship to other physical rock properties.

In addition to determining the mechanical properties and the variation of the physical properties of volcanic rock, it is important to consider how variation in petrology and geochemistry may also influence dome stability. For example, at Mt.
Unzen, a temporal change in chemistry due to phenocryst abundance was shown to correlate with temporal changes in effusion rate (Nakada and Motomura, 1999), and such evolution in eruptive style will also alter dome stability. Similarly, the occurrence of secondary mineralization may modify the porous structure and coherence of rocks, affecting the structural stability (Horwell et al., 2013; Coats et al., 2018), especially when water is present in the pore space (Heap et al., 2018b).

Here, we focus on quantifying the physical, mineralogical, and mechanical properties of a temporally-constrained sample set, and the variability of these properties, required as inputs for numerical models assessing dome collapse hazard. To do this, we focus specifically on the Soufrière Hills volcano (SHV), and we aim to demonstrate the importance of, and encourage incorporation of, rock heterogeneity in future dome stability modeling efforts. In addition to showcasing the range in material properties, we also speculate how these may be temporally-linked to specific phases of the eruption.

GEOLOGICAL SETTING

Soufrière Hills volcano is an andesitic volcanic complex on the Caribbean island of Montserrat, located in the northern Lesser Antilles island arc (Figure 1). The current eruption started in July 1995 with a series of phreatic explosions, which led to the emplacement and growth of a lava dome (Young et al., 1998). This was followed by a series of dome growth and collapse cycles, involving large scale pyroclastic density current (PDC) generation and explosive activity. The eruption of SHV included five phases of dome growth (Wadge et al., 2014; Stinton et al., 2017): Phase 1 (November 15, 1995-March 10, 1998), Phase 2 (November 1999-July 2003), Phase 3 (August 2005-April 2007), Phase 4 (2008-January 3, 2009; Robertson et al., 2009), and Phase 5 (October 8, 2009-February 11, 2010). These phases were separated by pauses characterized by no magma extrusion, and Phases 3, 4, and 5 were preceded by transitional periods with increases in seismicity and/or ash venting. Several lava dome collapses occurred throughout the eruptive period, with the largest of these (>10⁷ m³) shown in Figure 2. The end of the last phase of lava extrusion was marked by a major dome collapse on February 11, 2010 (Stinton et al., 2014b). The scale of collapses throughout the eruption ranged from frequent (up to 140 per day) small scale rockfalls, to larger whole dome collapses such as the total dome collapse on July 12-13, 2003 (Herd et al., 2005).

Petrological studies of products throughout the eruption have shown that SHV has produced lavas of relatively similar composition: hornblende-bearing andesites (Christopher et al., 2014; Wadge et al., 2014), with an increasing proportion of mafic inclusions in later phases. Long-term petrology across the eruption was explored by Christopher et al. (2014) and, although they found systematic changes in Fe-content across time, they concluded that there was no progressive change of bulk composition, with SiO₂ content consistently between 56 and 62% throughout the eruption. However, previous studies have documented that geomechanical rock properties of chemically indistinguishable lavas can vary broadly as a result of distinct pore structures (Kendrick et al., 2013; Schaefer et al., 2015; Heap et al., 2016a), local heterogeneities (Farquharson et al., 2016), anisotropy (Bubeck et al., 2017), and post-emplacement alteration (Pola et al., 2014; Siratovich et al., 2014; Coats et al., 2018).
We therefore aim to explore how the petrographic textures of the Soufrière Hills products, and the temporal variation in these textures, affect both rock strength and volcanic behavior, even where there is a narrow range in bulk rock compositions. The quantity and quality of observations recorded throughout the eruption makes SHV an ideal test site for exploring temporal variability in erupted products, as records of collapse events enable linking of specific pyroclastic deposits to specific eruptive phases.

FIGURE 2 | Eruption history at Soufrière Hills, Montserrat. Extrusion rate data shown in black, calculated for Phases 1-4 using erupted volume data from Wadge et al. (2014) and extrusion data for Phase 5 from Stinton et al. (2014a). Red shows eruptive phases, whilst green shows pauses in activity. Annotations show the state of the dome at the end of each phase (standing dome with relative size indicated, wholesale collapse, partial collapse), and stars mark major (>10⁷ m³) dome collapses across the eruption.

Sampling Strategy

For this experimental study, seven block samples were collected from different PDC deposits around SHV. Deposits were selected based on the certainty with which the blocks could be tied not only to a particular collapse, but also to ensure the material was erupted during a given eruptive phase. Hence, deposits that were selected occurred in the middle or toward the end of an eruptive phase, to avoid sampling rocks that were extruded in previous phases of activity. Samples can be confidently tied to their respective phase due to the directionality of collapse in each case (Table 1). Within each selected deposit, safely accessible blocks were examined and the Schmidt hammer method (detailed below) was employed to gain an overview of variability in material properties in the field. One block was collected from Phase 1, and two blocks were collected for each of Phases 3, 4, and 5 (Figure 1). No samples were available for Phase 2 due to inaccessibility, and because the majority of the deposits entered the ocean (Trofimovs et al., 2008). Since the deposition of all samples occurred via PDCs, they are likely to represent the strongest material from each of the phases, as weaker material could have been preferentially broken down by the collapse and transport processes. Whilst we cannot be certain that the material is the most representative of each phase, we present here one of the first temporally-resolved examinations of rock property evolution during an eruption.

Sample Preparation

From each of the seven blocks collected, cores were prepared with a diameter of 26 mm and were cut and ground parallel to a nominal length of 52 mm for use in porosity and permeability measurements, and for testing in uniaxial and cyclic loading experiments (sample properties provided in Supplementary Table S1). Samples were then oven-dried for at least 12 h at 70 °C and thermally equilibrated to ambient conditions before any measurements were performed. All cores were taken at the same orientation within a given block. One core was prepared from each block with 37 mm diameter and a nominal length of 80 mm. The density of these samples (provided in Supplementary Table S1) was calculated using their mass and sample dimensions, and these samples were used for testing in cyclic loading experiments to determine Young's modulus. From each of the seven blocks, 37 mm diameter by ∼18 mm thick disks were also prepared for use in Brazilian tensile strength tests (Supplementary Table S2).
These disks have an approximate aspect ratio of 1:2, as recommended by ISRM and ASTM. Sub-samples of each block were taken from offcuts of these cores and set in epoxy, in the same orientation as the cores were prepared. Thick sections were created for mineralogical and textural characterization by polishing and carbon coating the epoxy-mounted samples.

QEMSCAN Analysis

Mineralogical and textural analyses were performed on the prepared thick sections. The variation in phase abundances across the sample range was quantified using QEMSCAN (Quantitative Evaluation of Minerals by Scanning electron microscopy) at the University of Liverpool. The QEMSCAN is an automated SEM-EDS (scanning electron microscopy/energy dispersive X-ray spectroscopy) system manufactured by FEI Company. The QEMSCAN uses a 15 kV electron beam to produce X-ray spectra which provide a semi-quantitative chemical map of the different phases, here at a resolution of 10 µm over an average area of 10.5 mm by 10.5 mm. The identified chemical compositions are compared to known compositions stored in a reference library. Additional mineral and glass chemistry definitions are manually added to the supplied database to ensure all chemical compounds are recognized. Crystallographic features are not discriminated by QEMSCAN, and so polymorphs of the same composition cannot be differentiated (for example, quartz and cristobalite would both be classified as "silica polymorphs" by QEMSCAN processing). We then used the iDiscover software to create color images showing the distribution of mineral phases, and used this data to determine the normalized mineral abundances of the sample as area-percentages.

Schmidt Hammer

The Schmidt hammer is a portable, hand-held instrument originally designed for non-destructive index testing of concrete. It records the rebound height of a spring-loaded mass to indicate material strength (Torabi et al., 2011); this 'rebound value' can be correlated to various mechanical properties such as uniaxial (unconfined) compressive strength and Young's modulus (e.g., Deere and Miller, 1966; Yasar and Erdogan, 2004). Schmidt hammer testing has previously been used on volcanic rocks (e.g., Dinçer et al., 2004; Del Potro and Hürlimann, 2009) and provides a method of collecting in situ data where outcrop accessibility is problematic. In this study, we used an L-type Schmidt hammer to carry out field testing in accordance with the International Society for Rock Mechanics (ISRM) guidelines (Ulusay and Hudson, 1979). The Schmidt hammer rebound values (R_L) were corrected for the angle of testing where necessary, following the normalization procedure set out by Basu and Aydin (2004); this often results in non-integer rebound values. The Schmidt hammer was calibrated using a steel anvil, which gave an R_L value of 72. Hard rocks such as granites generally have high R_L values of >50, whereas softer rocks such as chalk are likely to have an R_L value of <30 (Katz et al., 2000; Ericson, 2004; Goudie, 2013). We present results of Schmidt hammer tests on 24 blocks, measured during a field campaign in January 2016, from deposits where the eruptive phase is known (4 from Phase 1, 3 from Phase 3, 9 from Phase 4, and 8 from Phase 5). These tests were carried out at the same locations as the sample sites (Figure 1), but on blocks exceeding 30 cm in all dimensions, which were therefore not collected for laboratory experimentation. We therefore consider the Schmidt hammer data a verification of the collected blocks.
We also present results from 28 Schmidt hammer tests on samples located in the Belham River Valley (BRV); these cannot be attributed to a specific phase, but from collapse direction information we can determine that these boulders were emplaced during Phases 3-5. This gives an additional constraint on the range of expected values.

Physical Characterization
Permeability and porosity were determined for all 26 mm diameter cores. The density of each core (ρ_rock) was determined by measuring its mass and volume, and calculating the ratio between the two (Supplementary Table S1 and Supplementary Figure S1). Connected porosity was determined for each core using a helium pycnometer (Micromeritics AccuPyc II 1340), providing sample void volumes with an accuracy of 0.1%. Total porosity was also determined for each of the seven blocks by creating a powder of the rock sample and measuring its density (ρ_powder). Total porosity exceeds connected porosity as it includes isolated pores that could not be accessed by helium during pycnometry. Total porosity (φ_T) is calculated using:

φ_T = 1 − (ρ_rock / ρ_powder)    (1)

Permeability was measured using a benchtop GasPerm permeameter developed by Vinci Technologies. We measured the permeabilities of 49 samples using nitrogen as the permeating fluid and by imposing a flow rate that created, depending on the permeability of the sample, a minimum pressure differential (ΔP) between the inflow and outflow of 0.5 psi (0.0035 MPa). Measurements were made on each sample at three confining pressures. The confining pressure was held constant at each of 100, 200, and 300 psi (0.7, 1.4, and 2.1 MPa) for the duration of the measurement. In cases where Darcian conditions were not achieved (i.e., the flow rate resulted in too high a ΔP and turbulent flow/gas slippage in the porous medium), we applied the Klinkenberg and Forchheimer corrections to retrieve the equivalent Darcy permeability.

Uniaxial Compressive Strength Testing
Uniaxial compressive strength (UCS) testing was carried out at ambient (room) temperature on one sample from each block (7 in total) using 26 mm diameter samples (for which permeability and porosity had already been determined). The cores were loaded axially at a constant strain rate of 10⁻⁵ s⁻¹ using a 5969 Instron uniaxial benchtop press with a 50 kN load cell at the Experimental Volcanology and Geothermal Research Laboratory at the University of Liverpool. The measured axial displacement was corrected to subtract the compliance of the apparatus (i.e., pistons and frame) during loading. While one sample from each block was loaded to failure to measure the compressive strength, we established the repeatability of the mechanical data of the materials by determining Young's modulus in 22 stress-cycling experiments (see section "Cyclic Experiments"), as a higher Young's modulus relates to a higher peak strength (e.g., Schaefer et al., 2015).

Brazilian Tensile Strength Testing
Indirect tensile strength was measured using the Brazil testing method (Ulusay and Hudson, 1979), in which a compressive load is applied diametrically to the curved edge of a cylindrical, disk-shaped rock sample. This is a commonly used method to induce tensile failure, owing to the logistical difficulty of measuring direct tensile strength (Perras and Diederichs, 2014). Tensile strength, σ_t, is calculated using the following formula:

σ_t = 2P / (πDL)    (2)

where P is the applied load (N), D is the sample diameter (m), and L is the sample thickness (m).
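To make the use of Equation (2) concrete, the following Python sketch evaluates σ_t for a single disk; the 4.2 kN peak load is a hypothetical value, chosen so that the result falls within the tensile strength range reported below.

import math

def brazilian_tensile_strength(peak_load_N, diameter_m, thickness_m):
    # Equation (2): sigma_t = 2P / (pi * D * L), result in Pa
    return 2.0 * peak_load_N / (math.pi * diameter_m * thickness_m)

# Hypothetical peak load of 4.2 kN on a 37 mm diameter x 18 mm thick disk
sigma_t = brazilian_tensile_strength(4200.0, 0.037, 0.018)
print(f"{sigma_t / 1e6:.2f} MPa")  # -> ~4.02 MPa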
In total, 66 samples were prepared at 37 mm diameter (with an aspect ratio of 1:2 to meet ISRM standards) and were loaded at a constant deformation rate of 0.0037 mm/s (equivalent to a diametric strain rate of 10⁻⁴ s⁻¹), again using the Instron uniaxial press in the Experimental Volcanology and Geothermal Research Laboratory at the University of Liverpool.

FIGURE 3 | (A-D) QEMSCAN mineral maps for one sample from each phase (other samples provided in Supplementary Figure S2); the mineralogical key is shown below the images, with white used to portray the pore space. See Table 2 for the full mineral phase analysis. (E-H) Pore distribution in one sample from each phase (other samples provided in Supplementary Figure S2) using processed QEMSCAN images, with the solid fraction shown in gray and all porosity in black. Samples from Phases 3 and 4 are denser, with evenly distributed pore space, whereas samples from Phases 1 and 5 have higher pore content and show pore localization and high connectivity. Backscattered electron images for the same samples are shown in Supplementary Figure S3.

Cyclic Experiments
The UCS tests were used to inform the cyclic loading tests by defining a threshold of 50% of peak stress for each sample type. Cyclic loading experiments were then performed on 22 cores of 26 mm diameter and 7 cores of 37 mm diameter (both with 2:1 aspect ratios); the samples were axially loaded to this threshold at a constant strain rate of 10⁻⁵ s⁻¹, and then unloaded at the same rate. This was performed to examine the repeatability of the stress-strain response to loading, and to calculate elastic moduli. By loading only to 50% of peak stress, we considered the rock to behave purely elastically (Walsh, 1965; Nihei et al., 2000; David et al., 2012), and therefore assumed that no lasting damage was done to the sample and that it could rebound and recover deformation.

Young's Modulus Determination
Young's modulus (E) is a key parameter in volcanic modeling (Hale et al., 2009a,b; Husain et al., 2014; Harnett et al., 2018). Young's modulus is traditionally an elastic parameter, defined in GPa, and although these rocks do not behave in a purely linear elastic manner throughout compression, the stress-strain response is linear following crack closure and prior to damage accumulation (e.g., Heap and Faulkner, 2008). Here, to fall confidently within this regime, we consider the linear portion of the curve to lie between 40 and 50% of peak rock strength. Therefore, for all 29 cores with 26 mm diameter and 7 cores with 37 mm diameter, we calculate Young's modulus within this range. Following ISRM guidelines (Ulusay and Hudson, 1979), we calculate Young's modulus using:

E = (σ_50 − σ_40) / (ε_50 − ε_40)    (3)

where σ is stress and ε is strain, at a given percentage of peak rock strength (denoted by the subscript).

Microstructural Analysis
QEMSCAN analysis illustrates the mineral assemblages and their relative abundance in each of the samples. An exemplar rock from each of Phases 1, 3, 4, and 5 is shown in Figure 3, with the remaining rocks from this study shown in Supplementary Figure S2, and backscattered electron images shown in Supplementary Figure S3. In addition to color images showing the mineral distribution and texture in each sample, a grayscale image shows the pore structure highlighted in black. We explore mineral abundance within the sample suite, and show the area percentage, calculated from QEMSCAN imagery, of interstitial glass combined with silica polymorphs, and of plagioclase (separated into calcium-rich and sodium-rich; Figures 4A,B). Percentages for all mineral components as a proportion of the solid phase in all samples are shown in Table 2.
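The area-percentage bookkeeping used here (solid phases normalized to the solid fraction, with porosity expressed as a share of the whole imaged area) can be sketched as follows; the tiny label map and the integer phase codes are synthetic stand-ins for real QEMSCAN output.

import numpy as np

def phase_area_percentages(label_map, pore_label=0):
    # Solid phases as % of the solid fraction; porosity as % of total area
    labels, counts = np.unique(label_map, return_counts=True)
    total = label_map.size
    pore = int(counts[labels == pore_label].sum())
    solid = total - pore
    result = {"porosity_%": 100.0 * pore / total}
    for lab, count in zip(labels, counts):
        if lab != pore_label:
            result[f"phase_{lab}_%"] = 100.0 * count / solid
    return result

# Tiny synthetic label map: 0 = pore, 1 and 2 = hypothetical solid phases
demo = np.array([[1, 1, 0],
                 [2, 1, 0],
                 [1, 2, 2]])
print(phase_area_percentages(demo))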
Plagioclase is dominant across all samples, totaling between 42.5 and 56.1% of the solid fraction, with zoned crystals evident in all samples (Figure 3). Slight increases in the total plagioclase content of the solid fraction in Samples H and F (Phases 3 and 4) correspond to an overall increase in the crystallinity of these samples and, as such, to slight depletions in the total glass and silica polymorph phases (Figure 4). There is a higher proportion of interstitial glass relative to silica polymorphs in Samples M and J (Phases 1 and 5) than in Samples H and F. The "glass" identified by QEMSCAN consists of fine-grained mesostasis, which may comprise fine grains of various compositions that are smaller than the X-ray interaction volume of the QEMSCAN instrumentation; it thus may not necessarily represent the mechanical and rheological properties of quenched interstitial melt. Amphiboles occur mostly in the form of pseudomorphs of breakdown products and clusters of pyroxene. Clinopyroxene is more dominant than orthopyroxene, particularly in Samples H and F. Oxides are rare in all samples, and generally occur as microphenocrysts. In addition to having lower crystalline fractions (i.e., more glass and silica polymorphs), Samples M and J also have larger, more heterogeneously distributed pore spaces. Porosity is greatest in Sample J (Table 3), and comprises vesicles between crystals, whereas in samples from earlier phases (e.g., Samples H, F), much smaller pore spaces are found within the groundmass. Overall, QEMSCAN analysis shows low variability in the componentry and mineralogical assemblage throughout the samples tested.

Schmidt Hammer
We present the results of 52 Schmidt hammer tests (Figure 5), both on blocks from known eruptive phases and on a random selection of blocks in the BRV. The data show that blocks from Phase 5 appear to be the weakest (average R_L = 26.4). Samples from Phases 1, 3, and 4 exhibit similar Schmidt hammer results, with average rebound values of 34.5, 39.7, and 37.4, respectively (Figure 5A; raw values given in Supplementary Table S3). The Schmidt hammer rebound values from all of the samples from known eruptive phases span a range of 32.6, from 15.2 to 47.8. The ranges within each phase are 20.6 (Phase 1), 7.3 (Phase 3), 17.0 (Phase 4), and 26.0 (Phase 5); the rebound values from the random boulders in the BRV range from 6.0 to 48.1 (a spread of 42.1), showing a similar distribution to that of the temporally constrained blocks. If there were no systematic variations in rock strength across time, the same variation would be found within the samples from each phase; however, the spread of the randomly sampled blocks (42.1) far exceeds the spread within blocks attributed to any particular phase (maximum range of R_L = 26.0, for Phase 5). The 25th-75th percentiles of the entire dataset span a relatively narrow range of 21.8 to 42.6, highlighting that the extremes of these values represent rarer outliers (Figure 5B).

Physical Properties
Connected porosities extend from approximately 20 to 40% across all samples (Figure 6A and Table 3), with ranges for Phases 1, 3, 4, and 5 of 8.2, 3.3, 4.9, and 11.0%, respectively (all values of both connected and total porosity are provided in Supplementary Table S1). Sample M (Phase 1) has an average connected porosity of 22.8% and an average total porosity of 23.2%.
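The relationship between Equation (1) and the connected/isolated porosity split can be checked numerically with a sketch such as the one below; the bulk and powder densities used are hypothetical inputs, chosen to give values close to the Phase 1 averages just quoted.

def total_porosity(rho_rock, rho_powder):
    # Equation (1): phi_T = 1 - rho_rock / rho_powder
    return 1.0 - rho_rock / rho_powder

def isolated_porosity(phi_total, phi_connected):
    # Isolated pore fraction: the part of phi_T not accessible to helium
    return phi_total - phi_connected

# Hypothetical densities: bulk 2.13 g/cm^3, powder 2.77 g/cm^3
phi_t = total_porosity(2.13, 2.77)
print(f"total porosity: {100 * phi_t:.1f}%")                 # -> ~23.1%
print(f"isolated porosity: {100 * isolated_porosity(phi_t, 0.228):.1f}%")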
FIGURE 4 | Abundances of (A) glass and silica polymorphs (the remaining solid fraction is composed of the primary minerals; crystallinity, Table 2) and (B) plagioclase, both sodium-rich and calcium-rich; shown as percentage area, calculated from the 10 µm resolution QEMSCAN images (shown in Figure 3).

Samples B and … Similarly, the density of the 26 mm samples varies from 1.61 to 2.22 g/cm³, with average densities for Phases 1, 3, 4, and 5 of 2.13, 2.14, 2.14, and 1.76 g/cm³, respectively. The density values for Samples M, B, H, F, and G are very similar (as observed for porosity), with a clear decrease in density in Samples J and K. The relationship between density and porosity is broadly linear (Supplementary Figure S1), although deviation from linearity results primarily from the varied abundances of isolated pores. Permeability across all samples ranges from 10⁻¹⁵ to 10⁻¹¹ m² (Figure 6B and Table 3), and relates non-linearly to the connected porosity (Figure 6C; all values of permeability are provided in Supplementary Table S1).

TABLE 2 | Quantitative analysis of the mineral phases present in each sample, calculated as a percentage area of the solid fraction, and porosity as a percentage of the total area, using the 10 µm resolution QEMSCAN analysis of a 10.1 mm × 10.1 mm area.

Tight clustering is to be expected within one rock sample (e.g., Schaefer et al., 2015), but permeability also remains very consistent between two different blocks attributed to the same eruptive phase (Figures 6B,C), even with increased confining pressure (Supplementary Figure S4). The difference between the permeabilities of samples from each phase is therefore determined to be greater than the variation expected from natural heterogeneity within one block. In the tested samples there is a systematic decrease in permeability from Phases 1 to 4 (Table 3), and Phase 5 samples show the maximum permeability across the erupted materials tested, with an average permeability for the samples from Phase 5 of 9.2 × 10⁻¹² m² (although some were too permeable to obtain a value).

TABLE 3 | Physical properties calculated for each sample, using the following methods: (a) average density, measured using a helium pycnometer, and the standard deviation for each block; (b) connected porosity, measured using a helium pycnometer, and the standard deviation for each block; (c) total porosity, calculated by measuring the density of a powder and using Equation (1).

The decrease in permeability across Phases 1-4 occurs despite a relatively constant connected porosity (Figure 6C), although the proportion of isolated pores increases across the same range (Supplementary Table S1).

Uniaxial Compressive Strength
To maximize data gathering from a limited sample set, we performed UCS testing on one prepared 26 mm sample from each block (Figure 7A), resulting in 7 UCS values. Where there are two individual blocks from one phase, we find very similar results between the two blocks (Figure 7B), and we confirm the phase repeatability using cyclic loading tests to non-destructively measure Young's modulus for each sample (see section "Cyclic Loading and Young's Modulus"). The results from the UCS tests generally show the expected behavior, where the stress-strain curve can be broken into an initial stage of compaction of pre-existing pores and microfractures within the rock, an elastic loading phase, a brief period of strain hardening, and then fracture marked by a sudden stress drop (Figure 7B; as described by Scholz, 1968; Heap and Faulkner, 2008).
The UCS curves for Samples J and K show more creep-like behavior due to their high porosity (>30%). These rocks did not exhibit a sharp stress drop, but rather ongoing compaction of the pore spaces within the sample. The maximum load was recorded as the uniaxial compressive strength, and the tests were stopped when the stress showed a marked decrease (more than a 10% stress drop) over time, suggesting that the rock had ruptured and was unable to bear any more load. The results are summarized in Figure 8, along with all the mechanical results for each sample. Sample M (Phase 1) has a UCS of 25.1 MPa. For the remaining phases, two tests were carried out (one from each block, Figure 8A). The average UCS values for Phases 3, 4, and 5 are 27.8, 49.8, and 6.6 MPa, respectively (raw sample data are provided in Supplementary Table S1, with averages and standard deviations provided in Supplementary Table S4). The lowest UCS results (<7 MPa) are found in Samples J and K (Phase 5) and correlate with the highest porosities among the samples tested (Figure 8B). These samples are more friable and have more evident pore space in hand specimen (Figure 7A), and the pore distribution maps from QEMSCAN analysis further highlight the connectivity of the porous network (Figure 3). Lower sample porosities correspond to higher uniaxial compressive strengths; however, for porosities between 20 and 25%, UCS values vary between 25 and 50 MPa. Although the porosity of these samples is similar, there is a higher proportion of isolated pores and a lower permeability in the stronger Samples F and G (Phase 4). The porosity-strength relationship identified in this study fits well with other datasets from dome-building volcanoes (Figure 8B).

FIGURE 5 | (A) Schmidt hammer rebound value (R_L) results from field testing at sampling locations for Phases 1, 3, 4, and 5. Belham River Valley (BRV) results show values obtained on a random selection of blocks from Phases 3, 4, and 5. Raw data are shown by circles, with the mean R_L for each phase shown by a square. (B) Box plot showing the median (red line), mean (black squares), 25th and 75th percentiles, and range of Schmidt hammer rebound values for each phase. Results from the BRV span the overall range of values seen in the other phases and highlight that Phase 5 material is the weakest of the erupted products tested, although the maximum R_L across all phases is similar.

FIGURE 6 | Physical properties of 26 mm cores from eruptive Phases 1, 3, 4, and 5: (A) connected porosity evolution and (B) gas permeability evolution throughout the eruption; (C) permeability as a function of porosity for all samples. Results show that porosity is consistent between Phases 1, 3, and 4 and increases in Phase 5, whereas permeability systematically decreases from Phase 1 through to Phase 4, and then increases in Phase 5. Phases 1 and 5 follow a near-continuous trend on the porosity-permeability plot, while Phases 3 and 4 plot distinctly, suggesting contrasting pore morphology and connectivity.

Cyclic Loading and Young's Modulus
Similarly to the UCS results, Young's modulus increases with decreasing porosity across the sample suite. Young's modulus increases from Phase 1 to Phase 3 to Phase 4, with a drop to the lowest values in the Phase 5 samples (Figure 8C). A higher Young's modulus correlates with lower porosity values and, as such, higher Young's modulus values typically correspond to higher UCS values.
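A sketch of the Young's modulus calculation of Equation (3) follows; it takes the least-squares slope of the stress-strain curve between 40 and 50% of peak stress, which reduces to the two-point form of Equation (3) on a linear segment, and is demonstrated here on synthetic data rather than on the measured curves.

import numpy as np

def youngs_modulus(stress_MPa, strain, lower=0.4, upper=0.5):
    # Slope of the stress-strain curve between 40 and 50% of peak stress
    # (Equation 3); returns E in GPa given stress in MPa
    stress = np.asarray(stress_MPa, dtype=float)
    strain = np.asarray(strain, dtype=float)
    peak = stress.max()
    window = (stress >= lower * peak) & (stress <= upper * peak)
    window &= np.arange(stress.size) <= int(stress.argmax())  # loading only
    slope_MPa = np.polyfit(strain[window], stress[window], 1)[0]
    return slope_MPa / 1e3

# Synthetic elastic ramp with a 10 GPa modulus and a 50 MPa peak
strain = np.linspace(0.0, 5e-3, 200)
stress = 10e3 * strain  # MPa
print(round(youngs_modulus(stress, strain), 2))  # -> ~10.0 GPa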
Cyclic testing showed good repeatability of the mechanical data (i.e., stress-strain curve morphology; Supplementary Figure S5) within rock types, and to an extent within phases, irrespective of sample size (26 or 37 mm diameter). Young's modulus determined from the UCS tests gives average values for Phases 1, 3, 4, and 5 of 7.2, 7.0, 11.1, and 3.2 GPa, respectively. We also determined Young's modulus using the cyclic tests, which indicated a range of Young's modulus within each sample suite of less than 3 GPa, and average values for Phases 1, 3, 4, and 5 of 4.6, 7.2, 10.9, and 2.5 GPa, respectively. There is good agreement between the Young's modulus values from the UCS and cyclic testing, as the same portion (40-50%) of the peak stress of the loading curve was used for the analysis (Figure 8C; raw data in Supplementary Table S1, and averages and standard deviations in Supplementary Table S4). Increasing Young's modulus values correspond most systematically to an increasing proportion of isolated porosity, and therefore to decreasing permeability (Figure 9B and Table 3).

Tensile Strength
We performed 66 Brazilian indirect tensile tests to constrain the tensile strength (UTS; Figures 8D, 9). The variability within each sample set is higher than for UCS (there are more tests), although each phase still has a considerably smaller range than the sample suite as a whole, and there is good agreement between the different blocks within the same phase (Supplementary Table S2).

FIGURE 7 | (A) Photos of one core from each block tested (M, B, H, F, G, J, K), with the corresponding phase marked; (B) UCS results from tests carried out at a constant strain rate of 10⁻⁵ s⁻¹ on one core from Phase 1 and two cores from each of Phases 3, 4, and 5. UCS curves are labeled with the block from which each rock was cored. Phase 5 samples show creep-like behavior (i.e., undergoing significant strain prior to failure) due to their high porosity, while the other samples display sharp failure curves.

UCS/UTS Ratio
We show that in our study both compressive and tensile rock strength are inversely proportional to density (Figure 9A), and we consider bulk rock density here to be a proxy for total porosity (Supplementary Figure S1). That said, for a given density the UCS/UTS ratio is highly variable (Figure 10A): the ratio for Phases 1, 3, 4, and 5 is 11.8, 10.9, 15.5, and 6.9, respectively. Instead, the UCS/UTS ratio systematically decreases with increasing permeability (Figure 10B). We also compare the average Schmidt hammer rebound values for each phase to the UCS/UTS ratio, and find that the rebound values increase with increasing UCS/UTS ratio (Figure 10C). This is likely due to the sensitivity of the Schmidt hammer to rock stiffness, as Young's modulus also correlates very well with the UCS/UTS ratio (Figure 10D).

Co-variance of Physical and Mechanical Properties
In this study, we have demonstrated a wide range in the physical and mechanical properties of dome rock from Soufrière Hills volcano (SHV). We show how these properties vary in relation to one another and, in addition, by gathering these data from temporally constrained samples, we are able to speculate on how this could reflect the changing eruptive behavior across this well-observed 15-year eruption. We verify the trends observed in our limited laboratory sample suite using Schmidt hammer rebound testing on a wider range of samples in the field, and find R_L values to be in broad agreement with the observed temporal trends in strength and Young's modulus.
The identified links in physical and mechanical rock properties are necessary for assessing volcano dynamics, and the temporal relationships could prove important if corroborated using a wider suite of rocks. The SHV dome rocks examined here range in porosity from 19.7 to 40.2%, with inversely proportional permeabilities spanning the range 10⁻¹⁵ to 10⁻¹¹ m². Our corresponding densities of 1.61-2.34 g/cm³ agree well with the range of densities measured on 85 blocks from block-and-ash flows in 1997, and with the porosity range of 15.1-45.5% observed for a smaller subset of these 1997 lava samples (Formenti and Druitt, 2003). Moreover, the spectrum of our samples exceeds the porosity and permeability range spanned by banded pumice samples collected from block-and-ash flow deposits at SHV (Farquharson and Wadsworth, 2018). The strength of the dome rocks measured at SHV varies by almost an order of magnitude, from 6.2 to 51.1 MPa in compression and from 0.5 to 4.1 MPa in tension, and shows a non-linear decrease with increasing porosity and permeability. We demonstrate a higher UCS/UTS ratio for stronger, stiffer material, highlighting the different effects of pore connectivity on compressive and tensile strength.

FIGURE 9 | (A) Uniaxial tensile strength (UTS; hollow symbols) and uniaxial compressive strength (UCS; filled symbols) as a function of rock density (a proxy for porosity; see Supplementary Table S1 and Supplementary Figure S1). (B) Young's modulus as a function of connected porosity, determined from UCS tests (black symbol outlines) and from cyclic tests (no symbol outline). (C) Permeability as a function of Young's modulus, for 26 mm samples (thin symbol outline) and 37 mm samples (thick symbol outline).

This is an important consideration when modeling structural dome instability, as using a constant UCS/UTS ratio in numerical models could result in overestimation of a dome's tensile strength, and therefore underestimation of the failure likelihood of the unconfined portion of lava domes. The current SHV dome at Montserrat is likely to have cooled to an extent where viscous flow no longer dominates eruptive behavior (Ball et al., 2015); as such, tests of rock properties at ambient temperatures are relevant to the modeling of the ongoing stability of the volcano. Moreover, a number of studies have demonstrated that the strength of volcanic rock at elevated temperature is either comparable to (Heap et al., 2014a, 2018a) or higher than (Schaefer et al., 2015; Coats et al., 2018) that at room temperature, suggesting that domes are at their weakest following cooling. For the same sample suite, Young's modulus values range from 1.4 to 12.3 GPa, with higher values in the less porous, denser samples (Figure 9B). A strong correlation is shown between Young's modulus and sample permeability (Figure 9C), where lower permeabilities correlate with higher stiffness values. This suggests a dependence of Young's modulus not only on porosity, but also on pore connectivity, which also controls the permeability. Mechanical data from the experiments show a general trend of increasing strength (compressive and tensile) and stiffness in samples from Phase 1 to Phase 4, with a corresponding decrease in permeability (and an increasing proportion of isolated pores). The samples from Phase 5 show significantly lower strength and stiffness and have both the highest porosity and the highest permeability.
Porosity can therefore be considered a controlling factor in both the strength and the stiffness of volcanic rocks (as described previously for other volcanic rocks; Heap et al., 2014b, 2016a; Schaefer et al., 2015; Colombier et al., 2017; Marmoni et al., 2017; Coats et al., 2018). We compare the correlation between porosity and uniaxial compressive strength in this dataset to published data from other dome-building volcanoes (Volcán de Colima, Mexico; Mount St. Helens, United States; and Mt. Unzen, Japan) and find that our samples fit well with the existing data (Figure 8B).

FIGURE 10 | (A) UCS/UTS ratio as a function of density; (B) UCS/UTS ratio as a function of permeability; (C) UCS/UTS ratio as a function of average Schmidt hammer rebound value (R_L); and (D) UCS/UTS ratio as a function of Young's modulus. Phase averages are shown in each case. A higher UCS/UTS ratio correlates with lower permeabilities, higher Schmidt hammer rebound values, and higher Young's modulus values.

Although we speculate that the properties identified in this study could suggest a temporal evolution in mechanical behavior at Soufrière Hills, we show here that examining the mechanical properties as a function of the physical rock properties may be more appropriate. Although cracks are present in these samples (particularly in Sample M), we note that the samples in this study do not show the pervasive micro-fractured textures that have been observed in similar andesites from Volcán de Colima (Heap et al., 2014a). The QEMSCAN images highlighting porosity (Figures 3E-H) show that the samples with higher porosities (e.g., Sample J from Phase 5) have larger, more heterogeneously distributed pore space with a higher degree of connectivity. Lamur et al. (2017) showed that the addition of a macro-fracture to samples with relatively high porosity (above 18%) has little impact on the resultant permeability, and as such we surmise that permeability in our sample suite is controlled by pre-existing pore connectivity rather than by pervasive fractures. Further, we also demonstrate that pore morphology and connectivity have an important control on the mechanical properties (UCS, UTS, and Young's modulus); where total porosity is similar (Phases 1, 3, and 4), connectivity (and thus permeability) that is lower in Phase 1, lower again in Phase 3, and lowest in Phase 4 corresponds to significant increases in compressive strength (7% from Phase 1 to 3 and 85% from Phase 3 to 4), tensile strength (16% from Phase 1 to 3 and 30% from Phase 3 to 4), and stiffness (Young's modulus; 35% from Phase 1 to 3 and 53% from Phase 3 to 4). By showing that the rocks are not heavily micro-fractured and that pore connectivity is a controlling factor in mechanical behavior, we also demonstrate that the differences found between the rocks in this study are unlikely to be due to damage during transport in pyroclastic density currents, and rather represent the textural heterogeneity of the eruptive products. In order to establish whether porosity exerts the only control on the mechanical properties of the rocks tested here, we also examine the mineralogy of the samples. Variation in glass, silica polymorph, and plagioclase content is non-systematic through time, although we do see co-variance of a number of physical and mechanical properties.
For example, total crystallinity (Table 2) as a proportion of the solid fraction of each sample (i.e., excluding the glass and silica polymorph phases) correlates positively with the mechanical behavior (Figure 11), with the lowest crystallinity (Phase 5, 62-66% crystallinity) corresponding to the lowest rock strength and Young's modulus (UTS = 1.0 MPa, UCS = 6.6 MPa, YM = 2.9 GPa), and the highest crystallinity (Phase 4, 75-81% crystallinity) corresponding to the highest rock strength and Young's modulus (UTS = 2.8 MPa, UCS = 49.9 MPa, YM = 10.7 GPa). Such strengthening with increasing crystallinity has been noted in partially crystalline polymers (e.g., Brady, 1976). The crystallinity-strength relationship at a dome-building volcano was discussed by Bain et al. (2019), where low-crystallinity samples were associated with short repose times between volcanic explosions, and therefore short residency times within the upper conduit and dome. We speculate that a longer residence time at elevated temperature within the volcano leads to increased densification of the material as well as increased crystallization. This could have particular importance when considering the likely mechanical behavior of dome rock. The relationship between crystal fraction and strength was modeled up to 40% crystallinity by Heap et al. (2016b), who found that UCS decreased with increasing crystal content up to 15%; our system differs in that it exceeds maximum loose packing, as the groundmass has crystallized and interlocked in situ, and thus contrasts with the simplified two-phase system modeled in Heap et al. (2016b). As observed in previous work (e.g., Zorn et al., 2018), porosity and crystallinity are inversely proportional (Figure 11); the more porous samples have lower crystallinity and are more glassy than the denser samples. Thus, despite the correlation between crystallinity and strength, it is difficult to determine whether crystallinity has an independent effect within the sample suite tested, as porosity is generally believed to impart the greatest control on strength (Kendrick et al., 2013; Heap et al., 2014a, 2016b; Farquharson et al., 2015; Schaefer et al., 2015). We also use Schmidt hammer testing to support the laboratory results. The Schmidt hammer is a well-known tool for field testing to infer both UCS and Young's modulus (Katz et al., 2000; Yilmaz and Sendir, 2002; Dinçer et al., 2004; Yagiz, 2009). We do not directly correlate our Schmidt hammer results to UCS values here, owing to the variability in published correlations; however, the raw data from the Schmidt hammer index testing show a similar trend to the UCS results (Figures 10B,C). This supports our UCS data by providing analysis of a larger sample set, although the Schmidt hammer results differ from the UCS results in indicating more similar strengths between the samples from Phases 3 and 4. The slight discrepancy between the Schmidt hammer data and the experimental results likely arises from the sensitivity of the Schmidt hammer to sample porosity (Yasar and Erdogan, 2004; Aydin and Basu, 2005; Yagiz, 2009). As the rock porosities have very similar ranges in Phases 1, 3, and 4, we suggest that the Schmidt hammer is insensitive to the small differences in pore connectivity evidenced by the permeability differences, which seem to correlate with the tensile and compressive strength as well as the stiffness observed in the mechanical tests.
The Schmidt hammer does, however, show clearly that the samples from Phase 5 are the weakest material tested.

Links to Eruptive Activity
We find a slight increase in strength from Phase 1 to Phase 3 (Figures 8A,C,D), as well as slightly lower permeabilities than in Phase 1 and a significant increase in the recrystallization of glass to silica polymorphs, which can serve to block pores by vapor-phase deposition (Horwell et al., 2013) and decrease permeability. The lack of explosions during Phase 3 (Wadge et al., 2014), and the enhanced residence time in the lava dome as a result, could explain these textural differences from the earlier phases of the eruption. Phase 3 had one major collapse, on May 20, 2006 (from which the Phase 3 samples in this study were collected), compared with several collapses in the earlier phases. The average extrusion rates are, however, very similar in Phases 1 and 3, at 4.5 and 5.3 m³ s⁻¹, respectively. This could explain the similar porosities of the samples from each phase (e.g., Collombet, 2009), and therefore the similarities in strength (e.g., Coats et al., 2018). It is important to note that the extrusion rates within each phase were highly variable, as shown in Figure 2, and therefore the rock properties defined in this study are likely to be determined by short-term emplacement conditions, rather than being representative of the whole eruptive phase. Unlike the other eruptive phases at Soufrière Hills, Phase 4 occurred in two short episodes, from August 8, 2008 until October 8, 2008, and then from December 2, 2008 until January 3, 2009 (Stinton et al., 2017). The samples from Phase 4 were collected from the explosion on July 29, 2008 and are the strongest of the erupted products tested here. The other rocks in this study sample events that occurred during periods of active extrusion and so are likely to have been stored in the dome for shorter time periods, whereas the Phase 4 products follow a period of quiescence and are likely to have had longer residence times within the lava dome. Previous work (Horwell et al., 2013) has shown that recrystallization occurring after the emplacement of material within the dome is likely to increase the fraction of silica polymorphs (likely cristobalite) at the expense of glass. Horwell et al. (2013) suggested that, by additionally filling pore space with recrystallized silica polymorphs, rock strength may be increased; although it is difficult to distinguish between all the contributing variables, the recrystallization of interstitial glass to silica polymorphs (Table 2) is highest in the strongest samples, from Phases 3 and 4. It is clear that understanding the events preceding each collapse (Table 1) is an important factor in determining a rock's history, and therefore its likely mechanical properties. For example, although the samples from Phases 4 and 5 in this study were both collected from deposits associated with explosions, they exhibit very different mechanical properties. The July 29, 2008 event marked the beginning of Phase 4a and was preceded by no extrusion (Table 1); therefore, the material from this event is likely to be mechanically distinct from material that collapses during extrusion. This is important to feed into future numerical models, as it suggests increased mechanical strength from alteration following increased repose time.
Phase 5 at SHV was also short-lived compared with Phases 1 and 3, but was punctuated by several Vulcanian explosions and did not feature the frequent small-scale collapses seen in Phase 1 (Stinton et al., 2014a). The time-averaged extrusion rate during Phase 5 is estimated at 7 m³ s⁻¹. The samples from Phase 5 have larger phenocrysts than samples from the previous two phases (Figure 3), suggesting a longer crystallization time of the magma prior to final ascent and eruption. This could be due to the absence of wholesale dome collapse after May 2006 (Figure 2): the standing dome plugged the upper conduit, impeding magma extrusion. We also suggest that the high permeability of the Phase 5 samples contributed to efficient outgassing of the dome, leading to relatively degassed magma, as previously observed by Cole et al. (2014). All dome material emplaced from the beginning of the eruption in 1995 until May 2006 was removed by repeated collapse events (Wadge et al., 2014). Extrusion resumed almost immediately after the May 2006 collapse, and dome growth in Phases 4 and 5 occurred primarily on top of the remaining Phase 3 dome. The February 2010 collapse likely removed most of the material emplaced in Phase 4, suggesting that the dome that still remains on Montserrat mostly comprises material emplaced in Phases 3 and 5. We therefore suggest that future modeling efforts for the current dome include rock heterogeneity (both temporal and, if available, spatial), as this could significantly influence overall structural stability (e.g., Schaefer et al., 2013).

CONCLUSION
We present here a study of the physical and mechanical properties of a suite of temporally constrained rocks from Soufrière Hills volcano (SHV). We clearly demonstrate the variability and co-variance of the physical and mechanical rock properties (porosity, permeability, UCS, UTS, Young's modulus, and Schmidt hardness) across a broad spectrum of volcanic rocks representative of the extruded products of SHV (e.g., Formenti and Druitt, 2003). These parameters vary extensively for the materials tested. Across all phases, we observe a range in connected porosity of 19.7-40.2%, permeability of 10⁻¹⁵ to 10⁻¹¹ m², tensile strength of 0.53-4.15 MPa, compressive strength of 6.2-51.1 MPa, Young's modulus of 1.39-12.29 GPa, and Schmidt hammer rebound values of 12.5-47.9. We find that, while porosity has a dominant control on strength and Young's modulus, higher pore connectivity (at a given porosity) also weakens the material, decreases the UCS/UTS ratio, and enhances permeability by up to two orders of magnitude. In addition, we show that more crystalline samples have lower porosity and the lowest proportion of pristine glass. Both higher total crystallinity and greater recrystallization of glass into silica polymorphs correlate with higher strength and Young's modulus in our sample suite, though these also correlate positively with the control that porosity has on strength, and thus crystallinity is judged to have a lesser influence. The temporal evolution, from the samples tested in the laboratory and field in this study, indicates an increase in rock strength from Phase 1 to Phase 3 to Phase 4, followed by a large decrease in strength in samples from Phase 5 of the eruption, with all samples following the same physical and mechanical relationships defined above.
We acknowledge that the samples tested in this study provide only a "snapshot" of the phases of a complicated eruptive history at SHV, and that more samples from varied locations would be required to test whether this trend truly holds for the eruption as a whole. However, our dataset demonstrates a large range in mechanical properties (strength and stiffness) that can be linked to the rocks' texture (porosity and crystallinity) and permeability, and we use field Schmidt hammer testing, which correlates well, to support the laboratory investigation. We conclude that, even at a volcano with a narrow range of eruptive material and chemical composition, taking single values for mechanical parameters is insufficient for the purposes of numerical modeling. Consequently, the inclusion of temporal and spatial heterogeneity should be strongly considered in future structural stability models.

AUTHOR CONTRIBUTIONS
CH led the project, conducted fieldwork and experiments, prepared all figures, and wrote the manuscript. JK and AL helped to conceptualize the project, conducted experiments, assisted with data processing, and improved the manuscript and figures. MT and AS conducted fieldwork, helped to conceptualize the project, and revised the manuscript. PW and JU conducted the QEMSCAN analysis and processing, and revised the manuscript. WM and JN provided useful discussion and revised the manuscript. YL helped to conceptualize the project and improved the manuscript.

FUNDING
CH was funded through a NERC studentship as part of the Leeds York Spheres Doctoral Training Partnership (DTP) (Grant No. NE/L002574/1). JN acknowledges the Centre for the Observation and Modelling of Earthquakes, Volcanoes and Tectonics (COMET). YL acknowledges financial support from the European Research Council Starting Grant on Strain Localisation in Magma (SLiM, No. 306488).
2019-02-06T14:02:21.980Z
2019-02-06T00:00:00.000
{ "year": 2019, "sha1": "7b21da7a1189eeec82e59f17a59d1da00c779350", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/feart.2019.00007/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "d3b49ccb15baadbbf33ebf4e13b6cff31a9f42b4", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [ "Geology" ] }
29434379
pes2o/s2orc
v3-fos-license
Psoriatic Alopecia in a Patient with Systemic Lupus Erythematosus
Psoriasis is a chronic, recurrent, and relatively common inflammatory dermatologic condition, which demonstrates various clinical manifestations including hair loss. It was once believed that alopecia was not a presentation of scalp psoriasis, but it is now widely accepted that psoriatic alopecia exists. Although the majority of patients achieve hair regrowth, the condition can potentially lead to permanent hair loss. Herein, we report the case of a 26-year-old female patient with systemic lupus erythematosus who presented with scalp hair loss and nonpruritic scaly plaques on the scalp. Her clinical presentation and dermoscopic and histopathologic findings were consistent with psoriatic alopecia. Additionally, we describe a novel scalp dermoscopic pattern of "patchy dotted vessels", which we detected in the lesion of scalp psoriasis.

Introduction
Psoriasis is a chronic, recurrent, and relatively common inflammatory dermatologic condition, which demonstrates various clinical manifestations including hair loss. It was once believed that alopecia was not a presentation of scalp psoriasis, but it is now widely accepted that psoriatic alopecia exists [1]. Herein, we report a female patient with systemic lupus erythematosus who presented with scalp hair loss and nonpruritic scaly plaques on the scalp.

Case Report
A 26-year-old woman presented with scalp hair loss that had been noticed for 1 month. She also reported nonpruritic scaly plaques on the scalp. She denied a history of hair pulling. Eleven years earlier, she had been diagnosed with systemic lupus erythematosus on the basis of a malar rash, polyarthritis, proteinuria, a positive antinuclear antibody (ANA), and a positive anti-double-stranded DNA (dsDNA), and she was currently in a period of remission. She had been treated with chloroquine for 10 years, and the current dosage was 250 mg 4 times a week. She reported no family history of psoriasis. Physical examination showed multiple scaly erythematous alopecic plaques on the frontal and both parietal regions of the scalp (Fig. 1a). The hair pull test was positive and revealed all telogen hairs. There were no nail, mucosal, or other skin lesions. Other physical examinations were unremarkable. Initially, the thick silvery scales were obtained for potassium hydroxide examination, but no organisms were seen. Dermoscopic evaluation showed decreased hair density, increased vellus hairs, diffuse white scales, and numerous groups of dotted vessels distributed on the scalp (Fig. 1b). A 4-mm punch biopsy of an alopecic scaly erythematous plaque of the scalp was performed for routine histopathologic examination. Histopathologic findings included superficial perivascular infiltration mainly composed of neutrophils, psoriasiform epidermal hyperplasia, parakeratosis, dilated tortuous blood vessels, a decreased number of terminal hairs, and sebaceous gland atrophy (Fig. 2). Correlating the clinical, dermoscopic, and histopathologic findings, a diagnosis of psoriatic alopecia was established. Treatment in our case included topical desoximethasone 0.25% scalp lotion applied to the scalp lesions twice daily, tar shampoo, and olive oil applied to the scalp before shampooing. After 3 weeks of treatment, she reported significant improvement, with a reduction in scales and stabilization of hair shedding (Fig. 3a). She achieved 75% hair regrowth by 3 months after therapy and had no recurrence of alopecia at the 1-year follow-up (Fig. 3b).
Discussion
Psoriasis is a chronic, recurrent, and relatively common inflammatory dermatologic condition which exhibits various clinical manifestations. It was believed that alopecia was not a presentation of scalp psoriasis until approximately 4 decades ago [1]. Nevertheless, it is now widely accepted that psoriatic alopecia exists. Hair loss is not restricted to erythrodermic and generalized pustular psoriasis, but is also seen in individuals with plaque-type psoriasis [2]. To date, the pathogenesis of psoriatic alopecia remains to be determined [3]. Alopecia could be a result of telogen effluvium secondary to an inflammatory process, or of mechanical changes due to friction [4]. Another possible explanation is that normal hair growth may be disturbed by thick adherent scales, causing an inability of the hair shafts to grow normally [5]. Recently, it has been reported that hair loss in psoriatic skin may result from abnormal sebaceous gland function [6]. In 1972, Shuster [1] first described 3 types of psoriatic alopecia: psoriatic alopecia confined to lesional skin, acute hair fall associated with telogen effluvium, and psoriatic destructive or scarring alopecia. Psoriatic alopecia confined to lesional skin, which is the most common type, is characterized by nonscarring alopecia, finer hairs, and an increased number of dystrophic bulbs on silvery plaques. The second type, acute hair fall associated with telogen effluvium, is usually found in individuals suffering from severe psoriasis. Psoriatic scarring alopecia, the third type, is the least frequent form, with only a few published reports [1,4]. In the largest case series, scarring alopecia secondary to psoriasis was noted in 12% of patients with psoriatic alopecia [5]. It has been suggested that a history of severe psoriasis, long-standing scalp involvement [3], immunosuppression [7], some genetic variants [8], and, probably, staphylococcal infection [2,9] are predisposing factors for the development of follicular fibrosis. The clinical features show cicatricial alopecia on erythematous plaques with silvery scales [3]. The diagnosis of psoriatic alopecia can be established mainly on the basis of the characteristic clinical features. However, Runne et al. [5] found that 66% of affected individuals had never had psoriasis before and that up to 36% experienced only scalp involvement without other manifestations. Therefore, histopathologic examination can be performed in order to confirm the diagnosis as well as to exclude other causes of alopecia. The histopathologic features of psoriatic alopecia include follicular hyperkeratosis, an increased number of telogen hairs, and a perifollicular lymphohistiocytic cell infiltrate around the isthmus and infundibular region [5]. The sebaceous glands are decreased in size and number. However, sebaceous gland atrophy can also be found in scalp psoriasis without alopecia [3]. In psoriatic scarring alopecia, the interfollicular epidermis shows psoriasiform epidermal hyperplasia with parakeratosis, intraepidermal microabscesses of neutrophils, and hypogranulosis. In horizontal sections, the follicular units are reduced in number and replaced by fibrotic tracts. Sebaceous glands are decreased in size and number, or even absent. There is also a moderately dense lymphocytic cell infiltrate surrounding the isthmus and infundibular region [3].
In our case, the differential diagnoses of alopecic scaly erythematous plaques with silvery scales on the scalp consist of psoriatic alopecia, tinea capitis, severe seborrheic dermatitis, and, importantly, chronic cutaneous lupus erythematosus (CCLE), which can occur in association with her underlying condition. Nevertheless, in our case, scalp dermoscopic and histopathologic evaluation showed concomitant findings of psoriatic alopecia, as mentioned above. Kim et al. [10] reported that the most significant dermoscopic features of scalp psoriasis were red dots and globules, twisted red loops, and glomerular vessels. This is consistent with the dermoscopic finding in our case of numerous groups of dotted vessels distributed on the scalp. We coin the term "patchy dotted vessels" for this novel finding, which we detected in the lesion of scalp psoriasis. Similarly, the histopathologic examination also highlighted relatively typical changes of psoriatic alopecia with an absence of CCLE findings. The histopathological features of psoriatic alopecia and CCLE are summarized in Table 1. As a result, CCLE and also other causes of alopecia were confidently excluded. The link between psoriasis and lupus erythematosus, particularly in view of the induction or exacerbation of psoriasis during treatment with synthetic antimalarials, has been reported in the literature [11]. Synthetic antimalarials, including chloroquine and hydroxychloroquine, have been noted to be involved in triggering psoriasis, probably through an alteration of the activity of enzymes that play a role in the epidermal proliferation process [12]. The average latency period for synthetic antimalarials was 3 weeks, but it could take as long as 40.5 weeks in cases of pustular eruptions in patients with pre-existing psoriasis [13]. Although it is worth considering drug-induced psoriasis in our case, it is somewhat unlikely. This is supported by the fact that the latency period would have been far longer than reported, that her condition improved well after conventional treatment, and that there was no exacerbation despite the continuous use of antimalarials. Our case experienced psoriatic alopecia in the absence of pre-existing psoriasis or other psoriatic features. This confirms that alopecia can be the first and only manifestation of psoriasis. To the best of our knowledge, no case of psoriatic alopecia in a patient with systemic lupus erythematosus has previously been described in the literature. Beyond the management of scalp psoriasis in general, no specific treatment for psoriatic alopecia has been reported in the literature. Topical corticosteroid is the mainstay of treatment, as it can inhibit epidermal proliferation, decrease inflammation, and modulate immune functions [14]. Overnight occlusion using a shower cap can enhance drug penetration and efficacy. In patients with thick adherent scales or pityriasis amiantacea, topical keratolytics such as salicylic acid should be given as a first step, in order to remove the thick scales and allow penetration of other topical medications [15]. Calcipotriol is an antipsoriatic agent, as it can reduce epidermal proliferation, enhance normal keratinization, and also exert anti-inflammatory effects. Calcipotriol can be used in combined formulations with topical corticosteroids. Other topical agents with active ingredients such as coal tar, dithranol, retinoids, and antifungals have provided only minimal benefits [14].
Even though the prognosis of psoriatic alopecia is generally favorable, it can occasionally lead to permanent hair loss. It was reported that, of 41 patients, 34 achieved complete hair regrowth, whereas 2 suffered from persistent psoriatic alopecia after 7 years of follow-up. Cicatricial alopecia was observed in 7 patients who failed to regrow hair [5]. Thus, early recognition of the potential for permanent alopecia, together with appropriate treatment, is important in order to minimize the chance of developing irreversible scarring alopecia. In conclusion, we report a case of psoriatic alopecia in a patient with systemic lupus erythematosus in the absence of other psoriatic features. Despite the rarity of psoriatic alopecia, it can be a manifestation of psoriasis and can potentially result in permanent scarring alopecia if it is overlooked or inadequately treated.
2017-09-17T09:10:50.622Z
2017-03-03T00:00:00.000
{ "year": 2017, "sha1": "5be552592f66f55ceb0f7f74a109cf6ea59a1fb9", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/462958", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5be552592f66f55ceb0f7f74a109cf6ea59a1fb9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54869148
pes2o/s2orc
v3-fos-license
Projecting UK Mortality using Bayesian Generalised Additive Models
Forecasts of mortality provide vital information about future populations, with implications for pension and health-care policy as well as for decisions made by private companies about life insurance and annuity pricing. Stochastic mortality forecasts allow the uncertainty in mortality predictions to be taken into consideration when making policy decisions and setting product prices. Longer lifespans imply that forecasts of mortality at ages 90 and above will become more important in such calculations. This paper presents a Bayesian approach to the forecasting of mortality that jointly estimates a Generalised Additive Model (GAM) for mortality for the majority of the age range and a parametric model for older ages, where the data are sparser. The GAM allows smooth components to be estimated for age, cohort and age-specific improvement rates, together with a non-smoothed period effect. Forecasts for the United Kingdom are produced using data from the Human Mortality Database spanning the period 1961-2013. A metric that approximates predictive accuracy under leave-one-out cross-validation is used to estimate weights for the 'stacking' of forecasts with different points of transition between the GAM and parametric elements. Mortality for males and females is estimated separately at first, but a joint model allows the asymptotic limit of mortality at old ages to be shared between the sexes, and furthermore provides forecasts accounting for correlations in period innovations. The joint and single-sex model forecasts estimated using data from 1961-2003 are compared against observed data from 2004-2013 to facilitate model assessment.

Introduction
The future level of mortality is of vital interest to policy makers and private insurers alike, as lower mortality results in greater expenditure on pension payments and higher social care spending. Individuals are living longer because of improved mortality conditions and will reach higher ages in greater numbers as the post-war baby boom cohort ages; thus, forecasts of mortality at the oldest ages are becoming more important. However, these remain challenging to produce, as the available mortality data at these ages are sparse and concentrated in the most recent years. The work of Dodd et al. (2018a) in producing the 17th iteration of the English Life Tables provided a methodology for mortality estimation that combines smoothing based on generalized additive models (GAMs) (Wood, 2006) at the youngest ages with a parametric model at older ages. This paper extends this approach to a forecasting context and introduces period and cohort effects, producing fully probabilistic mortality projections within a Bayesian framework.

Mortality rates
The raw materials for stochastic mortality forecasts are data on the number of deaths d_xt in year t and age last birthday x, and matching population counts P_xt derived from census data adjusted for births, deaths and migration in the intervening period. The appropriate exposures to risk, which are needed for the calculation of mortality rates, can be estimated from these population counts. Most often, the estimated mid-year population totals P_x,(t+0.5) are used to approximate the exposures R_xt directly over the whole year, under the assumption that births, deaths and migrations occur uniformly throughout the year.
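As a minimal worked example of this approximation, the following Python sketch computes central mortality rates from deaths and mid-year population counts; all numbers are hypothetical and chosen only to illustrate the calculation.

import numpy as np

# Hypothetical deaths d_xt and mid-year populations P_x(t+0.5) for ages 70-74
deaths = np.array([450.0, 520.0, 610.0, 700.0, 805.0])
midyear_pop = np.array([21000.0, 20000.0, 19500.0, 18500.0, 17500.0])

exposure = midyear_pop                # R_xt approximated by mid-year counts
m = deaths / exposure                 # central mortality rates m_xt
log_m = np.log(m)                     # the scale on which the models below operate

for age, rate in zip(range(70, 75), m):
    print(f"age {age}: m = {rate:.4f}")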
The observed death rates d_xt/R_xt for the UK for the years 1961, 1981, 2001 and 2013 are displayed in Fig. 1, based on data taken from the human mortality database (Human Mortality Database, 2018). The human mortality database uses a more sophisticated method of approximating the exposure to risk than that described above, accounting for the distribution of deaths within single years of age (Wilmoth et al., 2017). The mortality rates plotted can be seen to decrease with time, and consistently to increase with age beyond early adulthood, as might be expected. The empirical rates appear volatile at higher ages, where there are fewer survivors and therefore less data. The central mortality rate, which is the quantity that we wish to estimate and forecast, is defined as

m_xt = d_xt / R_xt.    (1)

This is equal to the force of mortality or hazard of death μ(x) within the year and age group under the assumption that the force of mortality is constant over that interval. Thatcher et al. (1998) and Keyfitz and Caswell (2005) have provided more detail on the exact relationship between these quantities.

Models of mortality
A large part of the existing literature on stochastic mortality modelling has developed from the work of Lee and Carter (1992). This approach models the log-mortality rate log(m_xt) by using an age-specific term α_x, giving the mean mortality rate for each age x, and a bilinear term β_x κ_t, where the κ-vector describes the overall pace of mortality decline, whereas the β-coefficients describe how this decline varies by age, so that

log(m_xt) = α_x + β_x κ_t.    (2)

This reduces the complexity of the forecasting problem, as only the κ-component varies over time. This can be modelled by using standard Box-Jenkins methods (most often a random walk with drift), which also provide measures of forecast uncertainty. The simplicity of the Lee-Carter model has led to a large range of adjustments and extensions. Brouhns et al. (2002), for example, estimated the parameters through maximization of a Poisson likelihood for the observed deaths, rather than working with a Gaussian likelihood on the log-rates as in Lee and Carter (1992). Renshaw and Haberman (2003a), in contrast, included multiple bilinear age-period terms to capture a greater proportion of the total variation than is possible with a single term. Renshaw and Haberman (2006) went further by adding a cohort term β_x^(2) γ_{t−x} to allow for differences in mortality by year of birth. Models that include cohort terms are attractive as in some countries, and notably in the UK, cohort effects are prevalent in the underlying mortality data, possibly reflecting the different life experiences and lifestyle habits of those born in different periods (Willets, 2004; Cairns et al., 2009). Standard age-period-cohort (APC) models can therefore capture such characteristics of the data but, given the linear dependence in such models (in that c = t − x, with c indexing cohort), identifying constraints are needed for fitting. APC models are widely used in the field of cancer research to make predictions of future cancer rates (e.g. Møller et al. (2003)). The work of Cairns and collaborators (Cairns et al., 2009; Dowd et al., 2010) describes a family of models where mortality is modelled through sums of terms of the form β_x κ_t γ_{t−x}, where β_x refers to age effects, κ_t to period effects and γ_{t−x} to cohort effects. Any of these elements may be constant or deterministic in particular models, and so the Lee-Carter and APC models are incorporated as special cases.
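A minimal sketch of the classical least-squares Lee-Carter fit, using the singular value decomposition of the row-centred log-rates together with the usual identifiability constraints, is given below; the demonstration data are synthetic.

import numpy as np

def fit_lee_carter(log_m):
    # Least-squares Lee-Carter fit via the SVD of row-centred log-rates;
    # log_m is an (ages x years) array. Constraints: sum(beta) = 1 and
    # sum(kappa) = 0 (the latter holds by construction after centring).
    alpha = log_m.mean(axis=1)
    U, s, Vt = np.linalg.svd(log_m - alpha[:, None], full_matrices=False)
    beta, kappa = U[:, 0] * s[0], Vt[0, :]
    scale = beta.sum()
    beta, kappa = beta / scale, kappa * scale
    shift = kappa.mean()              # ~0 already; re-centre exactly
    return alpha + beta * shift, beta, kappa - shift

# Demo on synthetic declining rates: 3 ages, 5 years, small noise
rng = np.random.default_rng(1)
trend = np.array([[-6.0], [-5.0], [-4.0]]) + np.outer([0.5, 0.3, 0.2],
                                                      -0.05 * np.arange(5))
alpha, beta, kappa = fit_lee_carter(trend + 0.01 * rng.standard_normal((3, 5)))
print(round(beta.sum(), 6), round(kappa.sum(), 6))  # -> 1.0 and ~0.0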
The in-sample and forecasting performance of these models was assessed against a number of criteria in Cairns et al. (2009). A notable finding was the lack of robustness of many of the investigated models that included cohort effects; in particular, parameters in such models were found to be sensitive to the fitting period. Furthermore, Palin (2016) has identified some concerns regarding potentially spurious quadratic patterns in cohort effects in several of the models that were discussed above, caused by variation in mortality improvement rates by age being captured in the cohort effect. Renshaw and Haberman (2003b) identified commonalities between the Lee-Carter model and their generalized linear model approach to mortality modelling focusing on mortality reduction factors. Instead of modelling declines in mortality by using a bilinear term b_x κ_t, however, Renshaw and Haberman included a term b_x t that is linear in time, simplifying the fitting process. The b_x-parameters now represent age-specific mortality improvements, where improvements are defined as differences in log-mortality. In a similar vein, and building on the cohort enhancement proposed by Renshaw and Haberman (2006), an APC model for improvements has been developed by the Continuous Mortality Investigation (2016). However, this forces a deterministic convergence to user-specified long-term rates of mortality improvement rather than using time series methods for forecasting. Richards et al. (2017), however, have provided full stochastic forecasts by using the APC for improvements model, fitting time series models to the period and cohort effects, and also found that this model fits the data better in sample than either the APC or Lee-Carter models. The smoothing of mortality rates is important in forecasting applications to avoid roughness in the age profile of log-mortality due to random variation being perpetuated into the future. Various smoothing models have thus been proposed. Hyndman and Ullah (2007) approached the problem of mortality forecasting from within the functional data paradigm. From a different perspective, Currie et al. (2004) fitted a two-dimensional P-spline to mortality and produced forecasts by extending the spline into the future. The penalization of differences in the basis function coefficients that is used in the P-spline method to ensure smoothness in sample also provides for extrapolation. Although this model fits the data well, forecasts that are wholly dependent on extrapolation from splines are likely to be oversensitive to data and trends at the forecast origin. Bayesian methods are also increasingly being employed for mortality forecasting to incorporate prior knowledge about underlying processes, and to provide distributions of future mortality risk accounting for multiple sources of uncertainty. Girosi and King (2008) demonstrated methods for mortality forecasting within a Bayesian framework that allow for smoothing the underlying data together with borrowing strength across regions, as well as jointly forecasting cause-specific mortality. Wiśniowski et al. (2015) used the Lee-Carter method for all three components of demographic change (fertility, mortality and migration), again using Bayesian methods to obtain predictive probability distributions.
The method that is developed in this paper combines elements of many of the approaches above, including allowing for smooth functions of age and cohort, while providing stable estimates of mortality at extreme ages and avoiding some of the problems caused by the lack of robustness in parameter estimation that was discussed above. The model also shares some features with the APC for improvements model of Richards et al. (2017), particularly in the structure of the main part of the model. However, there are some significant points of difference; the model that is described here applies to the entire age range and adopts a Bayesian approach to account for all sources of uncertainty.

Structure
The remainder of the paper is structured as follows: Section 3 sets out the features of the model that are used in later sections. Section 4 details the data that are used and the estimation procedure. Section 5 presents the posterior distributions of the GAM components and provides predictive distributions for log-rate forecasts, and Section 6 displays posterior distributions combined over several alternative models on the basis of in-sample predictive performance, using the method of Yao et al. (2018). Section 7 presents an alternative model where the sexes are fitted jointly, whereas Section 8 compares out-of-sample performance of the single-sex and joint models, using the years 2004-2013. Section 9 contrasts forecasts from the joint model with those made by the UK Office for National Statistics (ONS) (Office for National Statistics, 2016), and the final section offers some conclusions and directions for future work. The programs that were used to analyse the data can be obtained from http://wileyonlinelibrary.com/journal/rss-datasets

Bayesian generalized additive models
GAMs provide a flexible framework for modelling outcomes where the functional form of the response to covariates is not known with certainty but is expected to vary smoothly. The general form for such models is as follows (Wood, 2006):

g{E(y_i)} = x_i θ + s_1(x_{i1}) + s_2(x_{i2}) + ⋯ .

Here, the expectation of the outcome y, possibly transformed by link function g(·), is modelled as the sum of a purely parametric part x_i θ and a number of smooth functions of covariates s(·). Various possible choices exist for the implementation of the individual smooth functions, but P-splines are chosen in this case. P-splines are appealing because they are defined in terms of strictly local basis functions, with the domain of each function defined by a set of knots spread across the covariate space (Wood, 2016). Following the Bayesian P-splines approach of Lang and Brezger (2004), prior distributions are used to represent a belief that adjacent P-spline coefficients β will be close to one another. Multivariate normal prior distributions are used, with the covariance matrix constructed from two matrices: A, providing a penalty on the first differences of the vector of coefficients β, and B, penalizing the null space of A and so ensuring that the resulting prior is proper (Wood, 2016):

β ∼ N(0, (A/σ_A² + B/σ_B²)^{-1}). (3)

Generalized additive models for mortality forecasting
The method of mortality forecasting that is developed in this paper fits a GAM to the majority of the age range, while applying separate parametric models to older age groups and to infants. This enables a flexible but smooth fit where the data allow and imposes some structure on the model where data are sparse, particularly at very high ages.
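As a small illustration of the penalty construction just described, the sketch below builds the prior covariance from a first-difference penalty A plus a null-space penalty B and draws smooth coefficient vectors from the resulting proper Gaussian prior. It assumes the form given in expression (3); the variance values and the basis dimension are arbitrary illustrative choices.

```python
import numpy as np

def pspline_prior_covariance(K, sigma_A=1.0, sigma_B=10.0):
    """Covariance of a Gaussian P-spline prior on K basis coefficients:
    A = D1'D1 penalises first differences (smoothness); B penalises the
    null space of A (the constant vector), making the prior proper."""
    D1 = np.diff(np.eye(K), n=1, axis=0)     # (K-1, K) first-difference matrix
    A = D1.T @ D1                            # rank K-1: constants are unpenalised
    ones = np.ones((K, 1)) / np.sqrt(K)
    B = ones @ ones.T                        # projector onto the null space of A
    return np.linalg.inv(A / sigma_A**2 + B / sigma_B**2)

# A few draws from the prior: neighbouring coefficients move together,
# so the implied smooth functions are wiggle-free.
K = 20
rng = np.random.default_rng(1)
draws = rng.multivariate_normal(np.zeros(K), pspline_prior_covariance(K), size=3)
```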
Deaths d_{xt} are considered to follow a negative binomial distribution parameterized in terms of the mean, which in this case is equal to the product of the relevant exposure E_{xt} and expected death rate m_{xt}. The dispersion φ captures additional variance relative to the Poisson distribution:

d_{xt} ∼ NegBin(mean = E_{xt} m_{xt}, dispersion = φ).

An APC GAM for the log-mortality improvement ratios log(m_{xt}/m_{x,t−1}) could be expressed with P-spline-based smooth functions for age and cohort improvements, and an additional period component κ:

log(m_{xt}/m_{x,t−1}) = s_β(x) + s*_γ(t − x) + κ*_t. (4)

An equivalent expression of this model can be made in terms of mortality rates rather than mortality log-improvement-ratios,

log(m_{xt}) = s_α(x) + t s_β(x) + s_γ(t − x) + κ_t, (5)

with the cohort and period terms now accumulated versions of their equivalents in equation (4). This is the model that is used in the estimation process. There are now two smooth functions of age: s_α(x), which describes the underlying shape of the log-mortality curve, and s_β(x), which describes the pattern of (linear) mortality improvements with age. Knots are spaced at regular intervals in both the age and the cohort direction (every 4 years), with three knots placed outside the range of the data at either end of the age range, enabling a proper definition of the P-spline at the edge of the data. In common with other models involving age, period and cohort elements, constraints are needed to identify the different effects because of the linear relationship between the three components. For this, the cohort component s_γ(t − x) is constrained so that the first and last components are equal to 0, and the sum of effects over the whole range of cohorts is 0. The period components κ_t are similarly constrained to sum to 0 and to display zero growth over the fitting period. The full set of constraints is thus

s_γ(1) = s_γ(C) = 0, Σ_c s_γ(c) = 0, Σ_{t=1}^{T} κ_t = 0, Σ_{t=1}^{T} t κ_t = 0, (6)

with C here indicating the most recent cohort and T the latest year. These constraints ensure that linear improvements in mortality with time are estimated as part of the s_β(x) term. For older ages, a parametric model is adopted because of the sparsity of the data in these regions: the additional structure that is provided by specifying a parametric form guards against overfitting and instabilities in this age range:

m_{xt} = ψ [exp(η_{xt})/{1 + exp(η_{xt})}] exp{s_γ(t − x) + κ_t}, η_{xt} = β_0 + β_1 x + β_2 t + β_3 x t. (7)

A logistic form is used, allowing mortality rates to tend towards a constant ψ as age increases, as in the model in Beard (1963). Such a pattern in mortality at the population level has some theoretical justification, as it can result when heterogeneity ('frailty') is applied to rates that follow a log-linear Gompertz mortality model at the individual level, and this frailty is assumed to be distributed among the population according to a gamma distribution (Vaupel et al., 1979). In the life table context, Dodd et al. (2018a) found that the logistic form performed better than the log-linear equivalent when assessed by using cross-validation techniques. Linear age and time effects are included in the old age model, together with an interaction term, and the cohort and period effects are held in common with the model that is applied to younger ages and are applied multiplicatively to the logistic model. Constraints are also applied to the parameters of the old age model to ensure that the derivative of the parametric part of the model with respect to age (ignoring the period and cohort effects) is never less than 0; this reflects our prior belief that mortality should not decrease with age after middle age.
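A hedged sketch of two ingredients just introduced: the negative binomial likelihood with mean E·m and dispersion φ, and the old-age logistic curve with asymptote ψ as reconstructed in equation (7). The parameter names b0 to b3 and the exact way the period and cohort factors enter are assumptions made for illustration, not the authors' code.

```python
import numpy as np
from scipy import stats
from scipy.special import expit

def neg_binomial_loglik(d, E, m, phi):
    """Negative binomial log-likelihood for deaths with mean mu = E*m and
    dispersion phi, so Var = mu + mu**2/phi (extra-Poisson variation)."""
    mu = E * m
    p = phi / (phi + mu)            # scipy's (n, p) parameterisation
    return stats.nbinom.logpmf(d, phi, p).sum()

def old_age_log_rate(x, t, psi, b0, b1, b2, b3, s_gamma, kappa):
    """Old-age log-rate: logistic in a linear age/time predictor with an
    interaction, asymptote psi, and multiplicative period/cohort factors,
    following the reconstruction of equation (7)."""
    eta = b0 + b1 * x + b2 * t + b3 * x * t
    return np.log(psi * expit(eta)) + s_gamma(t - x) + kappa[t]
```

Because expit(eta) tends to 1 as eta grows, the rate approaches psi (times the period and cohort factors) at extreme ages, which is the Beard-type behaviour described in the text.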
The constraints that are required are as follows, with H describing the most distant time for which forecasts are desired:

β_1 + β_3 t ≥ 0 for all t up to H,

so that the linear predictor η_{xt} in equation (7), and hence the rate, is non-decreasing in age over both the fitting and forecast periods. Infant mortality is also excluded from the GAM, as it behaves differently from mortality at other ages. The model for infants is given a similar structure to the old age model, except that the period effect κ_t is excluded, as variation in infant mortality with time does not appear to follow the same pattern as it does over the rest of the age range. The period-specific effects κ_t in equations (5) and (7) are common across ages and capture deviations from the linear trend that is described by the smooth improvements s_β. These effects are not modelled as smooth, as they may capture effects such as weather conditions or infectious disease outbreaks that would not be expected to vary smoothly from year to year. The innovations in these period effects are given a normal prior with variance σ_κ², so that

κ_t = κ_{t−1} + ε_t, ε_t ∼ N(0, σ_κ²). (10)

However, these effects are constrained to identify the APC model, so we need to account for this by conditioning on the two period constraints that are given in equation (6). This is achieved by transforming the ε-parameters by using a matrix Z, constructed so that the final T − 2 parameters remain unchanged, but the first two transformed parameters will equal 0 if the constraints on the cumulative sum of the ε-series hold (see the on-line appendix). The resulting vector η = Zε has a multivariate normal distribution

η ∼ N(0, Σ), Σ = σ_κ² Z Zᵀ. (11)

A distribution conditioning on the first two elements of η, denoted η†, equalling 0 can be obtained by using standard results for the multivariate normal distribution. This conditional prior on η* (which contains the last T − 2 elements of η) is the distribution that is used for sampling, and the full set of values of ε can then be recovered deterministically:

η* | η† = 0 ∼ N(0, Σ_{**} − Σ_{*†} Σ_{††}^{-1} Σ_{†*}),

where subscripts on the covariance matrices indicate partitions, so Σ_{*†} is the submatrix of Σ with rows corresponding to η* and columns to η†. For forecasts, innovations of the period coefficients are unconstrained and so have independent normal distributions with variance σ_κ². The same method is used to define a distribution for the innovations in the basis function coefficients for the cohort spline, accounting for the cohort constraints in equation (6) and replacing the prior in expression (3). In contrast with the period effects, however, the transformation matrix that is used accounts for the fact that the constraints apply to the resulting smooth function and not the coefficient values themselves. Knots for the basis functions of the cohort smooth are evenly spaced along the range of cohorts to be estimated, so forecasts of future cohort values can be obtained by drawing new coefficient innovations from the normal distribution with mean 0 and variance σ_γ², which replaces σ_A² and σ_B² for this effect. Full details are given in the on-line appendix. Priors for the model hyperparameters are generally vague, although not completely uninformative. The adoption of weakly informative priors aims to capture something about the expected scale and location of the parameters in question; this aids convergence of the Markov chain Monte Carlo samples, but with reasonable amounts of data should not affect the final inference to any great extent. The scale of the data and covariates is also important in determining the interpretation of these priors; the use of standardized age and time indices means that regression coefficients are unlikely to take large values. In the prior specification, a subscript plus sign appended to the normal distribution, N₊, indicates that only the positive part of the normal distribution is used and therefore refers to a half-normal distribution.
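The conditioning step described above is a standard partitioned-Gaussian computation, carried out below for a generic transformed vector whose first k elements are pinned to 0. The matrix Z shown is purely a stand-in (the paper's Z encodes the constraints on the cumulative ε-series and is given in the on-line appendix), but the conditional mean and covariance formulas are exactly those quoted in the text.

```python
import numpy as np

def condition_on_zero(mu, Sigma, k):
    """Moments of the last elements of a MVN given that the first k equal
    zero, using the standard partitioned-Gaussian formulas."""
    S_dd, S_sd, S_ss = Sigma[:k, :k], Sigma[k:, :k], Sigma[k:, k:]
    gain = S_sd @ np.linalg.inv(S_dd)
    cond_mu = mu[k:] - gain @ mu[:k]        # conditioning value is 0
    cond_Sigma = S_ss - gain @ S_sd.T
    return cond_mu, cond_Sigma

# Illustration for T period innovations with variance sigma_k**2; the first
# two rows of this Z merely mimic a sum-type and a growth-type constraint.
T, sigma_k = 10, 0.5
Z = np.vstack([np.ones(T) / T,
               (np.arange(T) - (T - 1) / 2) / T,
               np.eye(T)[2:]])              # remaining coordinates untouched
Sigma_eta = sigma_k**2 * (Z @ Z.T)
mu_c, Sigma_c = condition_on_zero(np.zeros(T), Sigma_eta, 2)
```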
Estimation
Samples from the posterior distributions of the parameters and rates were drawn by using Hamiltonian Monte Carlo sampling, and specifically using the stan software package (Stan Development Team, 2015). stan and its interface in the R programming language (R Core Team, 2017) allow the construction of a Hamiltonian Monte Carlo 'no-U-turn sampler' (Hoffman and Gelman, 2014) from a simple user specification of the Bayesian model to be estimated. The code that is required to fit the model is also available from https://github.com/jasonhilton/mortality-bgam. Hamiltonian Monte Carlo sampling is a special case of the more general Metropolis-Hastings algorithm for Markov chain Monte Carlo sampling and uses the derivatives of the log-posterior with respect to the parameters of interest in the sampling process, often enabling the posterior to be traversed much more quickly than under standard methods (Neal, 2010). The model was fitted using Human Mortality Database data for the UK from 1961 to 2013 (Human Mortality Database, 2018). The first five cohorts (those born before 1856) were excluded, as exposures are very low for these groups. Four parallel chains were constructed, each with 8000 samples, and the first half of each chain was used as a warm-up period (during which stan tunes the algorithm to best reflect the characteristics of the posterior) and discarded. Parallel chains were used to better assess convergence to the posterior distribution; the diagnostic measure that was advocated by Gelman and Rubin (1992) indicates that all parameters have converged to an acceptable degree. The 16000 post-warm-up samples were 'thinned' by a factor of 4, by discarding three values in four, to avoid excessive memory usage, leaving 4000 posterior samples for inference for each model.

Initial results
Some preliminary results are displayed in this section, conditional on a particular choice for the point of transition between the GAM and the parametric old age model. Fitting a similar model to ONS data for England and Wales for 2010-2012, Dodd et al. (2018a) found by using cross-validation methods that the most probable points of transition were age 91 years for females and 93 years for males. Samples were obtained for models using these transition points, and the posterior distributions of the parameters of the GAM model are given in Figs 2 and 3 for males and females respectively. The colour scheme in these plots identifies intervals containing various proportions of the posterior density, so that the darkest represents the central 2% interval, whereas 90% of the posterior density is contained between the lightest bands. The distributions of mortality improvement rates for both males and females display greater uncertainty at younger ages where there are fewer deaths. As might be expected, uncertainty for cohort effects increases for the oldest and most recent cohorts, as these have the fewest data points. Note that it is the differenced cohort and period effects (s*_γ(t − x) and κ*_t from equation (4)) that are plotted rather than their summed equivalents.
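The convergence check mentioned above can be reproduced directly from the raw draws; a minimal version of the Gelman-Rubin diagnostic for a single scalar parameter is sketched below. Modern implementations use a rank-normalised, split-chain variant, which is omitted here for brevity.

```python
import numpy as np

def gelman_rubin_rhat(chains):
    """Gelman-Rubin convergence diagnostic for one scalar parameter.
    chains: (n_chains, n_samples) post-warm-up draws; values near 1 indicate
    the chains agree on both location and spread."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled variance estimate
    return np.sqrt(var_plus / W)
```

Thinning by a factor of 4, as described in the text, is then simply chains[:, ::4].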
Differences between the sexes are most notable in the age-specific component, for which the accident hump for young males is more prominent, and in the improvement rates, for which males show lower rates of improvement than females in their late 20s. Cohort and period contributions to mortality decline show similar but not identical patterns for each sex. Posterior distributions for log-rates generated from this model fit the data relatively closely. However, Fig. 4 displays forecasts of log-rates at 50 years into the future, which, although appearing reasonable, contain small discontinuities at the point of transition between the GAM and the parametric model. The discontinuity is particularly evident in the forecast for males. This suggests that some sort of averaging over or combination of models using different transition points might be advisable.

Transition points and model stacking
The choice that is made regarding the age at which the model transitions from the GAM (which is used over the majority of the age range) to the parametric model for old ages is essentially arbitrary; we do not believe that there is a switch between data-generating processes at some point x_old, but rather that the task of predicting mortality is better served by two models. There is thus no 'true' value for the point of transition, and decisions regarding transition should be governed by model performance. The methodology that was used in the latest English Life Tables (Dodd et al., 2018a) used cross-validation to obtain posterior weights over a set of models M defined by K different points of transition, based on mortality data from 2010 to 2012. In that analysis, age 91 years for females and 93 years for males were the most probable points of transition, and the final predictive distribution was obtained by averaging over models using the calculated weights. However, the model that is described here differs from that used in Dodd et al. (2018a) in that it varies in time and applies to a period spanning many years, so the question of the distribution of the transition between the parametric model and the GAM must be revisited. Separate models were therefore estimated for transition points ranging from 80 to 95 years, and their accuracy was assessed by using the leave-one-out information criterion (LOOIC), developed by Vehtari et al. (2016). The LOOIC is a measure of how well we might expect a model to perform in predicting a data point without including it in the data that are used to fit the model. It is based on an approximation of the leave-one-out log-pointwise predictive density Σ_{i=1}^{n} log{p(y_i | y_{−i})}, where the y_{−i} subscript indicates a data set excluding the ith observation, θ is a vector of parameters and

p(y_i | y_{−i}) = ∫ p(y_i | θ) p(θ | y_{−i}) dθ.

Rather than fitting the model n times (once for every data point), Vehtari et al. (2016) provided a method for approximating the LOOIC from just one set of posterior samples of the predictive density computed from the full data set, implemented within the loo R package. This uses importance sampling to approximate the leave-one-out log-predictive-density, correcting for instabilities caused by the potentially high or infinite variance of some importance weights by fitting a Pareto distribution to the upper tail of the raw weights. The LOOIC scores for males and females for the models with transition points k = 80, 81, …, 95 years are given in Fig. 5.
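The importance-sampling idea behind the LOOIC can be written down compactly. The sketch below implements the basic estimator of the leave-one-out log predictive density from a matrix of pointwise log-likelihood evaluations; the Pareto-smoothing step that the loo package applies to stabilise the weights is deliberately omitted, so this is a cruder version of the published estimator.

```python
import numpy as np
from scipy.special import logsumexp

def loo_elpd(log_lik):
    """Approximate leave-one-out expected log predictive density.

    log_lik: (n_samples, n_obs) pointwise log-likelihood of each observation
    at each posterior draw. Uses the basic importance-sampling identity
    log p(y_i | y_-i) ~= log S - logsumexp_s(-log p(y_i | theta_s)),
    i.e. the harmonic mean of the pointwise densities over draws.
    """
    S = log_lik.shape[0]
    elpd_i = np.log(S) - logsumexp(-log_lik, axis=0)
    return elpd_i.sum(), elpd_i
```

On the information-criterion scale, LOOIC = -2 * elpd, so lower LOOIC values, as in Fig. 5, correspond to better approximate predictive performance.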
Later cut points tend to be preferred because the greater flexibility of the GAM gives lower LOOIC values even at relatively high ages, although the absolute differences between the models are small. Models with points of transition above age 95 years are not considered, as this would leave too few data points with which to estimate the old age model effectively. Although the LOOIC is not a measure of forecast performance as such, as it is focused on how the model would perform at predicting data points that are contained within the original data set and does not consider the times at which data points become available, it does provide an indication of how well the specified models reflect the structure of the data. Following the work by Yao et al. (2018), these LOOIC values can be used as the basis for 'stacking' the predictive distributions of each model to obtain a distribution which combines models in a principled way, with weights determined by approximate cross-validation performance. Stacking is often used for averaging over point estimates in ensemble models, but Yao et al. (2018) extended the approach to apply to combining distributions. More specifically, the weights w_k are chosen to maximize the leave-one-out log-score of the combined predictive distribution, Σ_{i=1}^{n} log{Σ_k w_k p(y_i | y_{−i}, M_k)}, subject to w_k ≥ 0 and Σ_k w_k = 1. The estimated model weights are shown in Fig. 6; the greatest individual weight is given to models with the latest points of transition, reflecting the pattern in the LOOIC measure. Other models with earlier transition points are also given weight, however, reflecting that they perform well at predicting some data points which are not so well estimated by the late transition model. Samples from the combined posterior predictive distribution were obtained by using the estimated weights, sampling from the posterior distribution that is associated with each model in proportion to its weight. The resulting stacked forecasts are given in Fig. 7; the discontinuities that were seen previously are now smoothed out through the process of taking the weighted combination of distributions.

Jointly modelling male and female mortality
In the work that was described above, models for males and females were estimated separately. However, much of what drives the underlying processes of mortality and how it changes over time is likely to be common between sexes. Thus, we may gain from borrowing strength across models and also from explicitly representing covariances between parameters for each sex, as in Wiśniowski et al. (2015). Because males tend to die sooner than females, there are fewer data points (i.e. lower total exposure) with which to estimate parameters in the old age model. For this reason, the parameter ψ, representing the asymptote of the logistic function in the old age model, is now shared between sexes. We also allow the innovations in the period effects κ_t to be correlated, so that joint forecasts can be generated accounting for the fact that, in potential futures where mortality for females is high, it will tend to be high for males as well. The joint distribution for the period innovations for both sexes, conditional on the constraints, is obtained in a similar way to that for the single-sex models, described in Section 3. Full details are given in the on-line appendix. As before, LOOIC scores and model weights were obtained for the joint model (Fig. 8). The patterns of LOOIC scores and weights are similar to those for the separate models, with the highest transition point obtaining most weight, but considerable weight also attached to earlier transitions. Joint forecasts of log-mortality are displayed in Fig. 9.
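A simplified stand-in for the stacking computation is sketched below: given pointwise leave-one-out log-densities for each model, it maximises the combined log-score over the simplex via an unconstrained softmax parameterisation, then samples the stacked predictive by choosing a model per draw in proportion to its weight. Yao et al. (2018) solve the same optimisation with dedicated convex machinery; the function and array names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

def stacking_weights(elpd_point):
    """Stacking weights over K models from an (n_obs, K) matrix of
    log p(y_i | y_-i, M_k); maximises sum_i log(sum_k w_k p_ik)."""
    # Per-row rescaling only shifts the objective by a constant.
    p = np.exp(elpd_point - elpd_point.max(axis=1, keepdims=True))

    def neg_score(z):
        return -np.log(p @ softmax(z)).sum()

    K = elpd_point.shape[1]
    res = minimize(neg_score, np.zeros(K), method="BFGS")
    return softmax(res.x)

def sample_stacked(posteriors, weights, n_draws, seed=0):
    """Draw from the stacked predictive by picking a model for each draw
    with probability equal to its weight (posteriors: list of (S, ...) arrays)."""
    rng = np.random.default_rng(seed)
    ks = rng.choice(len(posteriors), size=n_draws, p=weights)
    return np.stack([posteriors[k][rng.integers(posteriors[k].shape[0])]
                     for k in ks])
```

Mixing whole posterior draws in this way is what removes the transition-point discontinuities seen in Fig. 4: each sampled forecast is internally coherent, and the combination smooths across transition ages.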
The estimated correlation in the innovations of the period effects (the off-diagonal elements of P) is high, generally above 95%.

Model assessment
To assess the robustness and forecasting accuracy of the models that were described above, fitting was conducted on a truncated data set, excluding the years 2004-2013. Robustness was then assessed by comparing posterior means of the main smooth functions estimated on this reduced data set against the same quantities estimated on all the data. Fig. 10 displays such a comparison for males, plotting posterior means for each point of transition and fitting period (1961-2003 and 1961-2013), with panels for the age, cohort, improvement and period effects. Estimates of period and cohort effects are relatively stable, particularly in the interior of the data. Although some differences are evident in the pattern of improvements, the general shape of the curve is notably similar, and the downward shift appears to reflect real increases in the rate of mortality decline after 2003, particularly for younger adults. The shape of the age effect is again very similar, and the differing location of the smooth curve is accounted for by a change in the location of the intercept of the time index in equation (5) for different data periods. Both the single- and the joint-sex models that were presented above appear to give reasonable forecasts for future mortality. Figs 11 and 12 display predictive distributions and empirical rates for younger and older ages respectively. Comparing the predictive posterior distributions against the observed outcomes, it is evident that, for most of the age range, empirical rates fall within the 90% predictive interval. The exception is young adult males, between the ages of about 15 and 40 years, for whom recent drops in mortality far outpace those seen in the observed data for 1961-2003. More formal assessments of forecast performance are difficult, as we observe only one correlated set of outcomes (i.e. male and female log-rates for 2004-2013). Focusing on older ages (Fig. 12), we can see that there are few differences between the predictive distributions of the joint- and single-sex models, and those that are evident occur only at high ages. In part, this may be because the weighting procedure works to select models with similar properties. Other considerations may be taken into account when deciding between the two models; the joint model is more parsimonious, in that fewer parameters are required to fit it, and it allows for correlations in the paths of mortality by sex to be taken into account. In contrast, the single-sex model is less computationally demanding, particularly with respect to memory, as each sex is fitted and processed separately.

Comparison with official projections and variants
The final stacked forecasts from the joint model in the previous section are now compared with forecasts that are produced by the UK ONS in the 2014-based national population projections (NPPs) (Office for National Statistics, 2016). These work with the predicted probabilities of death q_x rather than the central mortality rates m_x; the former represents the probability of dying by age x + 1 given that an individual attains age x. Posterior predictive samples of q_{xt} were acquired by using the approximation

q_{xt} ≈ 1 − exp(−m_{xt}). (14)
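Equation (14) and the life expectancy summary used below can be illustrated with a minimal period life table. The sketch assumes that deaths occur on average halfway through each year of age and closes the table with an open final age group; published life tables, including those underlying the ONS figures, use more refined assumptions.

```python
import numpy as np

def period_e0(m):
    """Period life expectancy at birth from a vector of central rates m_x.

    Uses q_x ~= 1 - exp(-m_x), as in equation (14), and a simple life table
    with a_x = 0.5 (deaths spread evenly within each year of age)."""
    q = 1.0 - np.exp(-m)
    q[-1] = 1.0                                              # close the table
    l = np.concatenate([[1.0], np.cumprod(1.0 - q)])[:-1]    # survivors l_x
    d = l * q                                                # deaths d_x
    L = l - 0.5 * d                                          # person-years L_x
    # Open final interval: person-years approximated by l / m (constant hazard).
    L[-1] = l[-1] / m[-1] if m[-1] > 0 else 0.0
    return L.sum()                                           # e_0, since l_0 = 1
```

Applying period_e0 to each posterior sample of the rate schedule gives the posterior distribution of e_0 compared with the NPP variants in Fig. 14.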
As well as the principal ONS projection from the 2014-based NPP, the variant projections involving high and low mortality scenarios have been included, allowing some understanding of how the existing indications of uncertainty resulting from different projection assumptions compare with the fully Bayesian probability distributions. Fig. 13 shows posterior distributions of log-transformed probabilities of death q_x for a forecast horizon of 25 years for both males and females, together with the equivalent q_{x+0.5} quantities for the same year (2038) obtained from the ONS 2014-based NPP. For most of the age range, the forecasts are similar, with the principal projection falling close to the median prediction under the GAM-based model. However, the ONS model projects lower mortality for young adults for both sexes, to the extent that the principal projections fall outside the outermost 90% predictive interval of the probabilistic projections. This is due to a greater weight that is given by the ONS methodology to more recent high improvement rates at these ages (see Office for National Statistics (2016) for more details regarding the ONS methodology).

Life expectancy
Period life expectancy at birth is a useful summary measure of the mortality conditions in a given year. It captures the expected number of years lived by a hypothetical individual who experiences a given period's schedule of mortality rates over the course of their whole life. Fig. 14 compares the posterior distribution of life expectancy at birth, e_0, from the jointly fitted GAM-based model with the equivalent quantity from the NPP. The GAM-based forecasts appear more optimistic than the ONS equivalent, with median life expectancy higher than the principal ONS projection because of the lower predictions of mortality at ages 70-95 years under the GAM-based model. Fig. 14 also reveals that uncertainty in e_0 initially grows more quickly in the Bayesian approach that was developed above, in that the gap between the high and low variants is much narrower than the fan intervals for at least the first decade of the forecast. After 30 years, however, the range that is spanned by the ONS variants becomes wider than the 90% probabilistic interval from the GAM-based model. The uncertainty in the probabilistic forecast reflects past variability in the observed data and, from the comparison with hold-back data given in Figs 11 and 12, the calibration of this uncertainty appears reasonable. As a result, we believe that the probabilistic intervals provide a better indication of the uncertainty around future life expectancy than the scenario-based equivalents, at least in the short term, particularly as they have a readily understandable interpretation in terms of probability.

Discussion and conclusion
This paper details methodology for the fully probabilistic forecasting of mortality rates, accounting for uncertainty in parameter estimates as well as in forecasting. The approach uses a GAM to produce smooth rate estimates at younger ages and combines this with a parametric model at higher ages where the data are more sparse, allowing rate estimates to be obtained for extreme old ages. The use of Hamiltonian Monte Carlo sampling and the stan software package allowed posterior sampling to be conducted with reasonable efficiency.
Stacking predictive distributions following the approach of Yao et al. (2018) provides a principled approach to avoiding a single choice of transition point between the two submodels governing the younger and older age ranges. These weights are based on approximate leave-one-out cross-validation performance and thus weight models on the basis of their ability to predict data that are contained in the original fitting period. An alternative approach may be to fit models on a subset of data, and to produce weights based on model performance in forecasting data at the end of the time period. However, this would involve additional model refitting, and it may also be that such assessments are overly sensitive to characteristics of the held-out data. Furthermore, log-scores based on a single set of observed outcomes are likely to be highly correlated, and thus rolling n-step-ahead forecasts may be required to assess forecast performance robustly, which would necessitate repeated model fitting with even greater computational expense. A comparison with ONS forecasts provides an indication of how Bayesian predictive intervals compare with the deterministic scenario-based indicators of forecast variability that are produced by the ONS. For life expectancy in particular, the probabilistic intervals are considerably wider over a short time horizon than those suggested by the high and low mortality scenarios. Future work could investigate the inclusion of expert opinion in probabilistic mortality forecasting models like that presented in this paper. The NPP uses experts to provide target rates of mortality improvement over longer time horizons (25 years) (Office for National Statistics, 2016), reflecting the fact that extrapolative methods may prove inferior to expertise at this distance into the future. A similar approach within a Bayesian framework would have to consider that using expert opinion about future rates is different from the standard approach of eliciting information about model parameters directly. Work in Dodd et al. (2018b) describes one way in which this could be achieved. Beyond this, there are also opportunities to investigate the possibility of extending similar methods to other demographic components, particularly fertility.
Shell-model calculation of isospin-symmetry breaking correction to superallowed Fermi beta-decay

We investigate the radial-overlap part of the isospin-symmetry breaking correction to superallowed 0⁺ → 0⁺ decay using the shell-model approach similar to that of Refs. [1, 2]. The 8 sd-shell emitters with masses between A = 22 and A = 38 have been re-examined. The Fermi matrix element is evaluated with realistic spherical single-particle wave functions, obtained from spherical Woods-Saxon (WS) or Hartree-Fock (HF) potentials, fine-tuned to reproduce the experimental data on charge radii and separation energies for the nuclei of interest. The elaborated adjustment procedure removes any sensitivity of the correction to a specific parametrisation of the WS potential or to various versions of the Skyrme interaction. The present results are generally in good agreement with those reported in Refs. [3, 4]. At the same time, we find that the calculations with HF wave functions result in systematically lower values of the correction.

Physics of superallowed β decay
It has been pointed out that the superallowed 0⁺ → 0⁺ nuclear β decay provides an excellent tool to probe the fundamental symmetries underlying the Standard Model of electroweak interaction, including the Conserved Vector Current (CVC) hypothesis and the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix. According to the CVC hypothesis, the corrected Ft value should relate to G_V, the fundamental vector coupling constant for a semi-leptonic decay, and thus be constant for all emitters. Traditionally, this relation is expressed as

Ft = ft (1 + δ′_R)(1 + δ_NS − δ_C) = K / [2 G_V² (1 + Δ^V_R)], (1.1)

where K/(ħc)⁶ = 2π³ ln(2) ħ/(m_e c)⁵ = (8120.2716 ± 0.012) × 10⁻¹⁰ GeV⁻⁴ s, and Δ^V_R, δ′_R and δ_NS are the transition-independent, transition-dependent and nuclear structure-dependent parts of the radiative correction [1], while δ_C is the isospin-symmetry breaking correction, defined as the deviation of the Fermi matrix element squared from its model-independent value:

|M_F|² = |M_F⁰|² (1 − δ_C). (1.2)

The quantity ft is determined experimentally by measuring the partial half-life, the Q_EC value and the Fermi branching ratio. The most recent survey of world data [2] finds 14 of these superallowed transitions with measured ft values known to 0.1% precision or better. If the CVC hypothesis holds, one can thus extract G_V. By comparing it to the vector coupling constant from muon decay, the CKM mixing matrix element between u and d quarks, |V_ud|, can be determined, providing a precise test of the unitarity condition of the CKM matrix. On the theoretical side, there is still no consensus between various calculations of δ_C (see Ref. [2] for a recent review).
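The arithmetic of equation (1.1) is easy to sketch. The snippet below applies the transition-dependent corrections to an ft value and inverts the relation for G_V, using the constant quoted above; the numerical ft and correction values are placeholders for illustration, not measured inputs for any real transition.

```python
import math

K = 8120.2716e-10   # K/(hbar c)^6 in GeV^-4 s, as quoted in the text

def corrected_Ft(ft, delta_R_prime, delta_NS, delta_C):
    """Ft = ft (1 + delta_R')(1 + delta_NS - delta_C); corrections as fractions."""
    return ft * (1.0 + delta_R_prime) * (1.0 + delta_NS - delta_C)

def G_V_from_Ft(Ft, Delta_R_V):
    """Invert Ft = K / [2 G_V^2 (1 + Delta_R^V)]; G_V returned in GeV^-2."""
    return math.sqrt(K / (2.0 * Ft * (1.0 + Delta_R_V)))

# Placeholder inputs of a realistic magnitude (percent-level corrections):
Ft = corrected_Ft(ft=3072.0, delta_R_prime=0.014, delta_NS=-0.002, delta_C=0.006)
G_V = G_V_from_Ft(Ft, Delta_R_V=0.024)
# |V_ud| would then follow as G_V / G_F, with G_F taken from muon decay.
```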
The present work explores the differences between shell-model calculations supplemented by WS or HF radial wave functions, in comparison with the previous studies [1, 3].

Shell-model description of the isospin correction
Within the shell model, the Fermi matrix element of the β⁺ decay between initial |i⟩ and final |f⟩ many-body states can be written as

M_F = Σ_{α,π} ⟨f|a†_{α_n}|π⟩ ⟨π|a_{α_p}|i⟩ ⟨α_n|t₊|α_p⟩_π, (2.1)

where α denotes a full set of spherical quantum numbers of a single-particle state, π refers to a complete set of states of the (A − 1) nucleus compatible with angular momentum and parity conservation, and ⟨α_n|t₊|α_p⟩_π is the single-particle matrix element of the isospin operator between proton and neutron radial wave functions,

⟨α_n|t₊|α_p⟩_π = Ω^π_α = ∫₀^∞ R^π_{α_n}(r) R^π_{α_p}(r) r² dr. (2.2)

For harmonic oscillator functions, the latter is equal to one. To overcome this artefact, we have to replace the harmonic oscillator radial wave functions by realistic radial wave functions obtained from a spherically-symmetric WS or self-consistent HF potential. The sum over intermediate states π in Eq. (2.1) allows us to go beyond the closure approximation and take into account the dependence of Ω^π_α on the excitation energies of the intermediate states, E_π. For each E_π, we fine-tune our potential so that the individual energies of valence space orbitals match experimental proton or neutron separation energies. Substituting Eq. (2.1) into Eq. (1.2), we obtain a suitable expression for δ_C as a sum of two terms, δ_C ≈ δ_RO + δ_IM. The first term, δ_RO, is the contribution due to the deviation from unity of the overlap integral between the radial parts of the proton and neutron single-particle wave functions. It is called the radial-overlap correction and can be expressed as

δ_RO ≈ (2/|M_F⁰|²) Σ_{α,π} ⟨f||a†_{α_n}||π⟩ᵀ ⟨i||a†_{α_p}||π⟩ᵀ (1 − Ω^π_α), (2.3)

where the reduced matrix elements ⟨f||a†_{α_n}||π⟩ᵀ and ⟨i||a†_{α_p}||π⟩ᵀ are related to the spectroscopic amplitudes [3] for neutron and proton pick-up respectively. The superscript T means that these quantities are computed with an isospin-invariant effective interaction. The other term, δ_IM, is the so-called isospin-mixing correction [5, 6], arising due to the isospin mixing in many-body configurations of the initial and final states. It is obtained from the shell-model diagonalisation using a charge-dependent two-body effective interaction. In this work, we focus only on the radial-overlap correction, calculating it within the shell model in combination with realistic radial wave functions obtained from a WS or Skyrme-HF single-particle potential.

Results for δ_RO and discussion
The radial-overlap correction δ_RO has been evaluated using the procedures outlined in the previous section. For this study, we choose only sd-shell emitters which are well described by the so-called universal sd interactions: USD and USDA/B [7, 8]. They include ²²Mg, ²⁶Al, ²⁶Si, ³⁰S, ³⁴Cl, ³⁴Ar, ³⁸K and ³⁸Ca. Six of these transitions are used to deduce the most precise Ft value, while the decays of ²⁶Si and ³⁰S are expected to be measured with improved precision in future radioactive-beam facilities. The shell-model calculations have been performed in the full sd shell, using the NuShellX@MSU code [9]. To achieve convergence, up to 100 intermediate states of each spin have been taken into account in Eq. (2.3). Figure 1 shows the results for δ_RO obtained with either WS or HF single-particle wave functions. The WS results have been computed using two different parametrisations. One of them is that of Schwierz, Wiedenhöver and Volya (SWV) [10], while the other is that of Bohr and Mottelson [11], modified as proposed in Ref. [12] and denoted as BM_m.
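The central quantity Ω^π_α of equation (2.2) is simply a radial quadrature, and since |M_F|² scales with Ω², a single orbital contributes roughly 2(1 − Ω) to the correction at leading order. The toy radial functions below are illustrative exponentials rather than WS or HF solutions; only the quadrature itself reflects the text.

```python
import numpy as np

def radial_overlap(R_n, R_p, r):
    """Overlap Omega = int R_n(r) R_p(r) r^2 dr of normalised radial wave
    functions sampled on a grid r, via trapezoidal quadrature."""
    norm_n = np.trapz(R_n**2 * r**2, r)
    norm_p = np.trapz(R_p**2 * r**2, r)
    return np.trapz(R_n * R_p * r**2, r) / np.sqrt(norm_n * norm_p)

# Toy illustration: 1s-like radial functions with slightly different decay
# lengths, mimicking the Coulomb-induced extension of the proton orbital.
r = np.linspace(1e-3, 30.0, 3000)        # radial grid in fm
R_n = np.exp(-r / 2.0)
R_p = np.exp(-r / 2.1)                   # proton tail slightly more extended
Omega = radial_overlap(R_n, R_p, r)      # a little below 1
delta_single = 2.0 * (1.0 - Omega)       # leading-order single-orbital deficit
```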
It is important to note that the Coulomb and the charge-symmetric isovector terms are the only sources of the difference between proton and neutron single-particle wave functions. In the present study, we assume that the charge-symmetry breaking and all other deficiencies of the WS potential can be cured by readjustment of the well depth, case by case, to reproduce experimental proton and neutron separation energies. The length parameter of the central term was determined from the condition that the charge density constructed from the proton radial wave functions yields a root-mean-square charge radius in agreement with the experimental value measured by electron scattering [13] or estimated from isotope shifts [1]. The spherical HF calculations have been performed with three different Skyrme forces, namely SGII [14], SkM* [15] and SLy5 [16]. While SGII and SkM* were already used in Ref. [3], SLy5 is a more recent parametrisation by the Saclay-Lyon collaboration. It was constructed to reproduce various bulk nuclear properties and selected properties of a number of doubly magic nuclei, with the exception of ¹⁶O. Since the nuclei of interest are open-shell systems, we have assumed a uniform occupation of the last occupied, partly filled orbital. We have checked that shell-model occupation numbers for the initial and final 0⁺ states, obtained from the diagonalisation, produce very similar results. The central part of the self-consistent potential of the parent and daughter nuclei was scaled in order to reproduce experimental proton and neutron separation energies, respectively. We have verified that this scaling has little influence on the charge radii of the nuclei considered, which stay in very good agreement with experiment. The Coulomb exchange term was accounted for within the Slater approximation; our preliminary results [12] show that its exact calculation only marginally affects the δ_RO value. As is seen from the figure, all our WS results are quite close to each other, indicating that the correction δ_RO is not very sensitive to a particular choice of the WS potential parameters. In general, they are in fair agreement with the shell-model plus WS calculation of Towner and Hardy in 2002 [1], except for ³⁴Ar and ³⁸K, because we used new experimental data for the charge radius [13]. We do not compare our present results with the latest calculation of Towner and Hardy [6], performed with the inclusion of orbitals outside the valence space; work in this direction is in progress. For the HF case, we find that the correction depends only weakly on the particular version of the Skyrme force, except for ³⁰S. Overall, our results are consistent with those of Ormand and Brown [5], again with the exception of ³⁰S. In the case of ³⁰S, the correction δ_RO is dominated by the 2s_{1/2} state, in which the centrifugal barrier is not present, and thus the radial wave function is very sensitive to the fine details of the mean field. We note that the SLy5 interaction results in a considerably smaller δ_RO value compared with SGII and SkM*. We do not confront our results with the most recent calculation of δ_RO with Skyrme-HF wave functions carried out by Hardy and Towner in 2009 [17]. Unlike the standard HF procedure, they performed a single calculation for the nucleus with (A − 1) nucleons and (Z − 1) protons, and then used the proton and the neutron eigenfunctions from the same calculation to compute radial integrals.
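The case-by-case depth readjustment described above amounts to a one-parameter root search. The sketch below solves the radial equation on a grid by finite differences for a bare WS well (Coulomb and spin-orbit terms omitted for brevity) and tunes the depth so that the bound-state energy matches a chosen separation energy; the geometry parameters and the search bracket are illustrative, not the SWV or BM_m values.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal
from scipy.optimize import brentq

HBAR2_2M = 20.736  # hbar^2/(2m) for a nucleon, MeV fm^2 (approximate)

def lowest_state(V0, l=0, R=3.5, a=0.65, rmax=25.0, n=2500):
    """Lowest eigenvalue (MeV) of the reduced radial equation for a WS well
    of depth V0 (MeV), radius R and diffuseness a (fm), via a tridiagonal
    finite-difference Hamiltonian with u(0) = u(rmax) = 0."""
    r = np.linspace(0.0, rmax, n + 2)[1:-1]
    h = r[1] - r[0]
    V = -V0 / (1.0 + np.exp((r - R) / a)) + HBAR2_2M * l * (l + 1) / r**2
    diag = 2.0 * HBAR2_2M / h**2 + V
    off = -HBAR2_2M / h**2 * np.ones(n - 1)
    return eigh_tridiagonal(diag, off, select="i", select_range=(0, 0))[0][0]

def tune_depth(target_E, **kw):
    """Adjust the well depth so the bound-state energy equals target_E = -S,
    mirroring the case-by-case readjustment described in the text."""
    return brentq(lambda V0: lowest_state(V0, **kw) - target_E, 20.0, 100.0)

V0 = tune_depth(-8.0, l=0)   # depth reproducing an 8 MeV separation energy
```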
Since Koopmans' theorem is not fully respected by such HF calculations, in particular with a density-dependent effective interaction, we do not consider their protocol to be well justified. The δ_RO values obtained with HF wave functions are seen to be systematically smaller than those obtained with WS wave functions. The reason can be easily understood. The Skyrme interaction is usually supposed to be isospin invariant. However, the presence of the Coulomb term causes a difference between proton and neutron densities, inducing an isovector term in the self-consistent mean-field potential [3, 18]. That term tends to counter the Coulomb repulsion, therefore reducing δ_RO. It will be interesting to study whether charge-symmetry breaking (CSB) and charge-independence breaking (CIB) terms in a conventional isospin-invariant Skyrme interaction affect the value of the correction. We are grateful to B. Blank for stimulating discussions. L. Xayavong thanks Université de Bordeaux for a Ph.D. fellowship. The work was supported by the CFT (IN2P3/CNRS, France), AP théorie 2014-2016.
Laboratory findings on the health status of the endemic rock-partridge (Alectoris graeca whitakeri) population during a two-year conservation programme in Sicily

Sicily (Italy) hosts an 'endangered', endemic population of Alectoris graeca whitakeri, commonly known as the Sicilian Rock Partridge. An EU-funded Life Natura 2000 project has been established, involving the Istituto Zooprofilattico Sperimentale of Sicily for veterinary aspects: a total of 15 Sicilian Rock Partridges found dead were collected, identified and processed by postmortem examination and laboratory investigations. The evidence of internal parasites was the most relevant finding, showing different types of infection by Nematoda, Cestoda and Coccidia; 60 per cent of these cases were infected with more than one parasite. In one single case, a pathogenic strain of Escherichia coli related to granulomatous lesions in the liver was found, and another cause of death was respiratory disease caused by Aspergillus fumigatus. The study represents the first veterinary report on this rare species and underlines the importance of monitoring the health status of wild species in the Italian environment in order to preserve local biodiversity.

Background
The genus Alectoris includes seven recognised species with a distribution from Southern Europe and Northern Africa to much of Asia and the Arabian Peninsula1-3 (figure 1). Sicily (Italy) hosts an 'endangered', endemic population classified as Alectoris graeca whitakeri,4 commonly known as the Sicilian Rock Partridge.5 The Sicilian Rock Partridge is the smallest and lightest subspecies among the Alectoris genus. It was first described more than 170 years ago, with several clusters associated with Sicilian mountainous regions including Mount Etna and the Madonie mountains.6 Alectoris graeca is considered 'near threatened' by the International Union for Conservation of Nature (IUCN), but the Sicilian subspecies is rather more endangered, with an IUCN Red List rating of endangered.7 The only numerical indication of the Sicilian Rock Partridge population dates from the early 1990s, with an estimate of about 1500 pairs.8 Another study conducted in the eastern part of Sicily reported a low density in 2013 (0.67 pairs/km²) compared with 1989 (3.3 pairs/km²).9
A g whitakeri has been included in Annex I of the 'Birds Directive' (79/409/EEC) as well as Annex III of the Berne Convention and, in recent years, all subspecies of Alectoris graeca have been included in Annex I of the 'Birds Directive' (2009/147/EEC). The major reasons for the decline of the Sicilian Rock Partridge are unclear, but likely relate to environmental factors including new intensive agricultural management, progressive urban expansion, periodic fires within its territories and particularly its nesting grounds, massive illegal hunting10 and, more recently, wild boars. To date, there have been no published veterinary reports of diseases affecting the Sicilian endemic A g whitakeri; the few available concern other European and Asiatic rock-partridges, namely wild cases in Alectoris rufa,11-14 captive/domestic chukars15-18 and wild chukars,19 20 with one study on reproductive disturbance in captive Alectoris graeca saxatilis associated with nematode infection.21 Data on the likely causes of mortality of A g whitakeri and the most prevalent pathologies are important because of the limited population size, and may inform on the future risk of extinction posed by both non-infectious (eg, trauma/poisoning) and infectious threats. Data collected during this study will be important for future strategic conservation and restocking actions.

Case presentation
Fifteen carcases of Sicilian Rock Partridge found dead were collected from the protected area classified as S.P.A. ITA010029 during a Life project on the conservation of A g whitakeri in Sicily (Life09 NAT/IT/000099-Sicalecons). All birds were identified for their phenotypic compliance1 with the endemic subspecies and age was estimated. Sex was determined by the presence/absence of spurs and confirmed by observation of the gonads during postmortem examination. Initially, birds were screened by external examination in order to detect superficial lesions and/or the presence of infestations. Oropharyngeal, cloacal and conjunctival swabs were also taken when death was considered to have been recent (ie, within 24-48 hours, according to the season). Swabs were then inserted in Amies transport medium with and without charcoal (Thermo Fisher), transported to the laboratory at 4°C and processed within 24 hours. After postmortem examination, in cases of suggestive pathology, sections/fragments of tissues and/or swabs were taken for parasitological, microbiological and/or histological testing following the procedures described below. The intestine was always screened for the presence of internal parasites.

Investigations
External parasites were screened by simple observation, with the help of a brush and a white sheet. Once collected, all samples were fixed in 70 per cent (v/v) ethanol for further morphological identification. All external parasites were identified by comparison with morphological keys according to the standards.22 Postmortem examination was performed with a Y incision from both shoulders down to the sternum, separating the skin from the rib cage and abdomen and opening the carcase for examination of the coelomic cavity and evaluation of internal organs.23

Parasitological investigations
Internal parasites were investigated using a direct mount technique through preparation of smears obtained from faecal samples (collected from the cloaca) and the contents of three sections of the intestine (mid-points of the duodenum, jejunum and caecum).
Faecal material collected from the intestine was routinely processed by a saturated sugar (density 1.27 g/mL) flotation technique,24 observing the upper supernatant phase by direct light microscopy at 100-400× magnification in order to identify eggs in the faecal material. Internal parasites were fixed in 70 per cent (v/v) ethanol in order to evaluate characteristic features and morphology using the classification approaches for Nematoda and Cestoda.25-27 The prevalence and intensity of each parasite species were calculated for every sample. To collect oocysts directly from the intestines, in cases of suspected lesions, scrapings of the mucosa were made and the material rinsed into a beaker with 2.5 per cent (w/v) potassium dichromate solution to release the unsporulated oocysts.28 The tissue suspension was filtered through cheesecloth into a beaker, using a spatula to agitate the suspension. The sticky proteinaceous material present in the caeca was broken down by suspending the homogenate in 10-20 per cent (v/v) sodium hypochlorite solution in an ice-bath for 10-15 minutes.29 This procedure was useful where clean suspensions of oocysts were required, but the sporulation of oocysts may sometimes be abnormal. Alternatively, homogenised pieces of intestine were treated for 3 hours in 1N sodium hydroxide solution,30 after which a fairly clean suspension of oocysts was obtained. Oocysts belonging to the genus Eimeria were classified to species according to their morphology31 and according to the method proposed for coccidia of poultry,32 based on both morphological and biological features including the intestinal location of endogenous stages, gross lesion appearance, and the size and shape of the parasite. Semiquantitative determination of the intensity of coccidiosis infection used an arbitrary score of infection density, designating: 1+ (≤10 oocysts per microscopic field at 400×) for dead birds with no evidence of diarrhoea around the cloaca and where small numbers of oocysts were detectable; 2+ (11 to ≤50 oocysts per field) for dead animals with moderate infection and slight diarrhoea; and 3+ (more than 50 oocysts per field) for dead birds with acute infection, diarrhoea (visible around the cloaca) and carcase emaciation.

Microbiological investigations
Bacterial cultures were carried out from mucosal swabs (Amies transport medium with and without charcoal, Thermo Fisher) taken from the oropharyngeal and conjunctival tracts and from the cloaca. Samples were cultured by standard procedures on Columbia agar containing 5 per cent sheep blood (Oxoid Limited, Hampshire, UK) and MacConkey's agar (Oxoid, Hampshire, UK) and incubated aerobically at 37°C for up to 48 hours. Contents of the large intestine and faecal samples were also incubated in Buffered Peptone Water (Oxoid, UK) at 37°C±1°C for 16-20 hours, then plated on modified semisolid Rappaport-Vassiliadis agar (Oxoid, UK) for 24 hours and on xylose-lysine-desoxycholate agar (Oxoid, UK) for biochemical identification, according to international standard procedures for Salmonella isolation (ISO 6579). For all other bacteria, including fastidious species, samples of trachea, lung, duodenum and caecum were inoculated directly into tryptone soya agar medium, modified brilliant green medium and Columbia III with 5 per cent SB medium (BD Diagnostic Systems, Sparks, Maryland, USA) and incubated at 37°C for 72 hours.
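The scoring and summary statistics described in this section reduce to simple bookkeeping; a sketch follows, with thresholds taken from the 1+/2+/3+ scheme above and the usual parasitological definitions of prevalence (infected/examined) and mean intensity (parasites per infected host). The function names are illustrative.

```python
def oocyst_score(mean_per_field):
    """Semiquantitative coccidiosis score from the mean number of oocysts
    per 400x microscopic field, following the 1+/2+/3+ scheme above."""
    if mean_per_field <= 10:
        return "1+"
    if mean_per_field <= 50:
        return "2+"
    return "3+"

def prevalence_and_intensity(counts):
    """Prevalence and mean intensity for one parasite species;
    counts holds one parasite count per examined bird."""
    infected = [c for c in counts if c > 0]
    prevalence = len(infected) / len(counts)
    mean_intensity = sum(infected) / len(infected) if infected else 0.0
    return prevalence, mean_intensity
```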
For isolation of Mycoplasma species, samples of trachea and lung were inoculated on to Mycoplasma broth base (PPLO, BD Diagnostics, Diagnostic Systems) supplemented with phenol red, glucose, horse serum, nicotinamide adenine dinucleotide, cysteine, thallium acetate and penicillin, and incubated at 37°C in an atmosphere containing 10 per cent CO₂ for 7 days, according to Ref. 33. Mycoplasma broth cultures were then streaked on to Mycoplasma agar prepared as above but with 1.3 per cent agar (BD Diagnostics, Diagnostic Systems), maintained under identical conditions and examined daily for 25 days. Lesions were subcultured on to Sabouraud dextrose agar, a medium selective for fungi, and incubated at 25°C and 37°C for 4-10 days. Additionally, portions of the same damaged organs were homogenised and plated on blood agar to screen for possible bacterial coinfection. Viral diseases were also routinely investigated. Specifically, selected portions of tissues were processed by RT-PCR in order to monitor avian influenza from lung and spleen,34 flavivirus RNA (West Nile disease) from spleen35 and coronavirus from lung and oropharyngeal swab.36

Histological investigations
All suspected lesions were sampled and fixed in 10 per cent buffered formalin, then embedded in paraffin wax following standard methods.37 Four-micrometre-thick sections were obtained using a microtome, then mounted on slides and stained with H&E. In order to detect fungal infection, suspected sections were also stained by PAS (periodic acid-Schiff) and PASM (periodic acid-Schiff-methenamine) to investigate the presence of hyphae infiltrating the tissues.

Outcome and follow-up
According to the phenotypic features of all the bird carcases, nine females (three adults and six young) and six males (two adults and four young) were confirmed to belong to the endemic subspecies A g whitakeri. At postmortem examination, 7 out of 15 birds showed poor nutritional status, emaciation and ruffled feathers (figure 2A). One of these birds was affected by severe granulomatous lesions, ranging in size from 1 mm to 1 cm, in the liver and lungs, with the presence of serosanguineous fluid (figure 2B). Histologically, the granulomas were characterised by a central necrotic area surrounded by histiocytes, lymphocytes and multinucleated giant cells. Avian tuberculosis was excluded by the absence of acid-fast bacilli in smears stained with Ziehl-Neelsen and by negative culture after 4 weeks. Another carcase showed a total loss of breast musculature and a severe pneumonia with large yellowish nodules, ranging in size from 20 to 40 mm, greyish in colour, firm, with irregular margins (figure 3A). A dark red area was also observed in the middle lobe of the right lung. Additionally, the serosa of the small intestine appeared moderately hyperaemic. Histologically, the nodules presented fungal hyphae spreading from the centre of a necrotic lesion, surrounded by lymphocytes and macrophages (figure 3B). The morphology of the fungus38 was compatible with the genus Aspergillus (ie, presence of a head in long columns, long conidiophores, hemispherical vesicles flattened at the top, a single row of phialides, tight and regular globular conidia, and septate, dichotomously branching hyphae). PAS and PASM staining (figure 3B-D) and culture confirmed infection by Aspergillus fumigatus.39 The histological findings confirmed the lung and related serosae as the primary site of infection.
Parasitological findings: Four out of 15 carcases showed considerable infestation with small wingless ectoparasites (figure 4a and b), later classified as Goniodes colchici (Denny), a species of louse. All carcases showed notable evidence of internal parasite infection (table 1). Coinfection with multiple parasites was detected in 9/15 (60 per cent) of the carcases, involving between two and four parasite species. Coccidial parasites were detected in all birds, with varied levels of infection. No significant differences in Eimeria species occurrence or level of infection were detected between male and female birds. Intestinal protozoa were the most prevalent parasites detected in A g whitakeri, with four different species identified: Eimeria kofoidi, E. caucasica, E. legionensis and an as yet unidentified Isospora species. All coccidia were detected in the caeca. Parasitological and pathological findings suggest the existence of an association between the intensity of infection and the severity of symptoms in the host species, especially when associated with other concurrent parasites. A single cestode species was observed in four birds: Raillietina tetragona (Molin, 1858) (figure 5a and b). All birds infected by this taenid exhibited […].40

Learning points
► The study represents the first veterinary report on Alectoris graeca whitakeri.
► All carcases showed notable evidence of internal parasite infection.
► Coinfection with multiple parasites was detected in 60 per cent of the carcases.
► The present work sought to highlight the importance of health status monitoring for wild species in the Italian environment and the collection of information suitable to preserve local biodiversity.

In the affected bird, Sabouraud dextrose agar (SDA) culture showed clear growth in 4 days, with the presence of flat, dusty and initially green colonies, which became greyish after a few days, confirming the growth of Aspergillus species. Virological investigations were negative for intercurrent infections.

Discussion
Data collected in this study represent the first report on infections and lesions in the rare A g whitakeri. General consideration of the prevalent causes of mortality underlines the importance of environmental conditions as a major risk for species conservation. Poor body condition due to multiple parasitosis is probably the most important, since coligranulomatosis and aspergillosis were each represented by only one case.

Parasitic pathogens: It cannot be confirmed whether mortality in any of the Rock Partridges sampled was caused by parasite infection, but it is likely that parasite colonisation compromised effective utilisation of sufficient food, resulting in an aggravated wasting process. The evidence of heavy parasitic infection in carcases of A g whitakeri mirrors previous observations in other species within the Alectoris genus.12 Moreover, this study reports the first finding of Ascaridia compar in Sicilian Alectoris, underlining a possible role for Ascaridia compar in compromising bird fertility (even in low-level infections, as suggested in Ref. 21) and in destabilising the host-parasite system. The low-level occurrence of bacterial and fungal infection in wild Alectoris precludes significant conclusions, although such infections are common among poultry when flock management is poor. The cases detected here may have been exacerbated by stress-related immunosuppression, caused by suboptimal ecological conditions impacting on a species strongly linked to its territory and ethological habits.
Microbiological/bacterial findings: Alectoris rufa, genetically similar to A g whitakeri, is known to be highly susceptible to colibacillosis.41 42 Stressors, such as starvation, thermal and/or migratory stress, toxicosis, adverse environmental conditions or trauma, can cause immunosuppression and promote colonisation of tissues by opportunistic pathogens.44 45 The present work sought to highlight the importance of health status monitoring for wild species in the Italian environment and the collection of information suitable to preserve local biodiversity. In this modern context, Public Veterinary Health can play a central role in investigating and surveilling environmental, toxic, infectious and genetic risks. Advocacy for the One Health agenda has driven the development of new approaches for the management of wild species, recognising the importance of parallel environmental, veterinary and human health, as in the case of avian influenza. Several examples of this new 'global health vision' have been reported in regions such as the EU, USA and Australia,44 where the role of the veterinarian has become especially significant in so-called 'pathosurveillance', which has developed from a simple scientific database into an important tool of public health. Regional reports concerning the morbidity and mortality of wild species should feed into a new epidemiological approach to investigate and understand these phenomena, supporting effective risk assessment and the planning of emergency/contingency actions.

Contributors GRl: conceptualisation and acquisition. RP: methodology and writing. CM and RP: field investigation and sample collection. MT and GRl: supervision. All authors have read and agreed to the published version of the manuscript.
Pediatric SARS-CoV-2 Infection in India: Expert Opinion without Evidence Leading to Public Health Policy Paralysis

Letter
By January 2021, sections of the scientific community had started to believe that India had overcome the pandemic and acquired herd immunity [1]. In March 2021, the Indian government cited results from serological surveys and India's main computer model predicting disease spread in the "endgame" of the pandemic [2]. This was followed by national negligence of social-distancing norms, non-compliance with personal protective measures, and mass political and social congregations. The double-mutant SARS-CoV-2 variant and systemic shortfalls in the immunization program emerged as unfortunate and man-made crises, respectively. As a result, by August 2021, India is expected to witness 1 million deaths from CoVID-19 [3]. Hindsight has probably emboldened many commentators to assert that the current state of the Indian pandemic was well predicted. However, literature and references are lacking in support of concrete, confident scientific advice given prior to the onset of the tragedy. Scientists in turn, legitimately so, have pointed towards gross underreporting of CoVID-19 statistics, political influence, an ancient cultural mindset (such as the belief that the flowing river Ganga will cleanse the coronavirus), over a dozen asynchronous CoVID-19 committees with no accountability, a lack of cross-specialty approach (virology, epidemiology, medicine, immunology, etc) and delayed travel restrictions without stringent quarantine protocols [4-6]. This insight into the Indian healthcare system is critical to prevent a repetition of such restricted scientific temperament. In May 2021, virologists and the CoVID technical advisory committee warned that children would be widely affected in the third wave of the pandemic. It was added that the central and state governments should make urgent arrangements to handle a flurry of paediatric cases between October and December 2021. Lack of CoVID immunization in this age group was the basis of this hypothesis [7]. Experts coming out in support of this argument added that the first wave predominantly affected the elderly and the second wave is affecting young adults; hence, extrapolating the trend, the next wave should affect an even younger population, that is, children. The strength of the scientific argument is questionable here; however, it went largely unchallenged in clinical and academic circuits. Children were not immunized in the first and second waves either, but they have still been clinically spared until now (Table 1) [11]. The CoVID-19-affected paediatric population has been predominantly asymptomatic [8]. Another national key opinion leader added that the virus needs a host to survive and therefore will mutate itself to affect children.
This is an acceptable argument from an evolutionary biology point of view, but it still does not explain why children, as new hosts, have been relatively spared until now [9]. The SARS-CoV-2 virus severely affecting adults below 50 years of age does not necessarily translate into a negative impact on children in the third wave. It is speculation without a foundation in basic science. However, academicians and clinicians have fiercely advocated this idea, with exponential propagation of the notion that hospitals need to be ready for a children-loving version of CoVID [10]. These expert opinions resulted in a series of potentially unnecessary decisions by Indian institutions, which felt compelled to react to this wave of speculation. Following these expert comments, from May 5, 2021 onwards, the Supreme Court of India and the central and state governments proceeded to focus on capacity building of paediatric ICUs, an immunization strategy for those under 18, and ensuring resource availability in paediatric units [12,13]. Farsightedness in public health is critical, but triage of priorities should take precedence. All these developments were taking place when India needed to focus on the ongoing crisis, including a severe shortage of hospital beds, the absence of oxygen availability, and overburdened, exhausted healthcare workers. In a resource-starved setting, the public health machinery appeared to digress from handling the current situation and created hysteria among parents. Policymakers and healthcare workers became invested in preparing for a hypothetical scenario. Further, instead of being utilized, readily available resources of paediatric critical care setups were reserved for possible future use. It is almost unequivocally accepted that the efforts of the various CoVID response committees would have been better utilized in the management of the ongoing crisis.
PTX3 Gene 3'UTR polymorphism and its interaction with environmental factors are correlated with the risk of preeclampsia in a Chinese Han population

Abstract: To investigate the interaction between single nucleotide polymorphisms (SNPs) of the 3' untranslated region (3'UTR) of the pentraxin 3 (PTX3) gene and environmental factors, and their association with preeclampsia risk, in a Chinese Han population. Sanger sequencing was used to analyze the rs5853783 and rs73158510 loci of the PTX3 gene 3'UTR from 235 patients with preeclampsia and 235 control subjects. The plasma PTX3 protein level was measured by enzyme-linked immunosorbent assay (ELISA). The risk of preeclampsia in PTX3 gene rs5853783 locus D allele carriers was 0.72 times that of I allele carriers (95% CI: 0.60-0.84, P < .001). The risk of preeclampsia in PTX3 gene rs73158510 locus A allele carriers was 1.36 times that of G allele carriers (95% CI: 1.16-1.55, P < .001). The area under the ROC curve (AUC) for the diagnosis of preeclampsia by plasma PTX3 protein levels was 0.906 (P < .001). The PTX3 gene rs5853783 and rs73158510 SNPs were associated with plasma PTX3 protein levels. The AUC of plasma PTX3 protein level diagnosis of preeclampsia in PTX3 gene rs5853783 locus II genotype subjects was up to 0.9371, followed by the ID genotype (AUC = 0.8586); the DD genotype was the lowest (AUC = 0.8154). The AUC of plasma PTX3 protein level diagnosis of preeclampsia in rs73158510 locus GG genotype subjects was 0.9102, in the GA genotype 0.8766, and in the AA genotype 0.8750. The rs5853783 and rs73158510 SNPs in the 3'UTR region of the PTX3 gene are associated with the risk of preeclampsia in a Chinese Han population.

Introduction
Preeclampsia is a complication unique to pregnancy, and its pathogenesis is considered to be closely related to vascular endothelial injury, inflammation, excessive oxidative stress, insulin resistance, as well as genetic factors. [1,2] Clinical signs and symptoms of preeclampsia include visual impairment, headache, upper abdominal pain, thrombocytopenia, and abnormal liver function. [1,3] Pentraxin 3 (PTX3) is the first discovered long pentraxin protein, with a molecular weight of 40 to 50 kDa, and is highly conserved across human and mouse evolution. [4,5] PTX3 has a wide range of synthesis and release sites, including neutrophils, dendritic cells, macrophages, activated endothelial cells, smooth muscle cells, fibroblasts, etc. [6,7] At the site of inflammation, the synthesis and release of PTX3 can be induced by interleukin-1 (IL-1), tumor necrosis factor alpha (TNF-α), interleukin-10 (IL-10), lipopolysaccharide (LPS), etc. [7] Under physiological conditions, PTX3 levels in peripheral blood are relatively low; however, in the early stages of inflammation, plasma PTX3 levels can rise rapidly and reach a peak at 6 to 8 h. [4,6] Compared to plasma C-reactive protein, PTX3 responds more rapidly, persists longer, better represents the local inflammatory response, and has a less variable plasma concentration. [8,9] Thus, PTX3 is a marker for the occurrence and progression of inflammatory responses. The human PTX3 gene is located in the q25 region of chromosome 3, contains 3 exons, and encodes a protein of 381 amino acids. [10] A variety of single nucleotide polymorphisms (SNPs) in the PTX3 gene are associated with the occurrence of several diseases.
For example, Zandifar et al [11] indicated that the PTX3 gene rs3816527 polymorphism is associated with susceptibility to migraine in men. In addition, He et al [12] reported a significant correlation between the PTX3 rs1840680 polymorphism and susceptibility to pulmonary aspergillosis in patients with COPD. To date, only a few studies have investigated PTX3 gene polymorphisms in preeclampsia. In the present study, we selected two SNP loci in the 3' untranslated region (3'UTR) of the PTX3 gene with a minor allele frequency (MAF) above 0.05, that is, rs5853783 and rs73158510. A case-control study was conducted to investigate the association of these two SNP loci with the risk of preeclampsia.

Subjects
A cohort of 235 patients with preeclampsia (case group) was recruited from the Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine between March 2016 and October 2018, with ages ranging from 18 to 39 years (mean, 28.18 ± 5.50 years) and gestational age ranging from 35 to 39 weeks (mean, 38.04 ± 1.05 weeks). Another cohort of 235 healthy pregnant women was randomly selected as the control group, aged 20 to 40 years (mean, 28.87 ± 6.08 years), with gestational age of 35 to 42 weeks (mean, 38.11 ± 1.41 weeks). The inclusion criteria were as follows: (1) Han nationality; (2) age ≥ 18 years; (3) complete medical records; (4) diagnostic criteria for preeclampsia in accordance with the American College of Obstetricians and Gynecologists (ACOG) guidelines. [13] The exclusion criteria were as follows: (1) other complications during pregnancy; (2) history of chronic hypertension, heart disease, kidney disease, diabetes, or liver disease before pregnancy. All subjects signed the informed consent form, and the study was approved by the Medical Ethics Committee of the Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine. The recruitment was performed in accordance with the World Medical Association Declaration of Helsinki.

Genotyping
Plasma genomic DNA was extracted using a QIAamp DNA Blood Mini Kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions and stored at -80°C. The DNA fragment containing the rs5853783 and rs73158510 loci of the PTX3 gene 3'UTR was amplified by polymerase chain reaction (PCR) using the extracted genomic DNA as a template. The PCR primers were: 5'-TGG CCA GAG ATG AAT TTT ACA TTG G-3' (forward) and 5'-TCT TCT CAA AAA CGT GAC ATT CG-3' (reverse); 5'-CGA ATG TCA CGT TTT TGA GAA GAT A-3' (forward) and 5'-ACG AGT TTG CTC CAA AAC ATC T-3' (reverse). The PCR mixture contained 12.5 µL PCR mix (Elpis-Biotech), 1 µL (10 pmol) of each primer, 1 µL genomic DNA, and 1.5 µL double-distilled water. The PCR conditions were as follows: pre-denaturation at 94°C for 2 minutes, then denaturation at 94°C for 1 minute, annealing at 60°C for 40 seconds, and extension at 72°C for 4 minutes, for a total of 30 cycles. After PCR, Sanger sequencing was performed by GENEWIZ (North Brunswick, NJ), and the genotypes were determined by comparing the sequencing results with the sequences in the NCBI database.

Enzyme-linked immunosorbent assay (ELISA)
A quantitative sandwich ELISA was performed to test the plasma PTX3 protein levels using 3 ml of whole blood collected from each participant. Plasma PTX3 protein levels were determined using an ELISA kit (R&D Systems, Inc., Minneapolis, MN, USA) according to the manufacturer's instructions. The minimum detectable dose of the kit is 0.007 to 0.116 ng/ml.
Statistical analysis
In the present study, statistical analysis was performed using SPSS 22.0 (SPSS Inc, Chicago, IL). Continuous variables were expressed as mean ± SD and statistically analyzed using the t test. Categorical variables were expressed as n (%) and statistically analyzed using the χ² test. Fisher's exact test was used to compare the genotype distribution of PTX3 gene SNPs between the case and control groups. The χ² test was performed to test whether the genotype distribution was consistent with the Hardy-Weinberg equilibrium (HWE), based on the distribution of allele frequencies and genetic models (additive, dominant, and recessive models), to determine the correlation between PTX3 gene SNPs and the risk of preeclampsia. The odds ratio (OR) and 95% confidence interval (CI) were obtained from an unconditional logistic regression analysis, adjusted for age, gestational age, pre-pregnancy body mass index (BMI), systolic blood pressure (SBP), diastolic blood pressure (DBP), and family history of hypertension. Multifactor dimensionality reduction (MDR) was performed to assess the SNPs of the PTX3 gene and their interaction with environmental factors. All tests were 2-tailed, with P < .05 considered statistically significant.
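As a concrete illustration of two of the allele-level computations just described, the following minimal Python sketch performs the HWE goodness-of-fit test and a crude, unadjusted odds ratio with a Woolf (logit) 95% CI; the counts are made up for illustration and this is not the adjusted logistic-regression OR used in the study:

```python
import numpy as np
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test for Hardy-Weinberg equilibrium
    at a bi-allelic locus (df = 3 classes - 1 - 1 estimated allele freq)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)                   # frequency of allele A
    q = 1.0 - p
    expected = np.array([p * p * n, 2 * p * q * n, q * q * n])
    observed = np.array([n_AA, n_Aa, n_aa])
    stat = ((observed - expected) ** 2 / expected).sum()
    return stat, chi2.sf(stat, df=1)

def allele_odds_ratio(case_a, case_b, ctrl_a, ctrl_b):
    """Crude odds ratio for allele a versus allele b, with Woolf 95% CI."""
    or_ = (case_a * ctrl_b) / (case_b * ctrl_a)
    se = np.sqrt(1 / case_a + 1 / case_b + 1 / ctrl_a + 1 / ctrl_b)
    lo, hi = np.exp(np.log(or_) - 1.96 * se), np.exp(np.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# hypothetical genotype and allele counts, for illustration only
print(hwe_chi2(60, 110, 65))
print(allele_odds_ratio(150, 320, 200, 270))
```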
Demographic information
The demographic information of the case and control groups is shown in Table 1 (Comparison of demographic characteristics between the case and control groups). There was no significant difference in age or gestational age between the case and control groups (P > .05). Pre-pregnancy BMI, SBP, DBP, and the proportion of subjects with a family history of hypertension were significantly higher in the case group than in the control group (P < .05).

Association of PTX3 gene 3'UTR SNPs with preeclampsia
We analyzed the genotype and allele frequencies of the 3'UTR rs5853783 and rs73158510 loci from 235 preeclampsia patients and 235 control subjects (Table 2: Correlation between the 3'UTR genotype and allele frequency of the PTX3 gene and preeclampsia risk). The frequency distribution of the rs5853783 and rs73158510 loci of the PTX3 gene in the control group was consistent with the HWE (P > .05).

Stratified analyses
In the present study, a stratified analysis was performed to test the correlation between the PTX3 gene 3'UTR SNPs and the risk of preeclampsia. Hence, we divided all participants into the following sub-groups: younger reproductive age (age ≤ 35 years) and advanced reproductive age (age > 35 years), non-obesity (BMI ≤ 24 kg/m²) and obesity (BMI > 24 kg/m²), as well as with and without a family history of hypertension. The results demonstrated that in subjects with younger reproductive age, non-obesity, or a family history of hypertension, the risk of preeclampsia in PTX3 gene rs5853783 locus D allele carriers (ID/DD) was significantly lower than that of II genotype carriers (P < .05). However, in subjects with advanced reproductive age, obesity, or without a family history of hypertension, there was no significant difference in the risk of preeclampsia between the PTX3 gene rs5853783 locus D allele carriers (ID/DD) and the II genotype carriers (P > .05) (Table 3: Stratified analysis of the correlation between the PTX3 gene rs5853783 locus SNP and the risk of preeclampsia). These findings indicate that the correlation between the risk of preeclampsia and the PTX3 gene rs5853783 locus SNP can be affected by several factors, including age, pre-pregnancy BMI, and a family history of hypertension.

Similarly, in subjects with younger reproductive age, advanced reproductive age, non-obesity, or without a family history of hypertension, the risk of preeclampsia in PTX3 gene rs73158510 locus A allele carriers (GA/AA) was significantly lower than in GG genotype carriers (P < .05). However, in obese subjects and in those with a family history of hypertension, no significant difference was observed in the risk of preeclampsia between the PTX3 gene rs73158510 locus A allele carriers (GA/AA) and the GG genotype carriers (P > .05) (Table 4). These findings indicate that the correlation between the risk of preeclampsia and the PTX3 gene rs73158510 locus SNP was affected by BMI and a family history of hypertension.

Multifactor dimensionality reduction (MDR) analysis of the interaction between PTX3 gene SNPs and environmental factors
Further, we performed MDR to analyze the interaction between the PTX3 gene rs5853783 and rs73158510 loci SNPs and environmental factors, that is, age, pre-pregnancy BMI, and family history of hypertension. We observed positive interactions between the PTX3 gene rs5853783 SNP and age, BMI, and a family history of hypertension. There were positive interactions between the PTX3 gene rs73158510 SNP and age, as well as a family history of hypertension; however, there was a negative interaction between the PTX3 gene rs73158510 SNP and pre-pregnancy BMI (Fig. 1A). In addition, the interaction between the PTX3 gene rs73158510 SNP and the rs5853783 SNP was the strongest, followed by age, family history of hypertension, and pre-pregnancy BMI (Fig. 1B).

Abnormal elevation of plasma PTX3 levels in patients with preeclampsia
To detect the plasma PTX3 protein levels in all participants, ELISA was performed. The results showed that plasma PTX3 protein levels were significantly higher in patients with preeclampsia than in the control group (P < .001) (Fig. 2A). Next, we analyzed the receiver operating characteristic (ROC) curve of plasma PTX3 protein level diagnosis of preeclampsia and found that the area under the curve (AUC) was 0.906 (P < .001) (Fig. 2B).

Association of PTX3 gene rs5853783 and rs73158510 SNPs with plasma PTX3 protein levels
Then, we analyzed the correlation between plasma PTX3 protein levels and the rs5853783 and rs73158510 loci SNPs in the case and control groups. The results demonstrated that in both the case and control groups, the plasma levels of PTX3 protein were significantly higher in rs5853783 locus II genotype carriers than in ID genotype carriers, with DD genotype carriers being the lowest (P < .05) (Fig. 3A and B). Moreover, the plasma levels of PTX3 protein in rs73158510 locus GG genotype carriers were significantly lower than in GA genotype carriers, with AA genotype carriers being the highest (P < .05) (Fig. 3C and D).

PTX3 gene SNPs affected the diagnostic efficacy of plasma PTX3 protein levels for preeclampsia
Finally, we analyzed the ROC curves of plasma PTX3 protein level diagnosis of preeclampsia for the different genotypes of the PTX3 gene rs5853783 and rs73158510 loci.
The results indicated that the AUC of plasma PTX3 protein level diagnosis of preeclampsia in PTX3 gene rs5853783 locus II genotype subjects was up to 0.9371, followed by the ID genotype (AUC = 0.8586), with the DD genotype being the lowest (AUC = 0.8154); the difference was statistically significant (P < .05) (Fig. 4A). The AUC of plasma PTX3 protein level diagnosis of preeclampsia in rs73158510 locus GG genotype subjects was 0.9102, in the GA genotype 0.8766, and in the AA genotype 0.8750, with a statistically significant difference observed (P < .05) (Fig. 4B).

Discussion
Here, we conducted a case-control study to investigate the correlation between SNPs of two loci with minor allele frequencies above 0.05 in the PTX3 gene 3'UTR (ie, rs5853783 and rs73158510) and the risk of preeclampsia in 235 patients with preeclampsia and 235 control subjects. We observed an increased risk of preeclampsia, as well as higher plasma levels of PTX3 protein, in subjects carrying the rs5853783 locus I allele and the rs73158510 locus A allele of the PTX3 gene. Based on these findings, it is probable that the rs5853783 and rs73158510 SNPs in the 3'UTR of the PTX3 gene are associated with the risk of preeclampsia in a Chinese Han population. Preeclampsia is a disease unique to pregnancy, clinically characterized by hypertension, proteinuria, and edema, and is a typical representative of hypertensive disorders of pregnancy. [3,14] Preeclampsia is often accompanied by damage to, or functional failure of, multiple organ systems, and these complications seriously endanger maternal and fetal safety. [15,16] Previous studies have investigated the etiology and pathogenesis of hypertensive disorders in pregnancy, suggesting that preeclampsia may be affected by the interaction of multiple genes and environmental factors. [17][18][19] In the present study, we observed that plasma PTX3 protein levels were significantly higher in preeclampsia patients than in control subjects. Based on the ROC analysis, the AUC of plasma PTX3 protein level diagnosis of preeclampsia was 0.906, suggesting that PTX3 may be a potential marker of preeclampsia and is of great value in its diagnosis. There are a variety of SNP loci in the 3'UTR of the PTX3 gene; in the present study, we selected two SNP loci with MAF > 0.05. Our analyses demonstrated that, after adjusting for age, gestational age, pre-pregnancy BMI, SBP, DBP, and family history of hypertension, the D allele of the rs5853783 locus was a protective factor for preeclampsia, and the A allele of the PTX3 gene rs73158510 locus was a risk factor for preeclampsia. Further, by measuring the plasma PTX3 protein levels in the participants, we revealed that the plasma PTX3 protein level of rs5853783 locus D allele carriers was significantly lower than that observed in I allele carriers, in both patients with preeclampsia and control subjects, and that the plasma PTX3 protein level of PTX3 gene rs73158510 A allele carriers was significantly higher than that of G allele carriers. This indicated that the PTX3 gene rs5853783 and rs73158510 loci SNPs were associated with plasma PTX3 protein levels.
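For readers wishing to reproduce this kind of ROC analysis, a minimal Python sketch is given below; the plasma levels are synthetic draws, not the study's measurements, and the genotype-stratified AUCs would follow by repeating the computation on each genotype subset:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# synthetic plasma PTX3 levels (ng/ml), for illustration only:
# cases are drawn with a higher mean than controls
controls = rng.normal(2.0, 0.8, 235)
cases = rng.normal(4.5, 1.2, 235)

levels = np.concatenate([controls, cases])
labels = np.concatenate([np.zeros(235), np.ones(235)])   # 1 = preeclampsia

auc = roc_auc_score(labels, levels)
fpr, tpr, thresholds = roc_curve(labels, levels)

# Youden's J picks the cut-off maximizing sensitivity + specificity - 1
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.3f}, optimal cut-off = {thresholds[best]:.2f} ng/ml")
```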
Based on the above findings, we hypothesized that the correlation between the PTX3 gene rs5853783 and rs73158510 SNPs and the risk of preeclampsia may be due to abnormal expression of the PTX3 protein, with the PTX3 protein being highly expressed in subjects carrying the high-risk allele. Given that both the rs5853783 and the rs73158510 loci are located in the 3'UTR of the PTX3 gene, and that the 3'UTR of a gene is a binding site through which microRNAs regulate gene expression, we speculate that the rs5853783 and rs73158510 SNPs may affect the regulation of PTX3 protein expression by microRNAs; however, there is no direct evidence in the current study to support this hypothesis. Future studies should aim to identify the associated microRNAs and confirm the effect of the PTX3 gene rs5853783 and rs73158510 loci on the regulation of PTX3 protein expression by these microRNAs in vitro. In addition, the effects of further relevant genes and environmental factors on the risk of preeclampsia need to be evaluated. Furthermore, although the SNPs of the rs5853783 and rs73158510 loci in the 3'UTR of the PTX3 gene were found to be related to preeclampsia in a Chinese Han population, the calculated OR values only demonstrated a weak correlation due to the small sample size. Hence, it is necessary to verify these observations with a larger sample size. More importantly, large-scale, multi-regional, and multi-ethnic systematic studies are imperative to better understand the pathogenesis of preeclampsia. In summary, the rs5853783 and rs73158510 SNPs in the 3'UTR of the PTX3 gene are associated with the risk of preeclampsia in the Chinese Han population, and the specific mechanisms need to be evaluated in future studies.
A transfer operator based computational study of mixing processes in open flow systems

Abstract: We study mixing by chaotic advection in open flow systems, where the corresponding small-scale structures are created by means of the stretching and folding property of chaotic flows. The systems we consider contain an inlet and an outlet flow region as well as a mixing region and are characterized by constant in- and outflow of fluid particles. The evolution of a mass distribution in the open system is described via a transfer operator. The spatially discretized approximation of the transfer operator defines the transition matrix of an absorbing Markov chain restricted to finite transient states. We study the underlying mixing processes via this substochastic transition matrix. We conduct parameter studies for example systems with two differently colored fluids. We quantify the mixing of the resulting patterns by several mixing measures. In the case of chaotic advection, the transport processes in the open system are organized by the chaotic saddle and its stable and unstable manifolds. We extract these structures directly from leading eigenvectors of the transition matrix.

The transfer operator and its numerical approximation in open systems
Let $T : (A, \mathcal{B}(A)) \to (X, \mathcal{B}(X))$ be a measurable and nonsingular transformation that maps an initial particle in $A \subset X \subset \mathbb{R}^d$ to its new position after a given time-step $\tau$ (open system). The evolution of a mass distribution $f$ over $A$ under $T$ can be described by an affine operator $L^1(A) \to L^1(A)$, $f \mapsto P_A f + \varsigma$, where $P_A$ is the conditional Perron-Frobenius operator [1] and $\varsigma$ describes the new mass that is released into the system after time-step $\tau$. The linear operator $P_A : L^1(A) \to L^1(A)$ is characterized by $\int_B P_A f \, d\mu = \int_{T^{-1}(B)} f \, d\mu$ for all $B \in \mathcal{B}(A)$. Using Ulam's method [2], a spatially discretized approximation of $P_A$ is given by the substochastic matrix $P$ with entries estimated as
$$P_{ij} = \frac{\mu(B_i \cap T^{-1}(B_j))}{\mu(B_i)},$$
where $\{B_1, B_2, \dots, B_n\}$ is a fine partition of $A$ and $\mu$ is the Lebesgue measure on $A$. The evolution of the mass distribution vector $v$ over $A$ can now be described by the affine transformation $v \mapsto vP + \sigma$, where $\sigma$ is the discrete source that is injected into the system after time-step $\tau$. Under the assumption that all particles can finally leave $A$ and that the source $\sigma$ is constant, the matrix $P$ defines the transition matrix of an absorbing Markov chain restricted to finite transient states. Assuming that the underlying velocity field in our system is time-periodic with period $\tau$, the Markov chain is time-homogeneous and the mass distribution converges to the invariant mass distribution $v_{\mathrm{inv}} = \sigma (I - P)^{-1}$ (the fixed point of the affine transformation) [3].

Mixing in open systems: Example set-up
Let a system with domain $X$ contain an inlet and an outlet flow region, $X_1$ and $X_3$, as well as a mixing region $X_2$ (see Fig. 1). Two types of particles in the system are advected by a velocity field composed of $u_w$ and $u_m$, where $u_w$ is a constant homogeneous velocity field and $u_m$ is the velocity field of a time-periodic mixer. Here, as the velocity field of the time-periodic mixer, we use the well-known periodically perturbed double gyre flow [4], whose phase portrait contains two counter-rotating gyres separated by a periodically moving separatrix. In the outlet region a "periodic" pattern is formed after some time, which we want to quantify with respect to mixing quality. Therefore, we consider the open subsystem with domain $A$ containing an inlet flow region $A_{\mathrm{in}}$, the mixing region $A_{\mathrm{mix}}$ and an outlet flow region $A_{\mathrm{out}}$, which fully describes the pattern. We partition the domain $A$ into 49,152 square boxes and calculate the transition matrix $P$. As the constant source we use a signed mass distribution $\sigma$ describing the two types of particles.
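A minimal numerical sketch of this construction is given below: the Ulam matrix is estimated by Monte Carlo sampling of test points per box, and the invariant distribution solves $v(I - P) = \sigma$. The toy map T and the source placement are illustrative assumptions, not the double-gyre flow map used in this work:

```python
import numpy as np

def ulam_matrix(T, n_side, n_samples=100, domain=(0.0, 1.0)):
    """Estimate the substochastic Ulam matrix on a uniform grid of
    n_side x n_side boxes over domain^2. Sample points are mapped by T;
    images falling outside the domain are absorbed (mass leaves A)."""
    lo, hi = domain
    h = (hi - lo) / n_side
    n = n_side * n_side
    P = np.zeros((n, n))
    rng = np.random.default_rng(1)
    for i in range(n):
        ix, iy = divmod(i, n_side)
        pts = rng.uniform(0, h, size=(n_samples, 2))   # uniform points in box i
        pts[:, 0] += lo + ix * h
        pts[:, 1] += lo + iy * h
        images = np.array([T(p) for p in pts])
        inside = ((images >= lo) & (images < hi)).all(axis=1)
        for q in images[inside]:
            jx = int((q[0] - lo) / h)
            jy = int((q[1] - lo) / h)
            P[i, jx * n_side + jy] += 1.0
        P[i] /= n_samples           # rows sum to <= 1 (substochastic)
    return P

def invariant_distribution(P, sigma):
    """Fixed point v = sigma (I - P)^{-1} of the affine update v -> vP + sigma."""
    n = P.shape[0]
    return np.linalg.solve((np.eye(n) - P).T, sigma)   # solves v (I - P) = sigma

# toy open map: a drift plus contraction that pushes mass out of the unit square
T = lambda p: np.array([p[0] + 0.05, 1.0 - 0.9 * p[1]])
P = ulam_matrix(T, n_side=16)
sigma = np.zeros(16 * 16)
sigma[:16] = 1.0 / 16              # constant source injected along one edge
v_inv = invariant_distribution(P, sigma)
```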
Mixing measures
We consider the following mixing measures: the sample variance (as a measure of the intensity of segregation), the mean length scale (as a measure of the scale of segregation) [5,6] and the mix variance (a multiscale measure of mixing that considers a concentration field to be well-mixed if its averages over arbitrary open sets are uniform) [7]. In Fig. 2 we show the results of a parameter study. We vary the double gyre parameter $\varepsilon$, which controls the maximum displacement of the separatrix, and apply the different mixing measures to the invariant mass distribution $v_{\mathrm{inv}}$ restricted to the outlet region $A_{\mathrm{out}}$. The mixing measures show peaks in the mixing quality for similar parameter values. Three corresponding mass distributions on $A_{\mathrm{out}}$ are included in the figure. In further parameter studies, the mix variance has proven robust to numerical changes in the calculation of $P$. For future work, spectral mixing measures that take into account information on the two types of fluid could be useful.

Organizing structures
Most fluid material behaves transiently and leaves the open system relatively fast, but some material intersects with its original domain; this region contains a chaotic saddle. Particles near the stable manifold of a chaotic saddle stay longer in the system and follow the unstable manifold of the chaotic saddle on their way out [8]. In Fig. 3a we follow a blob of particles from time $t = 0$ (cyan and blue) to $t = 10$ (pink and orange). The cyan particles remain in the system after 10 time steps (colored in pink). This reveals the unstable manifold (pink) and the stable manifold (cyan). Instead of following particles, we can extract these structures directly from the leading left and right eigenvectors of our substochastic transition matrix $P$. In Fig. 3b-c the leading two left and the leading two right eigenvectors of the transition matrix $P$ are shown. The support of a left eigenvector approximates an unstable manifold; the support of a right eigenvector approximates a stable manifold. The intersection of the supports of the two left and right eigenvectors (dark blue) approximates two chaotic saddles (see Fig. 3d). To optimize mixing, it would be interesting to study how these organizing structures can be manipulated.
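Extracting these organizing structures then amounts to an eigenvector computation on the substochastic matrix. A minimal sketch, reusing the matrix P from the Ulam sketch above and using an ad hoc magnitude threshold to define eigenvector supports (both are illustrative assumptions):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

Ps = sp.csr_matrix(P)   # substochastic Ulam matrix from the sketch above

# Right eigenvectors of P: supports approximate stable manifolds.
vals_r, vecs_r = eigs(Ps, k=2, which="LM")

# Left eigenvectors of P = right eigenvectors of P^T:
# supports approximate unstable manifolds.
vals_l, vecs_l = eigs(Ps.T, k=2, which="LM")

# Crude "support" via thresholding the leading eigenvector magnitudes;
# the intersection of supports approximates the chaotic saddle.
tol = 1e-3
stable = np.abs(vecs_r[:, 0]) > tol
unstable = np.abs(vecs_l[:, 0]) > tol
saddle = stable & unstable
print(f"boxes in approximate chaotic saddle: {saddle.sum()}")
```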
A Novel Treatment Protects Chlorella at Commercial Scale from the Predatory Bacterium Vampirovibrio chlorellavorus

Abstract: The predatory bacterium Vampirovibrio chlorellavorus can destroy a Chlorella culture in just a few days, turning an otherwise robust algal crop into a discolored suspension of empty cell walls. Chlorella is used as a benchmark for open pond cultivation due to its fast growth. In nature, V. chlorellavorus plays an ecological role by controlling this widespread terrestrial and freshwater microalga, but it can have a devastating effect when it attacks large commercial ponds. We discovered that V. chlorellavorus was associated with the collapse of four pilot commercial-scale (130,000 L volume) open-pond reactors. Routine microscopy revealed the distinctive pattern of V. chlorellavorus attachment to the algal cells, followed by algal cell clumping, culture discoloration and, ultimately, growth decline. The "crash" of the algal culture coincided with increasing proportions of 16S rRNA sequencing reads assigned to V. chlorellavorus. We designed a qPCR assay to predict an impending culture crash and developed a novel treatment to control the bacterium. We found that (1) Chlorella growth was not affected by a 15 min exposure to pH 3.5 in the presence of 0.5 g/L acetate, when titrated with hydrochloric acid, and (2) this treatment had a bactericidal effect on the culture (2-log decrease in aerobic counts). Therefore, when qPCR results indicated a rise in V. chlorellavorus amplicons, we found that the pH-shock treatment prevented the culture crash and doubled the productive longevity of the culture. Furthermore, the treatment could be repeatedly applied to the same culture, at the beginning of at least two sequential batch cycles. In this case, the treatment was applied preventively, further increasing the longevity of the open pond culture. In summary, the treatment reversed the infection by V. chlorellavorus, as confirmed by observations of bacterial attachment to Chlorella cells and by detection of V. chlorellavorus by 16S rRNA sequencing and the qPCR assay. The pH-shock treatment is highly selective against prokaryotes, and it is a cost-effective treatment that can be used throughout the scale-up and production process. To our knowledge, the treatment described here is the first effective control of V. chlorellavorus and will be an important tool for the microalgal industry and biofuel research.

INTRODUCTION
Vampirovibrio chlorellavorus is a non-photosynthetic cyanobacterium (Melainabacteria; Soo et al., 2014) with a predatory lifestyle that targets a variety of Chlorella species (Coder and Starr, 1978). This predator only feeds on living algal cells and is unable to grow in liquid or agar media unless co-cultured with the alga. Electron microscopy-based studies (Coder and Goff, 1986; Mamkaeva and Rybal'chenko, 1979) have shown that V. chlorellavorus uses a flagellum to reach its prey and attach to the surface of the alga through fibril appendages. The attached cells remain epibiotic, in contrast to other predatory bacteria belonging to Bdellovibrio or Daptobacter (Guerrero et al., 1986). The unique predation mechanisms of V. chlorellavorus are exquisitely regulated. The bacterial cell secretes a path that connects it with the alga, breaching through the parent cell wall and, if necessary, also through the daughter cell wall (Coder and Goff, 1986).
Recent metagenomic analyses suggest that plasmid DNA and hydrolytic enzymes are transferred to the prey cells via the T4SS secretion system, where they apparently degrade the algal cell contents (Soo et al., 2015). As the Chlorella cell is digested, its color changes from dark green to yellow-brown, and empty or "ghost" Chlorella cells accumulate in the culture. If left unabated, the majority of Chlorella cells are destroyed and the culture develops a granular texture, due to progressive clumping of algal cells, that is visible to the naked eye (Coder and Starr, 1978). Early detection by optical microscopy is difficult due to the pleomorphic shape and the small size of V. chlorellavorus (<1 µm³; an ultramicrobacterium; Coder and Starr, 1978). Other bacterial epiphytes may be mistaken for V. chlorellavorus; however, the infection becomes more apparent with the rapid increase in the proportion of Chlorella cells with attached bacteria and the development of a clear zone that spreads through the infected cell from the site of attachment (Coder and Goff, 1986). Using electron microscopy, V. chlorellavorus was first described in freshwater samples from a reservoir in Ukraine (Gromov and Mamkaeva, 1972) and later in freshwater anuran amphibian ponds in Brighton, UK (Wong et al., 1994). However, traditional detection tools have failed to provide a definitive diagnosis, and it is likely that the incidence of V. chlorellavorus infection has been overlooked. For example, only a few original papers have been published since this predator was first described (Gromov and Mamkaeva, 1972). Recently, molecular tools have targeted this predator across diverse systems. For example, V. chlorellavorus was identified in soil samples from Anhui, China (Shi et al., 2015), in bovine rumen samples from Nagaland, India (Das et al., 2014) and in open pond cultures of Chlorella at three different universities in the southwestern USA, which suffered repeated V. chlorellavorus infections (personal communication from Drs. P. Lammers, J. Brown, and M. Sommerfeld). In recent years it has become apparent that algal crop protection is one of the most important challenges facing the algal industry (McBride et al., 2016). Chlorella was the first eukaryotic microalga to be grown in pure culture (Beijerinck, 1890), and today it is arguably the most common alga grown by the microalgal industry. Its high productivity and robustness allow a wide range of applications, including food, feed, fertilizer, wastewater remediation, CO2 capture, and biodiesel production (Safi et al., 2014). V. chlorellavorus attacks this alga whether it is growing photoautotrophically (Coder and Goff, 1986), heterotrophically (Wong et al., 1994), or, as described here, mixotrophically. Therefore, the lack of a treatment to control V. chlorellavorus is a major challenge for the industry. Here, we report the development of tools for early detection and treatment of V. chlorellavorus.

Chlorella Culture Conditions
Identification of the Algae
Chlorella HS26 was isolated from soil samples in the Sonoran Desert of Arizona, USA and privately deposited in the Culture Collection of Algae at the University of Cologne (Germany). Total genomic DNA was extracted and purified using the DNeasy Plant Mini Kit (Qiagen, Hilden, Germany), and the nuclear-encoded rRNA operon was amplified by polymerase chain reaction (PCR) using DreamTaq DNA Polymerase (Fermentas, St. Leon-Rot, Germany).
The DNA was sequenced for the 18S, ITS1, 5.8S, ITS2 and 28S regions using universal eukaryotic primers (see Marin et al., 2003; Marin, 2012). The following PCR protocol was used: an initial denaturation step (95°C for 180 s) was followed by 30 cycles including denaturation (95°C for 45 s), annealing (55°C for 60 s), and elongation (72°C for 180 s). PCR products which showed single clear bands by gel electrophoresis were purified using the Dynabeads M-280 Streptavidin system (Holmberg et al., 2005). For sequencing, the SequiTherm EXCEL II Long Read DNA Sequencing Kit (Biozym Diagnostik, Germany) and fluorochrome-labeled primer combinations were used. Two partial and overlapping sequences of each strand were read out with a LI-COR IR2 DNA Sequencer (LI-COR Biosciences, Lincoln, NE, USA) and assembled into the complete rDNA sequence using the program AlignIR 2.0 (LI-COR Biosciences, Lincoln, NE, USA). Sequences were aligned manually on the basis of conserved rRNA secondary structures using SeaView 4.3.0 software, and only unambiguously aligned sequences were used for phylogenetic analysis. The sequences were compared with existing sequences in the NCBI GenBank database via BLAST. The closest BLAST hits had at least 99% similarity to Chlorella sp. NIES2171 (accession number AB731604) and Chlorella vulgaris CCAP/79 (FR865683); consequently, the species was preliminarily identified as Chlorella sp. HS26. It should be noted that recently the two reference species were proposed as Micractinium inermum NIES2171 (Hoshina and Fujiwara, 2013) and Micractinium sp. CCAP 211/79 (Germond et al., 2013), respectively. The new sequences for the strain used in this work were deposited in the NCBI database under the accession number KU641127.

Laboratory Culture Conditions
Small-scale testing of Chlorella was performed in duplicate using baffled experimental Erlenmeyer flasks (100 ml volume), incubated at 25°C and shaken at 100 rpm. The cultures were inoculated at 10% v/v from an axenic, 2-3-day-old, exponentially growing autotrophic flask culture using BG-11 medium. The experimental flasks were illuminated with LED lights at an intensity of 100 µmol photons/m²/s, and sodium acetate trihydrate (2.5 g/L) was added daily to support mixotrophic conditions. The pH was maintained between 7 and 8, and the flasks were kept in a CO2-enriched (2%) atmosphere inside a chamber. For each flask experiment, we tested for a treatment effect, a time effect, and a treatment-by-time interaction using linear mixed models. Treatment and time were modeled as fixed effects, and we included a first-order autoregressive error structure (AR1) to account for temporally autocorrelated measures. Models were fit using the package lme4 (Bates et al., 2015) in R version 3.0.2 (R Core Team, 2016).

Open Pond Culture Conditions
The pH treatment validation was performed outdoors in open pond raceways (1000 L running volume) measuring 3.5 m long and 1.75 m wide, operated at a culture depth of 19 ± 1 cm. The mixotrophic cultures were fed acetic acid on demand through a pH-auxostat feedback control system. The culture was maintained at pH 7.4 ± 0.05, and residual acetic acid (0.2-1 g/L) and nitrate (0.2-0.5 g/L) were maintained at constant concentrations throughout the batch cycle using a feedstock solution of acetic acid (40% v/v) and NaNO3 (40 g/L) as titrants.
The initial BG-11 medium was modified to contain (in g/L): sodium acetate (0.5) and NaNO3 (0.5). The medium was prepared using reverse osmosis water and the initial pH was corrected to 7.5 using HCl (1 M). The raceways were inoculated with 10% v/v of outdoor open cultures that had previously been exposed to Vampirovibrio chlorellavorus. Temperature was maintained at 24 ± 2°C using a 6 m long stainless steel heat exchange coil. Mixing was applied with a paddlewheel at 1 m/s tip speed. The cultures were aerated (0.05 vol air/vol culture per min) using a 6 m long porous hose, and evaporation was corrected daily using reverse osmosis water. The culture conditions above were scaled up to a pilot (60,000 L) and a commercial reactor (130,000 L) according to Tonkovich et al. (2014, International Patent No 2014).

Cell Dry Weight
Cell dry weight samples (10 ml) were collected in duplicate daily from the cultures and vacuum filtered through glass microfiber filter papers designed to retain particles of 1.1 µm (Ahlstrom Grade 161). The filters were washed twice with 10 ml of ammonium bicarbonate solution (0.5 M) and placed in an oven (105°C) until the weight was stable.

Residual Nutrients
Culture samples (2 mL) were centrifuged (17,000 × g for 7 min) and the supernatant was removed and diluted 20-fold. Acetate in the medium was analyzed by HPLC according to the Association of Official Agricultural Chemists Official Method 986.13. Nitrates were analyzed using a Lachat QuickChem 8500 and the UV method 10049 (Hach, Milwaukee, WI, USA).

Culture Longevity
Culture longevity of Chlorella was determined based on the total productive days in the target reactor. Chlorella cultures operated in sequential batch cycles; each new batch was started from the previous batch before it reached stationary phase. Thus, days with stable or declining dry weights indicated the end of the life of that culture and were excluded from the longevity calculations. The longevity within our commercial reactors (130,000 L) was compared between treated (n = 9) and untreated (n = 8) runs using a Student's t-test (JMP, Version 12.1, SAS Institute Inc., Cary, NC, USA).
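As a check, this longevity comparison can be reproduced from summary statistics alone. A minimal Python sketch using the treated (12.7 ± 2.7 days, n = 9) and untreated (7.0 ± 2.7 days, n = 8) values reported later in the Results, which approximately recovers the published t ≈ 4.47 (the small discrepancy is consistent with rounding of the published means and SDs):

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics reported in the Results for the commercial reactors:
# treated ponds 12.7 +/- 2.7 days (n = 9), untreated 7.0 +/- 2.7 days (n = 8)
t_stat, p_value = ttest_ind_from_stats(
    mean1=12.7, std1=2.7, nobs1=9,
    mean2=7.0, std2=2.7, nobs2=8,
)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```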
Total Aerobic Bacterial Counts
Total aerobic bacterial counts were determined in duplicate using 3M Petrifilm Aerobic Count Plates. The plates were incubated for 3 days at 35°C and counts were read using the 3M Petrifilm Plate Reader and associated image analysis software.

The Percentage of Chlorella with Attached Bacteria
The percentage of Chlorella cells with attached bacteria was determined with phase contrast light microscopy using a 100× oil immersion objective (Olympus DP72). Algal cells with one or more bacteria on their surface were recorded as positive for infection. Overall, 50-100 algal cells were used to calculate the percentage. The standard deviation of the method was below 10%.

Determination of Bacterial Community Structure
Samples (2 mL) were collected daily from outdoor Chlorella cultures and prepared for small-subunit rDNA sequencing as follows. For total DNA extraction, the biomass was concentrated by centrifugation (10,000 × g for 10 min) and DNA was purified using the ZR Fungal/Bacterial DNA MiniPrep kit (Zymo Research, Irvine, CA, USA). To isolate microbiome DNA associated only with the phycosphere, biomass was concentrated using slow-speed centrifugation (1000 × g for 7 min), the supernatant containing unattached bacteria was discarded, the pellet was washed by resuspension in sterile water three times, and the final pellet was collected using high-speed centrifugation (17,000 × g for 7 min). To isolate DNA from the whole-culture portion, biomass was collected using high-speed centrifugation (17,000 × g for 7 min). For all samples, DNA purity and quantity were determined via 260/280 nm readings and then normalized to 10 ng/µL. PCR was conducted on the normalized gDNA using the chloroplast-excluding 16S rDNA primer set 799F/U1492R (Chelius and Triplett, 2001). The following PCR protocol was used: an initial denaturation step (98°C for 30 s), followed by 35 cycles of denaturation (98°C for 10 s), annealing (53°C for 30 s) and elongation (72°C for 60 s). The PCR cycle was ended with a final elongation at 72°C for 10 min. The resulting PCR product was visualized on agarose gels for confirmation of the expected bands (∼700 bp) and then cloned using Zero Blunt TOPO PCR cloning kits (Invitrogen, Carlsbad, CA, USA). 16S rDNA clone library inserts were sequenced with the T7 vector primer using an ABI 3730xl DNA Analyzer. The resulting 16S rDNA sequences were aligned and trimmed of primer and vector sequences using the built-in trimming tool in the Geneious 8.1.5 software suite (Biomatters, Ltd, New Zealand), with vector trimming against the NCBI UniVec database (ftp://ftp.ncbi.nlm.nih.gov/pub/UniVec/). Sequences with an HQ % quality score below 30% were excluded from analysis. Sequences were then aligned to the NCBI "nr" database for taxonomic assignment using BLAST (Altschul et al., 1990). Best BLAST hits with ≥98% similarity to the query sequence were picked for taxonomic assignment. For verification of assignments, the trimmed sequences were also aligned and classified using the SINA online alignment tool against the SILVA 119 database (Pruesse et al., 2012). Although these methods involved two different reference databases and alignment algorithms, the taxonomic assignments were in agreement at >98-99% sequence similarity (data not shown). Heatmaps of bacterial relative abundance were generated using the phyloseq R package (McMurdie and Holmes, 2013).
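The ≥98% identity rule for taxonomic assignment is easy to apply programmatically to tabular BLAST output. A minimal Python sketch (the input file name is hypothetical, and BLAST -outfmt 6 column order is assumed):

```python
import csv

def assign_taxa(blast_tsv, min_identity=98.0):
    """Assign each query the subject of its best-scoring BLAST hit,
    keeping only hits at or above the identity threshold (>=98%).
    Expects BLAST tabular output (-outfmt 6): qseqid, sseqid, pident,
    ..., bitscore in the last column."""
    best = {}
    with open(blast_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):
            qseqid, sseqid = row[0], row[1]
            pident, bitscore = float(row[2]), float(row[11])
            if pident < min_identity:
                continue
            if qseqid not in best or bitscore > best[qseqid][1]:
                best[qseqid] = (sseqid, bitscore)
    return {q: s for q, (s, _) in best.items()}

# hypothetical file name, for illustration only
taxa = assign_taxa("clones_vs_nr.tsv")
```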
Predator Identification and Tracking Using a qPCR Assay
A proprietary 6-carboxyfluorescein-based (FAM) qPCR assay consisting of forward and reverse primers and a species-specific probe was designed for V. chlorellavorus. The assay is not publicly available at this time; however, the information may be made available by contacting Heliae and agreeing to confidentiality terms. The assay was designed using a clone insert bacterial 16S rRNA sequence from an infected pond (accession number KU570459). The clone exhibited >99% similarity (692/695 nucleotide match) to a GenBank 16S gene sequence (accession number HM038000) from the V. chlorellavorus type culture that was deposited by Coder and Starr (1978) and sequenced by the American Type Culture Collection (ATCC 29753). DNA collected from the phycosphere-enriched fraction of the same culture was used as a positive control template. Pond samples (2 mL) were lysed by bead beating using 0.5 mm beads at 3400 rpm for 2 min and centrifuged at 10,000 × g for 1 min to remove cellular debris. The aqueous supernatant (1 µL) from the lysate was used as a template for the qPCR assay in 10 µL total-volume reactions. The following qPCR protocol was used: an initial denaturation step (98°C for 3 min), followed by 40 cycles of denaturation at 95°C for 5 s and annealing at 60°C for 30 s. The total run time for the assay was 120 min. The assay showed specificity only to the V. chlorellavorus-assigned clone insert and tested negative (Ct = 0) against purified 16S sequences from various bacterial isolates representing multiple genera (Shewanella sp., Acinetobacter sp., Ochrobactrum sp., Pseudochrobactrum sp., Bacillus sp., Stenotrophomonas sp., Clostridium sp., Azospirillum sp., Gemmobacter sp., Pseudomonas sp., Pedobacter sp., Pannonibacter sp., Rheinheimera sp., Azoarcus sp., Cloacibacterium sp., and Comamonas sp.).

FIGURE 2 | Comparison between the whole-culture bacterial community structure (16S rRNA gene sequencing assay) of a batch culture of Chlorella and the corresponding phycosphere portion. Sequencing data were not available for days 2-4.

Literature Review on Cytoplasmic pH Regulation by Microalgae
A literature review on cytoplasmic pH regulation by microalgae was performed to compare the responses of Cyanophyta and Chlorophyta to the medium pH. The review included peer-reviewed articles that analyzed the cytosolic pH of algae at two or more medium pH set points. Cytosolic pH was analyzed using either 5,5-dimethyl-2,4-oxazolidinedione (DMO), 31P-NMR spectroscopy, or fluorochrome methods. Mean pH values were transcribed either from a data table or from a detailed line graph reported in those articles. Means and standard deviations were calculated based on data from 3 to 7 different strains. Acidophilic microalgae, with optimum growth at a pH below 5, were excluded from this review. Most cyanobacteria stopped growing at a medium pH below 6, and cytosolic pH data could not be retrieved for those data points.

The pH-Shock Treatment
The pH-shock treatment was applied by adding hydrochloric acid (HCl; 34% v/v) to a well-mixed algae culture until the pH decreased from 7.5 to 3.5. The pH of the algae culture was maintained at pH 3.5 for 15 min in the presence of 0.5 g/L residual acetate. The pH was then returned to pH 7.5 using sodium hydroxide (2 M). More details on the method and method variants are provided by Ganuza and Tonkovich (2015, U.S. Patent No 9,181,523).

FIGURE 4 | The pH-shock treatment (A) duration, (B) pH level, (C) residual acetate concentration, and (D) type of titrant did not affect Chlorella growth in flasks (250 mL, n = 2). Unless stated otherwise, the pH treatment was conducted using hydrochloric acid (HCl) to reduce the pH to 3.5 for 15 min in the presence of 0.5 g/L acetate, and then inoculated aseptically into the Chlorella culture. Error bars represent 1 standard deviation.

Predator Identification
Three data sets are presented as representative examples (Figure 1). Cultures grown in 60,000 L bioreactors showed initial Chlorella growth as measured by biomass increase. After transferring the cultures to the 130,000 L reactor (arrows; Figure 1A), biomass declined in 2-3 days. When identifying V. chlorellavorus using 16S rRNA sequences, the bacterium was not detected until day 6, at which time the culture was crashing. In addition to V. chlorellavorus, other bacterial taxa were present throughout the experiments (Figure 1B). When quantifying V. chlorellavorus using the qPCR assay, the progress of the infection was observed in anticipation of the crashing of the culture (Figure 1A).
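Because a falling Ct signals a rising predator load, the qPCR readout lends itself to a simple early-warning rule. A minimal Python sketch of such a monitor (the threshold, the per-day decline criterion and the daily Ct series are illustrative assumptions, not values from this study):

```python
def crash_warning(ct_series, ct_threshold=30.0, drop_per_day=1.0):
    """Flag an impending culture crash when the V. chlorellavorus Ct
    falls below a threshold or declines steadily between readings
    (Ct = 0 means no amplification, i.e. no target detected)."""
    detected = [ct for ct in ct_series if ct > 0]
    if not detected:
        return False
    latest = detected[-1]
    trend = detected[-1] - detected[-2] if len(detected) >= 2 else 0.0
    return latest < ct_threshold or trend <= -drop_per_day

# illustrative daily Ct values for an infected pond
print(crash_warning([0, 0, 34.5, 32.8, 30.9]))  # True: steady Ct decline
```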
A fourth 130,000 L reactor was investigated using only 16S rRNA sequences, and the proportion of V. chlorellavorus in the whole pond community was compared to the community associated with the phycosphere (Figure 2). While V. chlorellavorus was present in both whole pond and phycosphere samples on day 7, the proportion of reads assigned to this predatory bacterium was much more abundant in the phycosphere sample (0.47 versus 0.07). When viewed microscopically, the Chlorella culture was heavily infested with Vampirovibrio-like cells (Figure 3). Initially, a single V. chlorellavorus cell attached to the outside of a Chlorella cell, but this quickly developed into a chain and then a cluster of bacterial cells. As the predation proceeded, numerous bacterial cells formed and then were released, eventually leaving empty or ghost algal cells (Figure 3A). Finally, as the predation event neared completion, the Chlorella cells began clumping, apparently held together by a bacterial matrix (Figure 3B).

Predator Treatment Development in Laboratory

Preliminary flask experiments showed that Chlorella tolerated a pH-shock (pH 3.5) up to at least 2 h (Figure 4A). Chlorella also tolerated a 15 min pH-shock as low as pH 1.5 (Figure 4B), and cells tolerated the pH-shock even in the presence of up to 5 g/L acetate (Figure 4C). Lastly, Chlorella tolerated treatments using either sulfuric acid or HCl as titrants (Figure 4D). No significant differences were observed between any of the treatments tested during these four experiments according to the Generalized Linear Model analysis (P > 0.1).

Predator Treatment Development in Open Ponds

Based on positive results from laboratory experiments, the pH-shock treatment was tested outdoors on mixotrophic cultures growing in small raceways (1000 L). A contaminated culture (70% of Chlorella cells having attached bacteria) was removed from a large reactor (130,000 L) and used to inoculate two small raceways. One raceway was pH-treated as follows: (1) the pH was decreased from 7.5 to 3.5 using HCl, (2) the pH was maintained at pH 3.5 for 15 min in the presence of 0.5 g/L residual acetate, and (3) the pH was then returned to pH 7.5 using sodium hydroxide (Ganuza and Tonkovich, 2015, U.S. Patent No 9,181,523). The second raceway was left untreated. The visual symptoms associated with V. chlorellavorus predation (i.e., bacterial attachment to the algae, clumping of algal cells, and change in culture color) were only observed in the untreated raceway (Figures 5A-C), and this culture crashed within 2 days. In the pH-treated culture, the symptoms of infection were reversed and a culture crash was prevented (Figures 5D-F). The pH-treated culture remained healthy, reaching a cell density of 6 g/L within 6 days (Figure 6A). Bacterial attachment to the algal cells was reversed and eliminated within 12 h of treatment (Figure 6B), and a two-log decrease in total aerobic plate counts (day 0, treated vs. untreated) demonstrated the bactericidal effect of the treatment (Figure 6C). Analyses based on 16S rRNA sequencing suggested that the pH treatment was effective at eliminating V. chlorellavorus: V. chlorellavorus reads were not detected in the pH-treated reactor 2 days after treatment, whereas V. chlorellavorus comprised 25% of the bacterial community in the untreated reactor (Figure 6D). A second outdoor pH treatment experiment used the more sensitive and timely qPCR analysis to track infection (Figure 7).
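Operationally, the treatment described above reduces to a titrate-hold-neutralize sequence. The Python sketch below illustrates that control logic using the published set points (pH 3.5, 15 min hold, return to pH 7.5); the read_ph, dose_acid, and dose_base hooks are hypothetical hardware interfaces, not part of the published method, and the pulse-dosing loop is one possible implementation rather than the procedure used at Heliae.

# Minimal control-loop sketch of the titrate-hold-neutralize sequence:
# HCl pulses down to pH 3.5, a 15 min hold, then NaOH pulses back to pH 7.5.
# read_ph(), dose_acid(), and dose_base() are hypothetical hardware hooks.
import time

TARGET_LOW, TARGET_NORMAL = 3.5, 7.5   # treatment and operating set points
HOLD_SECONDS, TOL = 15 * 60, 0.05      # hold duration and pH tolerance

def ph_shock(read_ph, dose_acid, dose_base, poll=5):
    # Step 1: titrate down with small acid pulses to avoid overshoot.
    while read_ph() > TARGET_LOW + TOL:
        dose_acid()
        time.sleep(poll)
    # Step 2: hold at the low set point for the treatment duration.
    t_end = time.time() + HOLD_SECONDS
    while time.time() < t_end:
        if read_ph() > TARGET_LOW + TOL:
            dose_acid()
        time.sleep(poll)
    # Step 3: neutralize back to operating pH with base pulses.
    while read_ph() < TARGET_NORMAL - TOL:
        dose_base()
        time.sleep(poll)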
A mildly contaminated culture (20% bacterial attachment) removed from an open pond raceway (1000 L) was used to inoculate two additional 1000 L raceways that were operating mixotrophically. One raceway was left untreated while the other received a pH-shock treatment. Bacterial attachment in the untreated reactor dramatically increased to 100% within 3 days, and cell dry weight declined as the culture crashed within 3-5 days (Figures 7A,B). These changes followed an increase in detection of V. chlorellavorus in the untreated reactor via qPCR on day 2, when attachment was at 40% (Figure 7B). If this had been a single commercial scale reactor run, qPCR would have provided a 3-day warning that the culture was headed for a crash, and crop protection strategies could have been initiated. V. chlorellavorus detection in the pH-treated culture remained stable, while attachment was reduced to 5% and cell dry weights continued to increase for 11 days to 8.5 g/L (Figure 7). This response confirmed the efficacy of the pH treatment against this previously fatal V. chlorellavorus predator in outdoor ponds used to grow Chlorella.

FIGURE 7 | Impact of the pH-shock treatment on (A) Chlorella cell dry weight, (B) bacterial attachment, and (C) qPCR assay Ct values for Vampirovibrio chlorellavorus from outdoor pilot scale reactors (1000 L, n = 1) inoculated with a contaminated Chlorella culture. The Ct value, or cycle threshold, in (C) decreases as the target abundance increases. Lysates prepared for qPCR on day 1 from the pH-treated reactor failed to amplify.

To date, we have confirmed the efficacy of the pH-shock treatment at commercial scale (130,000 L running volume) during nine additional reactor runs that were compared to untreated control cultures. The longevity of the pH-treated ponds (12.7 ± 2.7 days [mean ± 1 SD]) was significantly higher (t-test; t(1) = 4.47, P < 0.01) than the longevity of the untreated ponds (7.0 ± 2.7 days [mean ± 1 SD]), increasing the total harvested biomass. Additionally, we found that for a typical batch operation involving scaling up and transferring cultures every 6 days between outdoor reactors, Chlorella could be treated right before transfer for at least two sequential transfers; this treatment schedule resulted in an increased culture longevity from eight to over twenty consecutive days (Figure 8).

FIGURE 8 | Longevity of a Chlorella culture was prolonged by applying the pH treatment repeatedly to the same culture upon transfer, at the beginning of at least two consecutive batch cycles.

DISCUSSION

Microalgae are among the few species in industrial microbiology that are grown in open ponds. Consequently, as in agriculture, crop protection is one of the most important challenges facing the microalgal industry, and it has been recognized as a chief limiting factor for microalgal production at commercial scale (McBride et al., 2016). Previously, pests encountered in commercial microalgal systems were not widely reported, and were even minimized in an effort to make the microalgal industry appear more promising. Today, a diverse assemblage of zooplankton, fungi, bacteria, and viruses are known to attack microalgal cultures, and their impacts range from chronically reduced production to swift and irreversible culture crashes (Gong et al., 2015; Touloupakis et al., 2015; White and Ryan, 2015).
Despite a more open acknowledgment of pests and the more frequent use of molecular diagnostic techniques for predator and pathogen detection, there are still very few detailed descriptions of crop protection strategies. Reports of treatments have included commercial fungicides (McBride et al., 2014) and hydrogen peroxide (Carney and Sorensen, 2015, U.S. Patent No 9,113,607) for fungal pathogens, pH transitions for rotifers (Zmora and Richmond, 2004), and hypochlorite (Zmora and Richmond, 2004) and size-selective pulsed electric fields (Rego et al., 2015) for protozoa; see McBride et al. (2016) for a detailed review of reactive and preventative strategies. Our study identified the causative organism that was apparently impacting our commercial Chlorella ponds. During infection, a larger proportion of V. chlorellavorus sequence reads was present in the region immediately surrounding the algal cells (i.e., the phycosphere) compared to bacteria free-living in the supernatant (Figure 2). This observation is in agreement with reports describing the epibiotic lifestyle of this predator (Coder and Starr, 1978). Although V. chlorellavorus was originally identified as the potential causative crash agent via 16S rRNA sequencing (Figures 1 and 2), qPCR was relied upon for advanced warning of a culture crash via increasing V. chlorellavorus abundance (Figure 7C). In general, detection of V. chlorellavorus via qPCR preceded visual observations of an impending crash, including an increase in the percent of Chlorella cells with attached bacteria (>20%; Figures 6B and 7B), algal cell clumping, and the culture changing color from dark green to yellowish brown (Figure 5).

FIGURE 9 | Summary of cytoplasmic pH regulation in response to the pH in the surrounding media for members of phylum Chlorophyta and Cyanobacteria reported in the literature (seven and four members, respectively). Chlorophyta includes Chlorella kessleri (El-Ansari and Colman, 2015), Chlorella pyrenoidosa and Scenedesmus quadricauda (Lane and Burris, 1981), Chlorella saccharophila (Gehl and Colman, 1985), Chlorella vulgaris and Chlorella fusca (Küsel et al., 1990), and Dunaliella parva (Gimmler et al., 1988). Cyanobacteria include Agmenellum quadruplicatum and Gloeobacter violaceus (Belkin et al., 1987), Anacystis nidulans (Falkner and Horner, 1976), and Synechococcus sp. (Kallas and Castenholz, 1982). Note that only one cyanobacterium (A. quadruplicatum) did grow at a pH below 6.0. All other data points are expressed as mean values with error bars representing 1 standard deviation.

The analytical techniques developed and utilized here (i.e., the V. chlorellavorus-specific qPCR assay and % bacterial attachment to Chlorella cells) proved important for determining the timing and success of treatment applications. Although largely unaccounted for in the literature, V. chlorellavorus is known as a devastating predator of Chlorella. We have experienced first-hand how this cyanobacterium can induce culture crashes in a matter of days. Likewise, other Chlorella ponds in the southwestern USA have suffered similar crises from this predator (personal communication from Drs. P. Lammers, J. Brown, and M. Sommerfeld). Molecular reports available in the NCBI GenBank database suggest that this predator is widespread globally and therefore could potentially affect Chlorella cultures regardless of their location. In this context, the development of a treatment against
V. chlorellavorus is especially significant because (1) no treatment was previously available and (2) Chlorella-like microalgae are among the most commonly produced crops in the industry. In addition, the treatment was opportunistically validated in outdoor mixotrophic cultures, which are one order of magnitude more productive than traditional photoautotrophic cultures (Tonkovich, 2016, U.S. Patent No 2015/0118735 A1). The crop protection strategy demonstrated here is straightforward and can be inexpensively applied at commercial scale (∼US$100 for a 130,000 L reactor). The treatment has a low risk of failure given the broad tolerance of Chlorella to low pH. The pH-shock is highly selective against prokaryotes, as illustrated by the two-log decrease in the total aerobic counts. Indeed, green microalgae (Chlorophyta) are better prepared than cyanobacteria to regulate internal pH relative to low external pH (relevant studies are summarized in Figure 9). Previously, pH-shock treatment has been used to control lactic acid bacterial contamination in anaerobic yeast cultures in the Brazilian bioethanol industry (Basso et al., 2011). In microalgae, the transition to moderately low pH (6) has been used to control diatoms (Zmora and Richmond, 2004), while the transition to high pH (9-10) has been used to promote the growth of cyanobacteria over green microalgae (Vonshak et al., 1983; Touloupakis et al., 2015). Because the pH-shock treatment against V. chlorellavorus is applied for a discrete amount of time (15 min), resistance is less likely to develop than if it were applied continuously to the culture. To date, there is no indication that V. chlorellavorus has developed a tolerance to the treatment in our growth systems. Successful rescue of an actively infected algal culture heading for a crash is rare, and attempts to circumvent a crash generally end in total loss when the culture is discarded. Total culture losses due to infections can greatly impact productivity, not to mention provide ample pathways to infect other ponds in the area during operation and take-down (reviewed by White and Ryan, 2015). The pH-shock treatment we have described here is capable of completely reversing high infection rates (>70% attachment) of a predatory bacterium, doubling Chlorella culture longevity and increasing the total harvested biomass from a commercial scale production platform. This treatment is now a routine component of our company's mixotrophic operation.

AUTHOR CONTRIBUTIONS

EG and EL studied and reviewed the pH regulation in microalgae. EG and CS planned, designed, acquired, analyzed, and interpreted the data leading to the pH treatment development. LC and BB developed molecular assays and analyzed and interpreted the molecular data leading to the identification and monitoring of V. chlorellavorus. All authors drafted, read, critically revised, and approved the final manuscript.
2017-05-03T23:13:00.087Z
2016-06-20T00:00:00.000
{ "year": 2016, "sha1": "65927daa60c8e38a29c5d9d9a908c86b8db141b6", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2016.00848/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "65927daa60c8e38a29c5d9d9a908c86b8db141b6", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
225696267
pes2o/s2orc
v3-fos-license
Assessment of Whitehead Process Philosophy and Pedagogy in Nigeria: Implications for Global Citizenship among Teachers and Students

This paper assesses teachers' and students' self-perception as global citizens in the context of Alfred North Whitehead's Process Philosophy. The aim of the paper is to identify the potential for global citizenship within pedagogy and learning. One hundred students and 50 teachers from Peaceland College of Education, Enugu, in Nigeria, were selected systematically and examined on their belief that an action in situ could pose global consequences or benefits. Respondents were also assessed on other dimensions of globalization. Results showed that although more teachers believed themselves to be global citizens, there was little tendency to stimulate students in this regard. Students conversely exhibited their potential for global citizenship by listening more to foreign media than their teachers. For students, however, knowledge of current affairs and interaction with foreigners were significant determinants of self-perception as global citizens; for teachers, it was the motivation to teach and the level of education. The study concludes that adopting process philosophy in schools has much promise for the skills, values, attitudes, and dispositions needed to live in a global society.

Introduction

In this paper, self-perception as global citizens among teachers and students is examined in the context of Alfred North Whitehead's process philosophy. Global citizenship is considered crucial in this era of globalization driven by information technology. An individual's patriotism to the world enables him or her to see humanity as one-to truly see the world as a global village-so that, irrespective of distance, individuals love their neighbors as themselves. Global citizenship is indicated by patriotism-first to the nation, then to the world. It is assumed that with increases in the spread of information and education, there will be a communality in perception and understanding of humankind as it relates to fellow human beings and the environment. This increased awareness will motivate patriotism in the world and global citizenship-the ultimate reflection of globalization (United Nations Educational, Scientific and Cultural Organization [UNESCO], 2017a). But the push for and movement toward globalization has not reduced discrimination among people. Divisions along the lines of tribe, race, region, and religion still abound, despite increased access to information and knowledge of the costs and consequences of the destruction they cause. Nonetheless, education-the reorientation of the mind-remains the permanent solution to these vagaries and vices in the world (Duncan, 2013; Magrini, 2009). Schools are one of the best locations for such reformations to take place (Pohan, 2012). What pedagogical approach or philosophy will suffice to stimulate teachers and learners to see the world-the harmony and interplay of its elements-truly as one and reduce the propensity of man to destroy it? What forms of education will stimulate students to learn and understand that the incidence of their actions or inactions has a ripple effect that could be globally far-reaching? What disposition do teachers, and indeed students, need to possess to perceive and preserve the unity of humanity despite its diversity? This study profiles Nigerian teachers' and students' sense of global responsibility against the process philosophy of Alfred North Whitehead.
Whitehead Process Philosophy is the most modern of all process philosophies. It is the consummation of the belief that the current state of the world is dynamic-in a state of flow, of evolution, in a "process" of "becoming" (Smith, 2010). To this effect, the world is always in a state of flux. Whitehead opposes the material or static state of existence and favors or proposes an organic metaphysical state. Like any organism, what affects one part affects the whole even while it is still growing, evolving, or developing (Monserrat, 2008; Sjostedt-H, 2016; Smith, 2010; Ugwuozor, 2019). Based on this belief, it is critical to note that there are ripple effects of global proportions for an action in one part of the world-the effect of an action or inaction could be adapted, replicated, or reacted to elsewhere, whether beneficial or harmful to mankind. This is the spirit of globalization, driven by rapid information dissemination and sharing. But whether the pedagogical structure in Nigeria fosters this mentality is indeterminate. The current pedagogical approach among Nigerian teachers is often top-down, that is, teachers deliver the instructions and examine their students based on those instructions without effective student participation and contributions (Emeh, Abang, Asuquo, Kalu Agba, & Ogaboh, 2011; Ugwuozor, 2017). Most teachers are thus considered "banks" of knowledge from which the students may "withdraw" deposits (Freire, 1970). Nigerian students often study to pass their exams and may not learn to master the skills needed for their profession (Emeh et al., 2011). This teaching method still occurs despite popular calls for a "learner-centered" approach to teaching. Marina and Halpern (2011) argue that time and other factors constrain teachers to simply deliver learning content as instructions to students, for which the students are assessed during exams on their level of memorization rather than their understanding. Students, therefore, tend to learn by rote and often have a narrow perspective on issues, lacking the capacity to relate to or translate the impact and implications of a local action abroad. Students may well receive instructions that satisfy the cognitive domain of learning by passing their exams, but may lack the capacity to understand the global implications and applications of their learning-the affective domain. Thus, despite the apparent manifestation of globalization as evidenced in fashion trends, music, and other fads, the sense of global citizenship for these students in their various categories is uncertain. The extent to which people believe that an action in one part of the world can have effects or impacts in other places is largely unknown (Hill, 1999), since there is evidence of global conflict. There may be pedagogical implications of this effect. Since schools are a place for socialization of children and students (Theimann, 2016), they are also a place to teach global citizenship, as teachers' perceptions of themselves as global citizens are likely to stimulate students. The rest of the paper is organized as follows: Section 2 briefly reviews Whitehead Process Philosophy and its implications for global citizenship, ending with a conceptual framework for the study. Sections 3 and 4 describe the model and data, respectively. The results and their implications are discussed in Section 5. The study concludes in Section 6 with a summary of major findings and conclusions.
Whitehead Process Philosophy

This subsection first describes the fundamentals of Whitehead Process Philosophy. It also highlights variants of his proposition and the current level of its application toward global citizenship. Global citizenship includes the situation in which individuals begin to see themselves as part of a global system. Ideologies and ideas that will bring about an increase in the social, emotional, and economic well-being of people should be prioritized. Thus, the need for people to understand that the effects of actions or inactions can be globally far- and wide-reaching is crucial to building momentum for global citizenship and participation (Ugwuozor, 2017). For this mentality to be imbued in human beings there has to be a philosophy that guides this frame of thinking. Process philosophy and its variants are considered the best in this context, which demonstrates the need to assess the worldview of Alfred North Whitehead. Much of this overview of Whitehead is drawn from Smith (2010) and Monserrat (2008). Whitehead's work is reviewed here based on the following constructs or concepts: prehension, internal relatedness, and concrescence. These concepts are borne out of his concept of panexperientialism-the interpretation of the universe draws from the prehension and internal relatedness of world elements-so world elements are always in a state of "experiencing" and "becoming". He had earlier described the view that nature is inert and that the universe consists of bits of matter that are extended, inert, lifeless, valueless, and purposeless. This view he called scientific materialism, but it did not survive Darwinian evolutionary theory. Plural nexus was his counter-response to scientific materialism, from which the theory of prehension and internal relatedness was conceived. He posited that within each man is a nexus of experience. He illustrates, for instance, that a rock is an accumulation and continuation of experiences of molecular interaction over eons. Thus, the steady state of the world is not an outcome of "survival of the fittest"; rather, while the world is always in a state of flux or becoming, the elements of the world more or less have been interacting to the point of "water finding its level"-a steady state (Smith, 2010). Teachers, however, often do not teach students to discover and to identify themselves as part of a grand picture (Blackmore, 2016; Marina & Halpern, 2011). They often fail to communicate to students, based on the subject they are taught, that there are spatial and temporal effects of their actions and inactions. Students therefore should be taught to relate to the subject matter, irrespective of discipline, and to perceive themselves as part of a process of becoming part of the world. Thus, students should be taught to know that their actions in situ have an impact in time and space. Smith (2010) also notes that Whitehead preferred that students learn to harmonize their "aims" in life: initial aim, inherited aim, and adjusted personal aim. A student possesses an initial aim in which they may contrast the ideal for a particular learning outcome with the process that may culminate in its realization; students may possess an inherited aim if and when they realize their role in a wider process of realizing an outcome, of becoming.
For instance, Bill Gates and his team were part of a process that links the world together through information and communication technology. Therefore, students should be taught not to objectify the world as lifeless but to see that their learning or experience makes them significant contributors in a greater process of becoming. Some other schools of thought agree with Whitehead, perhaps with a little variation. The pragmatist school of thought, as supported by Kilpinen (2009), indicates that the process school of thought is evolutionary in nature after the order of Darwin. In the Darwinian context, nothing is static-the world is dynamic. Kilpinen (2009), citing the works of Dewey (1922, 2002), indicates that human habits are intrinsically situated but need to be formed-"knowledge is habit". He posits that the difference between the mind of an infant and that of an adult is the level of formation due to habit, so an adult may deal with more complex issues than a child because the former has a better formed mind due to habit. This is the Process because, as the concept implies, learning remains ongoing and cumulative to the extent that habits are formed. It also indicates, for instance, that a scientific man and a philosopher, like a carpenter, physician, and politician, know their habit, not their consciousness. By implication, students should be taught to learn to elicit new habit formation; how the lessons or knowledge are transmitted is the gap in the observations of Kilpinen and Dewey. Habermas' (1998) social-communicative theory of rationality could provide the epistemological framework in which to gain further insight into the ongoing discussion. He defines rationality as interpersonal relationship, and therefore it has a social aspect (Schaefer, Heinze, Rotte, & Denke, 2013). His work, The Inclusion of the Other, reveals a dissatisfaction with the traditional epistemology in which self-understanding reflects one universal worldview. For Habermas (1995, 1998), this conception of self or worldview is no longer adequate given social and ideological pluralism, a fact I think Whitehead would concede as a metaphysical given. Communication for learning, therefore, is the fulcrum of Habermas's thought (Hermann, 2017). He insists that the formation of the subject (based on habit) is a communicative process since "men learn from each other". Learning from one another, in turn, stimulates democratic ideals in the minds of people, perhaps because no man knows it all and everyone needs the knowledge of others (Cahn, 2004; Dewey, 1916; Papastephanou, 2017; Sandlin, Burdick, Norris, & Hoechsmann, 2012; Straume, 2016; Ugwuozor, 2016). But whether teachers possess some of the elements of these philosophies is another matter. Often, discourse on philosophy is for intellectual purposes-for mental stimulation. Its application in the classroom or other theaters of training is rare. Moreover, literature on the application of process philosophy to student learning of global citizenship is rarer still. Far more of the literature on global citizenship is delivered by international donor agencies.

Overall Implications for Global Citizenship

International development organizations encourage young people to develop the knowledge, skills, and values they need to engage with the world (Oxfam, 2017).
Based on man's apparent inhumanity to man and its attendant consequences, including terrorism, hunger, war, corruption, and death, these organizations have developed a template for global citizenship education to curb, if not to eliminate, this scourge. UNESCO (2017a) proposes three core conceptual dimensions of learning for education to be transformative (see Figure 1): knowledge, socio-emotional, and positive change. Knowledge (cognitive) must touch the heart (socio-emotional) and turn into action to bring about positive change (behavioral). They argue that the framework emphasizes an education that fulfills individual and national aspirations and ensures the well-being of all humanity and the global community. Another framework is recommended by Blackmore (2016). She argues that students should be taught critical thinking to make them engage effectively with the world via a series of dialogues and reflective feedback, which, in turn, makes all parties in the conversation take responsible action (see Figure 2). Based on these parameters, the level of development in the developed world may suggest that global citizenship education is imbued in its curricula. For instance, students are taught by default how to preserve their environment and the consequences of not doing so (Hill, 1999). Students are taught the various causes and effects of climate change: the effect of greenhouse gas emissions due to industrialization, improper disposal of plastic and toxic wastes, and the resulting prevalence of flooding, deforestation, and desertification; ice melting at the polar regions; and wildfires, among others. The developed world possesses a better-looking and better-conserved environment. Due to lack of education, the undeveloped world is more likely than the developed world to destroy its environment. Moreover, funds for environmental conservation flow from the developed countries. What is not clear, however, is the impact of curriculum and pedagogy on a sense of humanity, especially in the current political climate and economy. While literature exists that agrees that an improved curriculum may translate to global citizenship (Peters, Britton, & Blee, 2008), the author of this study has yet to find literature about an experiment that has measured the translation of process pedagogy to global citizenship. Moreover, the global consciousness of global citizenship remains threatened by politics that promote tension in race relations, on immigrants and immigration, on sexuality and sexual orientation, on international trade, and on terrorism.

Conceptual Framework

The author-a believer in Whitehead's philosophy-accedes to the following:
• Education in terms of (Whitehead) Process Philosophy: pedagogy and learning in schools and elsewhere is motivated by the consciousness that every action has consequences that could be of global proportions and significance. This "mentality" will stimulate the understanding that good or bad actions will make the world a good or bad "becoming".
• When this mentality or belief is adopted and assimilated in our daily processes, it becomes easy to see the world like an organism, a body where pain in one part affects the whole. This belief will stimulate patriotism and global citizenship. It will then become easier to perceive the prehension and internal relatedness of all elements that make up the material and metaphysical space of man.
• With an increased level of patriotism and global citizenship, it will be easy to lure mankind away from the vices that bedevil the global polity and economy. For instance, it may become easier to reduce, without force, the incidence and prevalence of global issues like corruption, environmental degradation, and terrorism, among others.

Source: Author's original conception.

Model

Multinomial Logit Regression was used to identify determinants of self-perception as global citizens among the respondents. Respondents were classified according to their belief that an action committed locally may have ripple effects of global magnitude. Responses-the dependent variable-were categorized as "Yes", "No", and "Don't Know" to indicate belief, non-belief, and indifference, respectively. The model takes the form

$\ln\!\left(\frac{P(y_i = j)}{P(y_i = 0)}\right) = \beta_{0j} + \sum_{k} \beta_{kj} x_{ik},$

where i = cases, j = categories, and k = independent variables. Because parameter estimates of the model are often difficult to interpret and may not be intuitively appealing, the model is interpreted in terms of the relative risk ratio or odds ratio, specified as

$\mathrm{RRR}_{kj} = \exp(\beta_{kj}),$

which indicates how the probability of choosing j relative to 0 changes if we increase $x_k$ by one unit. We then examine the probability of "Yes" and "Don't Know" responses relative to "No" responses to the question on the effect of a local action abroad.

Data

Primary data were collected using questionnaires from students and staff of Peaceland College of Education, Enugu. The school possesses all the attributes of higher education. It is affiliated with local and international universities and attracts staff and students from all walks of life. It is located in the heart of the Enugu metropolis and leverages the advantage that Enugu is the former capital of Nigeria's east central region. Enugu remains the most developed city in southeast Nigeria. It is strategically located because it is bounded by the north central region through Benue and by the other states of southeast Nigeria-Anambra, Abia, and Ebonyi. Residents are mostly students, civil servants, businessmen, and farmers (in the hinterlands). Furthermore, it is the most peaceful capital in southeast Nigeria, with drainage, roads, and an international airport. Multistage sampling was used to select 100 students between 100 level (freshman) and 400 level (final-year students). The first stage required students to complete a listing in each level. Then, based on their population, a representative sample was selected. All levels had an almost equal population; 25 students were selected from each level. Respondents were selected on a random basis in each stratum prior to the commencement of their compulsory lectures. All students had an equal chance of participating. Teacher-level data were collected from a simple random selection of teachers drawn from a complete list of all the teachers in the school. Data were collected to measure proxies of variables peculiar to process ontology and panexperientialism: prehension, concrescence, and internal relatedness. On this basis, Whitehead is assessed on the level of perception of globalization and the sense of global citizenship. From the pedagogical standpoint, data were collected on pedagogical preference and its bearing on teachers' and students' perception of globalization and sense of global citizenship.
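For illustration, a model of the form specified in the Model section above can be estimated with standard software. The following Python sketch uses statsmodels' MNLogit on hypothetical toy data-the variable names and values are placeholders, not the survey data-with relative risk ratios obtained by exponentiating the coefficients.

# Illustrative multinomial logit estimation; toy data stand in for the survey.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "current_affairs": rng.integers(0, 2, 300),  # knows global current affairs
    "foreign_contact": rng.integers(0, 3, 300),  # interaction with foreigners
})
# 0 = "No", 1 = "Yes", 2 = "Don't Know" (base outcome is "No")
df["belief"] = rng.integers(0, 3, 300)

X = sm.add_constant(df[["current_affairs", "foreign_contact"]])
res = sm.MNLogit(df["belief"], X).fit(disp=False)
rrr = np.exp(res.params)  # one column per non-base category
print(rrr)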
Data were collected on the following: factors that motivate teachers to teach and whether they teach to stimulate students toward global citizenship; the pedagogical preferences of teachers and students and whether their teaching style is participatory, learner- or teacher-centered; and levels of interaction with foreigners, to indicate their "appreciation" and "acceptance" of the views of people outside their frame of reference. Data were also collected on their listenership to local and foreign news and how interested they might be in their immediate and wider surroundings. Data were also collected on their self-perception as global citizens and the possible factors responsible for that perception. Self-perception as a global citizen is indicated by the level of belief that an action or inaction in Enugu may have ripple effects in England and elsewhere in the overall global process of "becoming" or evolving. This is the crux of Whitehead's organic process philosophy.

Results and Implications

In this section, factors that bear on the perception of teachers and students as global citizens are discussed within the framework of Whitehead process pedagogy. As a backdrop to the determinants of teachers' and students' perception of themselves as global citizens, what motivates teachers to teach is discussed first, followed by the pedagogical preferences among teachers and students. In Section 5.3 both groups are assessed on their frequency of international interactions to appraise their sense of globalization, and in the last section, details and determinants of global citizenship among the respondents are discussed.

Motivation to Teach

Forty percent of teachers teach to relate their lessons to a wider environment (see Figure 3). This category of teachers is likely to adhere to Whitehead process pedagogy. They are likely to stimulate the consciousness of students for relating these lessons to their immediate and wider community. What implications and applications might this hold for societal development? Most teachers, however, teach for "selfish" reasons. A significant proportion (34%) teach to learn more about the subject, perhaps following the principle that "teaching is twice learning", and some others teach to discover more about themselves. This supports the fact that in Nigeria, teaching is a calling for the unemployed and underemployed. It is often a last resort for job seekers. Most of these teachers, therefore, may lack the capacity to transmit their lessons in a global way, that is, the content and illustrations of their lessons may not be globally far-reaching. In the next section, pedagogical preferences among teachers and students are discussed.

Pedagogical Preferences among Teachers

Sixty-six percent of teachers prefer students to learn by doing their own research and forming their own notes. This implies that students may possess the potential to discover the varieties of the subject matter outside of their immediate environment. But it is unclear whether forming their own notes is not just for the sake of passing exams. Thirty-two percent of teachers prefer students to learn by reading only recommended texts and by forming their own notes. But how "global" or far-reaching the content of the textbooks is will determine how well this approach stimulates a consciousness that the effect of a local action could ripple outward. Students who form their own notes are likely to retain learning because of their ability to recast or rephrase the note contents.
However, teachers (2%) who prefer students just to read recommended texts without forming their own notes are likely to make them learn by rote and will be less likely to stimulate them toward "globalized" learning-to learn about the implications and applications of their lessons outside the classroom and in the world at large. Figure 4 shows the teachers' pedagogical preferences.

FIGURE 3 legend | Distribution of teachers (%): 1 = to learn about the subject, 2 = to discover self, 3 = to learn about the immediate environment, 4 = to discover the wider (global) environment and its functions, 5 = others (specify).

FIGURE 5 legend | Students' preference for teachers' pedagogical styles (%): 1 = students who prefer to pass exams by reading notes, handouts, and textbooks; 2 = students who prefer to form their notes only; 3 = students who prefer illustrations with global issues; 4 = none of the first 3 mentioned.

Figure 5 shows the students' pedagogical or learning preferences. As with the teachers, a greater distribution (47%) prefers to form their own notes by reading widely from many sources. This further validates the fact that students possess the potential for globalized and integrated learning, to understand the wider implications and applications of their disciplines. Forty-three percent of students prefer to read notes, handouts, and textbooks rather than do research-these students may simply read the required materials only to pass their exams. Less than 5% of students are global in their learning preference. That is, they were the only ones who preferred to be far-reaching in their interpretations of what the teachers teach. It can be implied, therefore, that it is more convenient for students to be narrower in their assimilation of the subject than to investigate the subject in a wider, far-reaching manner. Some factors responsible for this approach might be time constraints (too many lectures) or not having found the right motivation for globalized learning.

Interaction with Foreigners

This subsection describes how students and teachers can be potentially globalized. This is indicated by their frequency of travel abroad (for teachers) and level of interaction with foreign students for cross-fertilization of ideas. Section 5.4.1 discusses how traveling abroad will likely influence process pedagogy and process learning among teachers and students, respectively. Section 5.4.2 presents the same issue for students, but in terms of their interaction with foreigners (either by traveling abroad or meeting with foreigners in Nigeria). Figure 6 shows that almost 70% of the teachers had never traveled abroad. Most of their information about foreign issues may be gleaned from the media. Only about 8% of the teachers traveled often and interacted with the wider world. This may not hold much promise in stimulating students' global outlook or worldview. Nonetheless, interacting with the wider world may not be limited to just foreign trips. Listening to foreign news and media may also bring the world into one's home. Nonetheless, it is uncertain what makes a student or teacher more globalized: frequent direct interaction with foreigners or listenership to local and foreign media.

Students' Interaction with Foreigners

Like the teachers who have never traveled abroad, most of the students have not interacted with their counterparts abroad. Figure 7 shows that only 6% of them claimed they did so often; about 53% interacted infrequently and 40% had never met students from foreign countries before.
Exchange programs are often a good means of getting students to interact across the globe, for ideas to cross-fertilize, and for increased collaboration. But there are little data on schools that have exchange programs, although there is anecdotal evidence for private schools that go on foreign excursions (Agarwal, Bansal, & Maheshwari, 2010). In any case, it can be assumed that students' infrequent interaction with their counterparts abroad limits their potential for globalized learning and global citizenship.

Global Citizenship

Global citizenship of students and teachers is assessed based on their listenership to local and foreign news and a comparison of their beliefs in globalization, that is, whether they believed that what happens in Nigeria has an impact or implications elsewhere in the world. The determinants of their perception of globalization are assessed to identify factors that significantly influence such beliefs. Figure 8 shows that 65% of the teachers listen to local news very often, while half of this number of students did not listen to local news. Almost 20% of students rarely listened to local news and 45% of them sometimes did. The higher listenership among the teachers may be attributed to age difference. Older people listen more to local news, perhaps because of the need to follow issues in politics and economics. Younger people listen more to local news for entertainment (Wekesa, 2016). This may be further corroborated because political issues dominate local news (Wekesa, 2016). Moreover, more youthful students may find social media a more beneficial source of entertainment and information.

Figure legend | Distribution of students (%): 1 = often, 2 = not too often, 3 = never.

Figure 9 shows that listenership to foreign news is about equal between students and teachers. Students' regular listenership is marginally higher than the teachers' (34% and 33%, respectively). Marginally higher listenership also exists for students among respondents who do not regularly listen to foreign media-about 49% of the students and 48% of the teachers. Among respondents who rarely listened to foreign media, there were more teachers (19%).

Figure legend | Distribution of students and teachers (%): 1 = very often, 2 = sometimes, 3 = rarely.

With these findings, it is inconclusive which respondent category is more globalized. However, identifying what factors may motivate either students' or teachers' listenership to foreign media will be crucial to assessing their potential for global citizenship.

Self-Perception as Global Citizens

This subsection provides the most direct proxy measurement for absorbing Whitehead Process Philosophy. Figure 10 shows that 62% of the teachers are considered "global" in their teaching, that is, teachers who believe in and prefer to deliver their lessons in terms of their impact and implications in Nigeria and abroad. This implies that they relate the current state of affairs in Nigeria to what is happening abroad to buttress the impact of their lessons. Thus, they may relate to local and foreign events to illustrate their lectures. Conversely, the distribution of students who believe that actions here have an impact outside Nigeria is smaller. About 40% of them claim they don't know about possible effects of local actions abroad and 38% indicated they did. This may suggest that lecturers find it difficult to transmit their global awareness to the students; however, determinants of global citizenship among students and teachers are discussed in the next section.
Multinomial Logit Regression

In this section, the determinants of global citizenship among teachers and students are estimated with a discrete choice model-the multinomial logit model. Table 1 and Table 2 show estimates for students; estimates for teachers are in Table 3 and Table 4.

Multinomial Analysis for Students

Table 1 shows that students who are knowledgeable about global current affairs and interact often with their international counterparts, relative to those who are not knowledgeable and do not interact, are more likely to be globalized in their perception of the world and are more likely to believe that actions in Nigeria have impacts and implications abroad. However, students who interact more often with their counterparts abroad are more likely to possess a neutral perception on globalization than those who do not believe that what happens in Nigeria has effects abroad.

Relative Risk Ratio Estimates for Students

Because parameter estimates are not the most intuitively appealing way of interpreting a multinomial logit model, the probability that the respondents' characteristics influence their perception of globalization, relative to not doing so, is estimated in Table 2. Table 2 shows that students' knowledge of global current affairs increased the likelihood of their belief in globalization-that what happens in Nigeria influences or has an impact elsewhere in the world-by 34 times, and vice versa. Also, students who interacted more with foreigners are 9 times more likely to believe that what happens locally has an international impact than those who did not. Nonetheless, by the same magnitude, students who interacted more with their foreign counterparts are more likely to be indifferent or neutral about the impacts and implications of local actions abroad than those who do not believe. The ability to gauge the ripple effect of local actions internationally may depend on the level of interactions among the students. Students who interact on a micro level-those who interact with very narrow concepts-may not see the likelihood of any consequence of a local action internationally. Also, those whose mentality is not "global" in nature may not possess a mindset that translates local actions into international consequences. For instance, students who prefer to emigrate abroad because of their local societal vagaries may not relate or translate the consequence or effect of the actions that took place where they are to where they intend to emigrate. For example, how does the embezzlement of education funds in Nigeria, which affects them directly, affect the quality of education they intend to pursue in Norway? At the micro level, this may be difficult to determine directly or indirectly, or in the short or long term, but one who possesses a "global" perspective may observe, through international relations, the meso- and macro-consequences of local actions internationally. An accumulated effect of embezzlement that leads to underdevelopment, which, in turn, may force students to (illegally) emigrate, may elicit negative reactions from their host countries through anti-immigration policies. Immigrants are often vulnerable to violence, murder, racism, terrorism, and a host of other vices. The parameter estimates in Table 3 suggest that, for teachers, the rationale for teaching and level of education did not influence their global outlook on life nor their pedagogy style.
Teachers were unlikely to increase their global outlook, by 1.2 units for every unit increase toward realizing their goals for teaching. Level of education did not influence teachers' globalized outlook. Furthermore, teachers who were likely to be indifferent to issues of globalization, relative to those who did not believe in it, were likely to decrease by 4.5 units for every unit increase in rank for anyone with higher education. But as noted earlier, parameter estimates of the multinomial logit model are limited in explaining the relationship between dependent and independent variables. The probability of teachers possessing a global outlook on life, relative to not possessing it, is estimated in the next section.

Relative Risk Ratio Estimates for Teachers

Estimates of the Relative Risk Ratio in Table 4 show that motivation for teaching and level of education significantly influence globalization outlook and pedagogy among teachers, in a paradoxical sense. Increased motivation to teach reduces the likelihood of belief in a "globalized" perspective on teaching. Motivation to teach reduces the probability that teachers would support a globalized approach to teaching, rather than not support it, by 0.3. This implies that teachers who are more motivated to teach, although believing by default in process philosophy, are less likely to teach in the context of globalized learning than those who do not. Similarly, a higher level of education reduces the probability that teachers would support a globalized approach to teaching, rather than not support it, by 0.03. In this regard, teachers with a higher level of education who implicitly adhere to process philosophy-and believe that a local action may have an effect elsewhere in the world-are less likely to teach in this manner than those who do not. Possible reasons could include insufficient interaction time between students and teachers. Mumtaz (2000) argues that insufficient time is a limitation to participatory learning and action. Thus, teachers are often constrained to deliver instructions on the content (cognitive) rather than expounding on the subject with global illustrations (affective).

Summary and Conclusion

The major findings of this paper concern globalized pedagogy and learning among teachers and students in Nigeria. The paper assesses actual and potential beliefs about the effect of an action in situ elsewhere in the world and its implications for teaching and learning. First, teachers were assessed on their motivation to teach: 34% of them taught to learn more about a subject, and 40% of them taught to discover the wider world. This, however, does not suggest they knew or taught students about the effect of a local action abroad. Then, both teachers and students were assessed on pedagogical preferences. Most of the teachers preferred their students to learn more about their subject by doing their own research and forming their own notes. Similarly, both categories of respondents were appraised on their levels of interaction with foreigners. Most of the teachers had never traveled abroad and, likewise, very few students had ever interacted with foreigners. On this note, it may be concluded that both teachers and students were not globalized and may not believe in the effect of a local action elsewhere in the world. But traveling abroad or interacting with foreigners may not suffice for global citizenship.
Thus, students and teachers were assessed on their listenership to local and foreign media-since they could not directly interact with the wider world, the paper also assesses how they did so indirectly via the media. Results show that teachers listened to local media more than students, but students listened marginally more to foreign media than teachers. Last, students and teachers were assessed on their self-perception as global citizens-whether they believed that an action at home may have an impact abroad. This question is the most direct proxy measure of Whitehead Process Philosophy. Sixty-two percent of teachers and 38% of students believed they were global citizens. This result therefore suggests that teachers are more globalized than students. But whether this belief is reflected in their teaching is another matter. Moreover, it was necessary to identify the factors responsible for this belief. Students and teachers were therefore examined on the determinants of global citizenship. Knowledge of global current affairs and levels of interaction with foreigners were significant determinants among the students. For teachers, motivation for teaching and level of education were significant factors, but they reduced rather than increased self-perception as global citizens. Only duration in the teaching profession was a factor, but it was not significant. Further, teachers who believed in globalization were more likely not to express this belief, relative to those who did not. The rationale for this needs further examination. However, the literature (for example, Appel, Buckingham, Jodoin, & Roth, 2012) suggests that insufficient contact time with teachers could be a factor that hinders "participatory" and global citizenship education. Nonetheless, the results have shown the possibility of global pedagogy and learning. Teachers possess the potential to stimulate students toward globalized thinking-that a harmful action in situ could have global consequences in time and space-and, similarly, for beneficial actions. It is possible, therefore, from a philosophical standpoint, to persuade students toward altruistic and patriotic actions: toward environmental conservation and protection; toward improving and healing differences in race, religion, and region; toward less greed and corruption; and toward advancing further to restore human dignity with little or no cost to life and livelihood. Perhaps policymakers can be persuaded to integrate this idea into the school curriculum to help students gain better awareness of their connectedness. This consciousness would enhance students' cognitive development and observational skills (Pyle, 2002), encourage future generations to start to live, not just simply exist, in their environment, and is likely to reduce or eliminate bullying (Malone & Tranter, 2003). It would help students understand the world around them by shaping their behaviors to appreciate and preserve their surroundings (trees, animals, human beings, etc.) for what they are, rather than change them merely to meet their own needs. Such thinking, woven into a school curriculum, can become an integral element in communicating sustainability values. When people feel connected to someone or something, they are more likely to work hard to secure and preserve it. This study concludes that, by default, teachers possess more potential than students to adopt and adapt Whitehead Process Philosophy and Pedagogy.
Thus, it is possible for students and the rest of society to be trained to understand the prehension and internal relatedness of the elements of global systems, so that the outcome of one's action (and inaction), while not directly or immediately affecting the individual, could affect people and things elsewhere of which they may be unaware-a process way of thinking. Process mentality-belief in global citizenship-enhances one's ability to see the world and humanity as one interrelated unit. It is possible from the study's findings that increased motivation to teach and a higher level of education will stimulate process thinking.

Recommendations

The world is becoming increasingly globalized in a market-driven sense without any clear-cut underlying philosophy. In fact, the dominant philosophy, in the author's opinion, is one of maximizing self-interest, as the current global polity and economy demonstrate. Leading developed nations are striving for greater self-determination and influence, as exemplified by the United Kingdom and the United States in Brexit (Britain's exit from the European Union [EU]) and MAGA (the Make America Great Again slogan of the Trump Administration). Many influential political parties within the EU are clamoring for less immigration into their countries, which has produced more anti-immigration policies. These policies adversely affect legal migrants in these countries, and the latter may react negatively (in severe cases, through terrorism) to new immigrants; the extent to which immigrants are safe may depend on their wealth and influence. The push for immigration from developing countries is often a result of corrupt leadership and fear of criminal retribution. In addition, political leaders in developing countries often do not provide the adequate infrastructure needed for development. Resources are often misallocated and public funds embezzled. Consequently, there is strife over scarce resources that culminates in extreme negative consequences. In all these scenarios, there is a complete absence of Process Philosophy (the effect of a local action could be of global consequence), which hurts everyone. The belief that the world is one, as humanity is one, is not universal. This study recommends that teachers should be further motivated and trained to impart the study of process philosophy. This philosophy should be a core rather than an elective course in the colleges of education that train teachers. Teachers should be highly encouraged to interact directly with the wider world through foreign trips and exchange programs. Teaching should not be a mere job for the unemployed and the underemployed. Ideally, students and the wider community will learn to understand, rather than underestimate, their roles and contributions to a world that is constantly in a state of "becoming".

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.
Observational Window Functions in Planet Transit Surveys The probability that an existing planetary transit is detectable in one's data is sensitively dependent upon the window function of the observations. We quantitatively characterize and provide visualizations of the dependence of this probability as a function of orbital period upon several observing strategy and astrophysical parameters, such as length of observing run, observing cadence, length of night, transit duration and depth, and the minimum number of sampled transits. The ability to detect a transit is directly related to the intrinsic noise of the observations. In our simulations of observational window functions, we explicitly address non-correlated (Gaussian or white) noise and correlated (red) noise and discuss how these two noise components affect transit detectability in fundamentally different manners, especially for long periods and/or small transit depths. We furthermore discuss the consequence of competing effects on transit detectability, elaborate on measures of observing strategies, and examine the projected efficiency of different transit survey scenarios with respect to certain regions of parameter space. INTRODUCTION The signal-to-noise ratio (SNR) of a planetary transit detection in photometric time series data can, in the simplest case, be approximated by:

$$\mathrm{SNR} = \frac{\mathrm{depth}}{\sigma}\,\sqrt{n} \qquad (1)$$

In this equation, depth is the transit depth in magnitudes, $\sigma$ represents the photometric measurement uncertainty in magnitudes per data point (assumed here to be the same for all data points), and n equals the total number of data points observed during transit (Pont 2006). One essential assumption in this equation is the absence of any statistically correlated (red) noise, i.e., only random (white) noise is present. White noise is defined as noise that is uncorrelated from data point to data point; typical sources are photon noise and sky background noise. The relative contribution of white noise to the total noise decreases with increasing brightness of the observed target and number of data points. Pont (2006) and Pont et al. (2006) showed that calculations of transit SNRs with only white noise, as in Equation 1, are often insufficient and overly optimistic. Instead, one needs to account for the presence of red noise in calculations of SNRs and the corresponding yield projections for transit surveys. Red noise is defined as noise that is correlated from data point to data point; it is not necessarily removed through standard differential or ensemble photometry techniques. Typical sources of red noise may be weather, seeing changes, tracking/guiding errors, flatfielding errors, changes in airmass, or intrinsic astrophysical changes in target brightness. It does not change as a function of target magnitude, and is generally independent of the number of observational data points (see equation 9 in Pont et al. 2006). Thus, planetary transit searches are particularly sensitive to red noise, due to their focus on bright targets and high number of observational epochs: both aim to reduce white noise and therefore make red noise the dominant component. A detailed description of the transit detection SNR which includes both white and red noise components is given by Pont et al. (2006):

$$\mathrm{SNR_{transit}} = \frac{\mathrm{depth} \times n}{\sqrt{\sum_{i,j} C_{ij}}} \qquad (2)$$

where the $C_{ij}$ are the elements of the covariance matrix cov[i; j], representing the correlation coefficients between the i-th and j-th measurements obtained during transit.
All diagonal elements $C_{ii} = \sigma_i^2$ are not correlated with other measurements and thus represent the uncorrelated or white noise uncertainties in the i-th measurement. These diagonal elements are assumed to be the same, i.e., $\sigma_i = \sigma$ for all values of i. In order to make the above equation more practically calculable, Pont et al. (2006) assume that statistical correlation among data points from different transits will be much weaker than among data points observed during the same transit. They furthermore separate the total noise into a purely uncorrelated (white) component $\sigma_w$ and a purely correlated (red) component $\sigma_r$ and derive an approximation of Equation 2:

$$\mathrm{SNR_{transit}} = \frac{\mathrm{depth}}{\sqrt{\dfrac{\sigma_w^2}{n} + \dfrac{\sigma_r^2}{n^2}\sum_{k=1}^{N_{tr}} n_k^2}} \qquad (3)$$

where n is the total number of data points observed during all transits, $N_{tr}$ is the total number of transits observed, $n_k$ is the number of data points observed during the k-th transit, and $\sigma_w$ and $\sigma_r$ are the white and red noise components, respectively. By means of Equations 1 and 3, it is clear that a planet transit SNR can be regarded as a function of transit survey strategy and astrophysical parameters (see §2). If this SNR exceeds a certain threshold value, then an existing transiting planet is, for the purposes of this paper, defined to be detectable in the data. The window function determines the probability, as a function of planetary orbital period, that $\mathrm{SNR_{transit}}$ exceeds this threshold. In this paper, we examine the dependence of the detection probability upon several astrophysical and transit survey strategy parameters for a number of white noise and red noise assumptions as well as criteria based on a minimum number of transits sampled. Since our calculations are based on existing transits, we note that the following aspects are not taken into account: the estimated frequency of transiting exoplanets, any non-circular orbits, multi-planet systems, and detection of secondary transits. We also do not address the problem of false positives and how to weed them out. For more detailed studies of the above, we refer the reader to the following studies: frequency of (transiting) exoplanets: Gould et al. (2006) and Cumming et al. (2008); transit probability as a function of orbital elements: Barnes (2007), Burke (2008), and Kane & von Braun (2008); plus see Gaudi (2007) and Beatty & Gaudi (2008) and references therein for a comprehensive study of all factors influencing planet detections in transit surveys. We briefly outline our methods in §2, which describes our algorithm in §2.1, along with a justification for the threshold SNR selection, and addresses the respective influences of varying white and red noise components (§2.2), as well as the consideration of sampling at least $N_{tr}$ transits with one's data to constitute a detection (§2.3). We examine the effects of various survey strategy and astrophysical parameters in §3. Section 4 contains the application of window functions for selected scenarios and types of survey. We summarize and conclude in §5. ALGORITHM AND PARAMETERS In this Section, we provide a brief description of our algorithm and explain our choices of the globally used values of input parameters in our calculations in §3.
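To make the noise dependence of Equations 1 and 3 concrete, here is a minimal sketch of ours (not the authors' code; names are illustrative) of both SNR expressions:

```python
import numpy as np

def snr_white(depth, sigma, n):
    """Equation 1: white-noise-only SNR for n data points observed
    during transit; depth and sigma are in magnitudes."""
    return depth * np.sqrt(n) / sigma

def snr_red(depth, sigma_w, sigma_r, n_k):
    """Equation 3 (the Pont et al. 2006 approximation): SNR in the
    presence of white (sigma_w) and red (sigma_r) noise components.
    n_k is a sequence giving the number of in-transit data points in
    each sampled transit; n = sum(n_k)."""
    n_k = np.asarray(n_k, dtype=float)
    n = n_k.sum()
    variance = sigma_w**2 / n + sigma_r**2 * np.sum(n_k**2) / n**2
    return depth / np.sqrt(variance)

# Example: depth = 10 mmag, sigma_w = 5 mmag, 60 in-transit points
# spread over three transits; a modest sigma_r = 2 mmag roughly
# halves the SNR relative to the white-noise-only estimate.
print(snr_white(0.010, 0.005, 60))            # ~15.5
print(snr_red(0.010, 0.005, 0.002, [20] * 3)) # ~7.6
```

The example illustrates the point made above: once $\sigma_r$ is non-negligible, accumulating more points within the same transit no longer beats the noise down, because the red term scales with $\sum n_k^2 / n^2$ rather than with 1/n.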
Description of the Algorithm The window function algorithm used in this paper is based on counting data points observed during transit, whose contribution to a virtual detection is dependent on the values of $\sigma_w$ and $\sigma_r$ as defined in Equations 2 and 3, typically measurable or calculable quantities in photometric time series surveys. (We note that, to maximize applicability for astronomical planet transit surveys, we follow the arguments outlined above and in Pont et al. (2006), rather than using the more rigorous treatments employed in the large body of statistical literature devoted to time-series analysis. These treatments include autoregressive moving-average (ARMA) modeling, where events such as eclipses in the presence of red noise are sought with the help of autocorrelation functions at different time lags (Robinson 2005), power spectrum analysis (König & Timmer 1997), or the use of surrogate data sets (Timmer 1998).) User-provided observing cadence, number of nights, and typical length of night are used to generate an observing time line. From the input stellar and planetary radii, we calculate transit depth and duration according to the equations in Seager & Mallén-Ornelas (2003) (except in §3.6, where we explicitly set transit depth and duration), thereby assuming a central transit (i.e., impact parameter b = 0) and zero-length ingress and egress. For each orbital period, a family of light curves is generated for a range of starting phase angles, each with transits of user-defined photometric depths at the appropriate intervals. In the simulations, the number of data points per transit ($n_k$), the number of transits ($N_{tr}$), and the total number of data points within all transits (n) are tracked. It should be noted that an observation has to fit fully within a transit to be counted toward n and $n_k$ (that is, it needs to start after the beginning of the transit and terminate before the end of the transit), making shorter exposure times more favorable for transit detection in this algorithm. For every light curve, the SNR (Equations 1 and 3) is calculated. If, for a given phase angle, the SNR exceeds $\mathrm{SNR_{threshold}}$, a transit is considered "detected". The probability of detection ($P_{detection}$) for a given orbital period is simply the ratio of phase angles for which a transit was detected to the total number of phase angles. Typical observational parameter values assumed in this paper (unless specifically noted) are: a few minutes for the observing cadence, one minute for the exposure time, tens of nights for the observing run length, and a few to ten hours for the typical time of observation spent during one night on the monitored target. Astrophysical parameter values are assumed to be around 1.0 and 0.1 solar radii for the parent star and orbiting planet, respectively, resulting in a transit depth of 0.01 mag. Transit duration depends on period, but typical duty cycles are in the 1% to few percent range. Additionally, we set $\sigma_w$ and $\sigma_r$ to a few millimagnitudes (mmag). The threshold SNR is set to 7.0, based on the arguments in Jenkins et al. (2002) and specifically Pont et al. (2006, 2007), which each use thresholds of 7-9 as acceptable values for reducing false alarms whilst maximizing real detections given a typical transit survey configuration.
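The loop just described (phase grid, in-transit point counting, Equation 3 SNR, minimum-transit criterion) can be sketched in a few lines. The following is our own simplified reimplementation, not the authors' code, and it treats observations as instantaneous samples rather than requiring exposures to fit fully within the transit:

```python
import numpy as np

def detection_probability(period, t_obs, depth, duration, sigma_w, sigma_r,
                          snr_threshold=7.0, min_transits=2, n_phases=500):
    """Fraction of transit starting phases for which the Eq. 3 SNR exceeds
    the threshold and at least min_transits transits are sampled.
    period and duration are in days; t_obs is an array of observation times."""
    detected = 0
    for phase in np.linspace(0.0, period, n_phases, endpoint=False):
        t_rel = np.mod(t_obs - phase, period)   # time since last transit start
        in_transit = t_rel < duration
        if not np.any(in_transit):
            continue
        # group in-transit points by transit number k to obtain the n_k
        transit_index = np.floor((t_obs[in_transit] - phase) / period)
        n_k = np.unique(transit_index, return_counts=True)[1].astype(float)
        if len(n_k) < min_transits:
            continue
        n = n_k.sum()
        variance = sigma_w**2 / n + sigma_r**2 * np.sum(n_k**2) / n**2
        if depth / np.sqrt(variance) >= snr_threshold:
            detected += 1
    return detected / n_phases

# Example time line: 30 consecutive nights, 8 hours per night,
# one observation every 5 minutes (times in days).
night = np.arange(0.0, 8.0 / 24.0, 5.0 / 1440.0)
t_obs = np.concatenate([night + d for d in range(30)])
print(detection_probability(3.0, t_obs, depth=0.010, duration=2.5 / 24.0,
                            sigma_w=0.005, sigma_r=0.002))
```

This simplified version is meant only to make the bookkeeping concrete; among other things, it omits exposure lengths, weather holes, and the ingress/egress geometry of the full algorithm.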
We note that, in contrast to some other window function calculations in the astronomical literature, we only use the SNR criterion to quantify detections, along with an assumed minimum number of sampled transits, and we do not require that, e.g., a full transit be contained in the data (as in, e.g., Mallén-Ornelas et al. 2003; von Braun et al. 2005). We do not account for holes in the observing due to weather, telescope outages, or technical problems. Furthermore, as mentioned in §1, we only calculate the probability of detecting existing primary transits in circular orbits. Finally, we assume that the number of out-of-transit data points sampled is much higher than the number of in-transit data points. White Noise and Red Noise Red noise dominates the noise budget in the bright regime where transit surveys typically operate since red noise is independent of target brightness. We show this effect in §3.1. See Pont et al. (2006) and Irwin et al. (2007) for an in-depth discussion of different noise properties and their calculations. Typical ground-based survey estimates of $\sigma_r$, as defined in §1, are on the order of 2-6 millimagnitudes (mmag) (e.g., Pont et al. 2006; Irwin et al. 2007; Nutzman & Charbonneau 2008). When subjected to detrending algorithms such as TFA (Kovács et al. 2005) or SYSREM (Tamuz et al. 2005), $\sigma_r$ can be reduced to 1-2 mmag. It is worth pointing out that the influence of red noise is much less of a problem for targeted observations such as characterization of known planetary transits (see Gillon et al. 2008, for example). Studies to date (e.g., Aigrain et al. 2008) have shown that the red noise in the space-based CoRoT mission (Baglin et al. 2006) is significantly lower than in ground-based counterparts, due in large part to the "removal of the atmosphere" (Beatty & Gaudi 2008). Thus, any space-based red noise not due to stellar variations is most likely caused by variations in the thermal environment of the spacecraft and detectors. Typical values for $\sigma_r$ in CoRoT light curves are on the order of 0.5 mmag (R. Alonso 2008, private communication; see also Aigrain et al. 2009). Number of Sampled Transits One criterion often used to calculate detection efficiency and related survey yield is the minimum number of sampled transits (i.e., the minimum number of transits during which any data were obtained). An important factor in the success of the widely used BLS algorithm (Kovács et al. 2002) is the initial folding of the data by a test period and the subsequent search for transit-like features in the phased data. Thus, its power is really only realized for data that contain more than one sampled transit. We assume in this publication that the BLS algorithm has become an "industry standard" in the search for planetary transits, and we thus require the existence of at least two transits in the data for a transit detection, except where we explicitly change this criterion (§3.5). It is worth noting that different simulations in the literature require different minimum numbers of transits sampled, such as three for Pont et al. (2006). EFFECTS OF WINDOW FUNCTIONS AND ASTROPHYSICAL PARAMETERS ON TRANSIT DETECTION PROBABILITY Careful consideration of the various strategy aspects involved in planetary transit surveys and a number of astrophysical parameters will have significant effects on the detection efficiency of existing transits (e.g., von Braun et al. 2005; Pepper & Gaudi 2005; Beatty & Gaudi 2008; von Braun & Ciardi 2008; Beatty 2009). This Section quantitatively illustrates these effects under consideration of the assumptions described in §2.
For the sake of clarity, we vary one parameter at a time, leaving all others fixed to values justified in §1 and §2. In particular, §3.1 examines different values for red and white noise, §3.2 looks at various observing run lengths (with a given number of hours of observing per night), whereas §3.3 assumes a number of consecutive observing nights but varies their lengths. §3.4 investigates different observing cadences. In §3.5, we explicitly change the criterion of a minimum of two transits sampled for a detection that we mention in §2.3 to see how requiring a larger number decreases detection efficiency. §3.6 deals with different transit depths and durations. The values of the parameters held constant in the respective calculation are given in the caption of the appropriate figure. The solid (blue) line shows the detection efficiency in the hypothetical case of zero red noise, and the dashed (red) line shows the same for a given $\sigma_r \neq 0$. The corresponding table shows the mean values of $P_{detection}$ for the ranges of orbital periods given in the first column under assumptions of the indicated magnitudes of $\sigma_w$ and $\sigma_r$. Amount of Red Noise and White Noise The contribution of red noise is independent of target brightness (unlike white noise, which is mostly due to photon noise for the brightest targets). Since planet transits are typically detected around the brightest sources in a given data set, red noise will be the dominant source of noise (Pepper & Gaudi 2005; Pont et al. 2006; Beatty & Gaudi 2008). Figures 1 and 2, along with Tables 1 and 2, quantitatively substantiate this statement, illustrating the influences of different amounts of red and white noise for different period ranges. As we mention in §2.2, typical values for the wide-field ground-based transit surveys before detrending are $\sigma_w \sim 5$ mmag and $\sigma_r \sim 2$-6 mmag, which reduces to $\sigma_r \sim 1$-2 mmag after detrending. The difference in $P_{detection}$ in Fig. 1 and Table 1 between $\sigma_r = 1$ mmag and $\sigma_r = 4$ mmag is very significant for longer periods. In addition, Fig. 2 and Table 2 show how small the influence of $\sigma_w$ upon $P_{detection}$ is for no or very little red noise. Thus, the value of minimizing the influences of red noise during observing (even at the expense of increasing $\sigma_w$ if necessary), and of applying detrending algorithms such as SYSREM (Tamuz et al. 2005) or TFA (Kovács et al. 2005) to one's data as part of their reduction, can hardly be overstated. Observing Run Length For any kind of transit survey that has limited access to telescope time, the question of how long to spend on one field will occur at some point during the design of the observing strategy. At what point is it worth switching to a different field to increase the number of targets without overly reducing the probability of detecting existing transits in the data? We provide insight into the answer to this question in Fig. 3 and Table 3. To first order, observing a field for a few nights will yield an almost negligible probability of detection, potentially leading to a waste of telescope time.
Alternatively, it may be wise not to stay on a single field for too long but rather to double the chances of detecting any planetary transits by switching fields and thus increasing the number of monitored stars. It is ultimately a question of the period range one is sampling in a given survey. As Fig. 3 shows for a typical set of parameters, "very hot Jupiters", i.e., planets with periods up to ∼3 days per the definition in Gaudi et al. (2005), can be detected even with a residual presence of red noise and "only" 15 nights (eight hours per night) of monitoring. However, longer period planets (∼6 days and longer) remain elusive (for $\sigma_r \geq 2$ mmag) until the length of the observing run exceeds 30 nights. Length of Night The amount of time for which a given target field can be observed from the ground during one night depends on its celestial coordinates, the location of the telescope, the time of year, and, of course, outages due to weather or technical or other problems. Special cases are discussed below, such as space-based observing (§4.1) or synoptic surveys (§4.2). The length of night can also depend on observing strategy. As an alternative to decreasing the number of nights spent on a single target field to increase the number of monitored stars (§3.2), one may instead choose to split the night up between two or more fields, thereby decreasing the number of hours spent on each one of them. We illustrate the effect of such strategies in Fig. 4 and Table 4, in which we assume basically the same parameters as for Figures 3 and 5 (see §3.4) for purposes of comparison. The situation shown in the bottom right panel can obviously only be achieved at numerically high latitudes on Earth during the respective winter season, or from space, but serves as a comparison to the scenarios encountered in transit surveys conducted from moderate latitudes. As in §3.2, the choice of strategy depends on the range of periods that is probed. One expected and visible effect in Fig. 4 is the decreasing depth of spikes in $P_{detection}$ with longer lengths of night as the diurnal cycle becomes less of a factor in transit detection. As evidenced in Figures 7 and 10, the spikes eventually disappear altogether when observing becomes uninterrupted, as, e.g., from space. Observing Cadence Observing cadence is primarily dependent on telescope and detector characteristics as well as target brightness, with the goals that $\sigma_w$ is minimized, the target remains in the linearity regime of the detector, and the exposure time is not so long as to smear out phase information on any detectable planetary transit. Similar to §3.2 and §3.3, however, the choice of cadence can also be used as an observing strategy parameter to increase the number of monitored stars at the expense of a lower sampling rate per field (by moving back and forth between fields between exposures, for instance). This effect is simulated in Fig. 5, and values for $P_{detection}$
for different period ranges are given in Table 5, which shows that the effect of changing from a cadence of one to several minutes does not greatly affect the calculated detection probability, especially for very small values of $\sigma_r$. It may therefore be worth considering changing between fields every one or a few exposures to increase target number. The effects of red noise produced by such an observing strategy, however, such as flatfielding errors due to the fact that the stars may not be located in exactly the same position in the field as before, are dependent on aspects such as the pointing stability of the telescope used and would need to be explored for the respective observing setup. Minimum Number of Sampled Transits As explained in §2.3, we require a minimum of two transits sampled to constitute a detection. Note, however, that other predictions of survey yields in the literature use different numbers for different reasons, e.g., to be able to constrain period, which, with only two transits detected, would be subject to significant aliasing uncertainties, depending on the time elapsed between the two transits. Fig. 6 and Table 6 show how the detection probability varies as a function of different minimum numbers of sampled transits. We note that, in this work, only one observation taken during a transit is enough to count this transit as sampled, but the detection is still a function of the transit SNR, as explained in §2, as well as the number of sampled transits. It is interesting to observe that the detection probability for $\sigma_r = 2$ mmag approaches the $\sigma_r = 0$ case for a higher minimum number of sampled transits, showing how the "white noise only" case thus becomes increasingly equivalent to the realistic case with red noise present (Fig. 6 and Table 6 produce identical results for $\sigma_r = 0$ and $\sigma_r = 2$ mmag for five or more transits sampled). In the absence of knowledge of $\sigma_r$, requiring at least three or four detected transits in the data could therefore serve as an alternative for calculating a conservative estimate of survey yield. Detections Based on Single Transits We now examine the case where it is deemed possible to detect a transit based on a minimum of one sampled transit. For this scenario, the detection probability may exhibit non-intuitive behavior for very short and very long periods. To illustrate these points, we assume 5 "nights" of uninterrupted 24-hour (e.g., space-based or polar ground-based) observing (see Fig. 7). Short periods imply short transit durations. For a given cadence, there are few points per sampled transit. At the same time, short periods imply many transits sampled for a given observing run length. With increasing orbital period, the number of sampled transits decreases as the number of data points per transit increases, though not at the same rate. This effect can be seen for the short period range in Fig. 7, and it is most intuitively understood by considering the case for $\sigma_r = 0$. Since $\sigma_w$ = transit depth = 10 mmag (Fig. 7), a detection simply requires $n \geq \mathrm{SNR_{threshold}^2} = 49$ data points observed during transit (Eq. 1), even if they are all located in a single transit.
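The point-counting argument just made, together with the 2.5-day and 24-day cases worked through in the next paragraph, can be checked directly. Below is a quick verification of ours; the 161- and 343-minute transit durations and the 7-minute cadence are the values quoted in that discussion:

```python
import math

snr_threshold = 7.0
cadence_min = 7.0                    # minutes between data points
n_required = int(snr_threshold**2)   # 49 points needed when depth == sigma_w

# Period of 2.5 days: transit length of about 161 minutes.
pts_per_transit = math.floor(161 / cadence_min)               # 23 points
print(2 * pts_per_transit, 2 * pts_per_transit >= n_required)  # 46 False

# Near a 24-day period the transit lasts about 343 minutes:
# a single sampled transit now holds enough points on its own.
print(math.floor(343 / cadence_min) >= n_required)             # True (49 >= 49)
```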
$P_{detection}$ goes to zero at a period of 2.5 days, which, for the stellar and planetary masses and radii given in the caption of Fig. 7, implies a transit length of around 161 minutes. For a 7-minute observing cadence, 23 data points can thus be collected during a single transit. For a 2.5-day period, one would expect to have two transits present in a 5-day observing run (24 hours of observing per "night"), but two transits would only contain 46 data points, and not the 49 required for $\mathrm{SNR_{transit}} > \mathrm{SNR_{threshold}}$. $P_{detection}$ in Fig. 7 therefore goes to zero at that point. For periods longer than 3 days, however, the expected number of data points per transit is 24.5, and thus, sampling two transits somewhere in one's data is sufficient for a detection, resulting in an increase of $P_{detection}$ with period as it approaches 3 days. An inverse effect can be observed at periods longer than the duration of the observing run and is again most easily explained by considering the $\sigma_r = 0$ case. As expected, $P_{detection}$ drops to zero for periods longer than the observing run, since, although it is possible to sample one transit during the observing run, the length of this one transit is too short for enough data points (49) to be sampled during its duration (and it is obviously impossible to sample more than one transit, as indicated by the dotted line). This situation, however, changes as the period approaches 24 days (transit duration = 343 minutes). From that point on, the transit duration will be long enough to fit 49 data points at a 7-minute cadence. Therefore, if a single transit is observed, it is long enough to gather enough data to fulfill the threshold SNR criterion. Ultimately, the probability that any transit occurs at all during the observing run, and thus the detection efficiency, approaches zero. Transit Depth and Duration Throughout the paper, we calculate the transit depth and duration according to the equations in Seager & Mallén-Ornelas (2003), thereby assuming a central transit, i.e., i = 90° and b = 0, as well as solar and Jupiter values for stellar and planetary radii and masses. In this subsection, we set those parameters to certain values that may not be consistent with physical laws, but are meant to illustrate the behavior of the detection probability as a function of transit duration and depth. Transit depth is primarily a function of stellar and planetary radius, which we set to solar and Jupiter values before, resulting in a transit depth of 0.01 mag or 1% of the relative flux. In order to show how much more challenging the detection of smaller planets is, or conversely, how much easier the detection of larger planets is for given observing parameters, we vary transit depth in Fig. 8 and show mean $P_{detection}$ values in Table 7. For a parent star with $R = R_\odot$, the panels represent planets of 0.3 $R_{Jupiter}$ (top left), 0.7 $R_{Jupiter}$ (top right), 1.0 $R_{Jupiter}$ (bottom left), and 1.4 $R_{Jupiter}$ (bottom right). Note that an Earth-sized planet would have a radius of around 0.1 $R_{Jupiter}$ and produce an eclipse around a solar-sized star with a depth of 10% of that assumed in the top left panel, i.e., 0.0001 mag. It is worth pointing out how significant the difference in detection probability is for shallow transits between $\sigma_r = 0$ and $\sigma_r \neq 0$, substantiating the claim that space-based observing is necessary to find very small planets (see also §4.1), as recently evidenced by the discovery of CoRoT-7b (Rouan et al.,
in preparation; Bouchy et al., in preparation). Transit duration is a function of orbital period, i, and stellar and planetary masses and radii. Rather than following the physical dependence on period, we set transit duration to fixed values of 1, 2, 5, and 10 hours in the four panels of Fig. 9 (see also Table 8), thereby still assuming values for the parameters mentioned in the figure caption, including a transit depth of 0.01 mag. While longer transits are obviously easier to detect, the increase of detectability with transit duration is slow but sensitively dependent on $\sigma_r$. [Fig. 7 caption: The behavior of the detection probability as a function of period for a short, white-noise-dominated monitoring campaign. $\sigma_w$ is assumed to be 10 mmag. $\sigma_r = 0$ for the solid (blue) and dotted (black) lines; $\sigma_r = 0.5$ mmag for the dashed (red) line. The solid (blue) and dashed (red) lines no longer assume a minimum number of sampled transits, whereas the dotted (black) line requires a minimum of two transits to be present in the data. Additional parameters: $\mathrm{SNR_{threshold}} = 7.0$, a 7-minute observing cadence, continuous observing (i.e., 24 hours of observing per "night"), 5 consecutive nights, $R_{star} = R_\odot$, $R_{planet} = 0.1 R_\odot$, $M_{star} = M_\odot$, $M_{planet} = M_{Jupiter}$. The inset is a zoomed display of the behavior of $P_{detection}$ for periods between 1 and 5 days. See §3.5.2 for discussion.] [Fig. 8 caption (partial): $\sigma_w$ is assumed to be 5 mmag. The solid (blue) line indicates the detection efficiency for $\sigma_r = 0$, and the dashed (red) line for $\sigma_r = 2$ mmag. Additional parameters: $\mathrm{SNR_{threshold}} = 7.0$, a 5-minute observing cadence, 8 hours of observing per night, 60 consecutive nights, transit duration calculated using solar radius and mass for the star and Jupiter values for planetary mass and radius. See Table 7 and §3.6.] APPLICATION AND EXAMPLES The examples used in §3 to illustrate the influences of various observing strategy and astrophysical parameters on the transit detection probability resemble observing campaigns typical of the very successful wide-field transit surveys such as HAT, TrES, XO, SWASP, etc. (e.g., Bakos et al. 2007; O'Donovan et al. 2006; McCullough et al. 2006; Pollacco et al. 2006). In contrast, this section shows examples and consequences of observational window functions for fundamentally different setups of monitoring projects. Space-Based Surveys Compared to ground-based counterparts, space-based transit surveys such as CoRoT (Baglin et al. 2006) have the two principal advantages that (a) they are not subject to interruptions in observing due to the diurnal cycle (see §3.3), and that (b) they do not need to deal with the Earth's atmosphere (see §3.1). The latter aspect in particular makes them currently the only realistic option for detecting Earth-sized planets around Sun-like stars, which is one of the explicit goals of the recently launched Kepler Mission (Borucki et al. 2009).
Fig. 10 shows the detection probability for simulated space-based surveys of various lengths, loosely modeled after the long and short observing runs by the CoRoT satellite, thereby assuming somewhat generic parameters for survey strategy and photometric precision (see caption). The solid and dashed lines respectively indicate the detection probabilities for a Jupiter-sized planet and for the recently discovered exoplanet CoRoT-7b around its parent star, a K0 dwarf (transit depth ∼ 0.5 mmag; period ∼ 0.9 days). We note that our simulations of an Earth-sized planet around a solar-type star produce a detection probability of zero for all periods. Synoptic Surveys Synoptic surveys typically provide high-quality photometric time-series data of very low cadence but over extended periods of time. Thus, they are not primarily designed to find planetary transits but nevertheless present data sets that are worth probing for their existence (see, for instance, Plavchan et al. 2008). In fact, several transiting planets have been discovered a posteriori in the Hipparcos archives, such as HD 209458b (Robichon & Arenou 2000) and HD 189733b (Hébrard & Lecavelier Des Etangs 2006). The panels in Fig. 11 are produced by observational window functions of synoptic surveys loosely based on the (future, ground-based) Large Synoptic Survey Telescope (LSST; Ivezic et al. 2008) in the top panel and the (space-based) Hipparcos mission (Perryman & ESA 1997) in the bottom panel. For both panels, we require $\mathrm{SNR_{threshold}} = 7.0$ and at least two sampled transits, and assume solar and Jupiter values for stellar and planetary mass and radius. [Fig. 9 caption (partial): $\sigma_w$ is assumed to be 5 mmag. The solid (blue) line indicates the detection efficiency for $\sigma_r = 0$, and the dashed (red) line for $\sigma_r = 2$ mmag. Additional parameters: $\mathrm{SNR_{threshold}} = 7.0$, a 5-minute observing cadence, 8 hours of observing per night, 60 consecutive nights, transit depth = 0.01 mag. See Table 8 and §3.6.] The principal differences between the two window functions in Fig. 11 are due to the different assumptions in $\sigma_w$ (5 mmag for the top panel; 1.5 mmag for the bottom panel) and $\sigma_r$ (1 mmag for the top panel; 0.5 mmag for the bottom panel), and the different number of data points obtained over different lengths of time. For the top panel, we assumed a cadence of a single 30-second exposure image every three nights, accumulated over around eight years, such that the total number of images is 1,000. The mean values for $P_{detection}$ over various period ranges are as follows:
• 0-10 days: $\langle P_{detection} \rangle$ = 0.555;
• 10-50 days: $\langle P_{detection} \rangle$ = 0.239;
• 50-100 days: $\langle P_{detection} \rangle$ = 0.025;
• 100-200 days: $\langle P_{detection} \rangle$ = 0.007.
For the bottom panel, we chose a cadence based on the actual observations of a Hipparcos star with 190 epochs, downloaded from the NASA Star and Exoplanet Database (http://nsted.ipac.caltech.edu). Basically, the 190 observations were obtained over three years in groups of several images every few tens of days. The mean values for $P_{detection}$ over various period ranges are as follows:
• 0-10 days: $\langle P_{detection} \rangle$ = 0.746;
• 10-50 days: $\langle P_{detection} \rangle$ = 0.156;
• 50-100 days: $\langle P_{detection} \rangle$ = 0.034;
• 100-200 days: $\langle P_{detection} \rangle$ = 0.012.
Finally, it should be noted that we ran simulations equivalent to the ones in Fig. 11, but assuming an Earth-sized planet instead of a Jupiter-sized one. Both detection probabilities were identically zero for all periods.
DISCUSSION AND CONCLUSION This work quantitatively illustrates the influence of a number of observing strategy and astrophysical parameters on the detection efficiency of existing planetary transits as a function of period, under the general assumptions listed in §2.1 and the parameters given in the various figure captions. The influences of red and white noise upon this detection efficiency are first examined in their own right, and then included in every simulation of the aforementioned parameters. Red noise is confirmed to be the dominant challenge to overcome in the search for planetary transits, as seen in the discussion in §3.1, Figures 1 and 2, and Tables 1 and 2. All parameters being equal, a factor of 4 increase in $\sigma_r$ produces a much more significant reduction in $P_{detection}$ than a factor of 10 increase in $\sigma_w$. In particular, we explicitly address controllable strategy parameters such as the number of nights for which one may choose to monitor a given target field, the number of hours per night one may stay on this field, and the observational cadence with which the field is monitored. We furthermore examine the influence of astrophysical parameters on detection efficiency, such as transit depth and duration. Finally, we look at parameters typically involved in the calculation of the projected yield of a given transit survey, such as the minimum number of transits required for detections, and illustrate two non-intuitive effects that occur when the criterion of a minimum number of sampled transits is abandoned and detection is based only on SNR. Along with visualization of the effects caused by the various parameters in the figures, we provide quantitative means of comparison for different period ranges in the accompanying tables. A consideration that did not factor into the calculation of $P_{detection}$ is the fraction of data points outside of transit (we assumed that this fraction is much higher than the number of points sampled in transit). In order to detect a transit in one's data, one needs to have both brightness levels well measured such that the difference between them becomes significant enough to enable a detection. For instance, an observing run that only obtains data during transit would, by the metrics used in this paper, detect the transit, provided the SNR is high enough. In real life, however, the data would appear perfectly flat, and no sensible algorithm would flag the signal as a possible planetary transit. Obviously, sparse cadences are more susceptible to this admittedly pathological pitfall than well-sampled ones. One scenario where one may encounter a problem like this would be in the attempt to detect transits among long-period planets discovered by radial velocity work, which would tend to exhibit long transit durations. More generally, we caution that the detection of an existing transit in "real life" is dependent on a large number of properties of the data reduction and analysis pipeline and transit detection methods, including human experience and human error potential, which cannot possibly be parametrized as an ensemble or included in any code. Therefore, the significance of our results and predictions, although quantitative, is necessarily subject to an unknown fudge or scaling factor. As pointed out by Beatty & Gaudi (2008), the non-uniformity of the definitions of detection criteria causes the largest uncertainty in transit survey yield predictions.
Nevertheless, we specifically allowed for parameters that are typically calculable in transit survey designs to be used as input to the code in order to make it as practically applicable as possible. Consequently, even in the presence of the unknown fudge factor mentioned above, comparisons between different observing strategies are quantitatively possible in order to optimize survey yield. For instance, under some circumstances it appears much more favorable to increase the observational cadence (i.e., the interval between successive exposures of a given field) to add a second monitoring field to one's project than to switch fields in the middle of the night or in the middle of the observing run (see §§3.2, 3.3, and 3.4), provided it is possible to repeatedly achieve very good pointing of the telescope to reduce the additional red noise component that might otherwise arise from flatfielding errors. Thus, it may well be advisable for transit surveys to consider this trade-off between cadence and fields monitored, as it can lead to a dramatic change in the predicted planet yield of the survey. Furthermore, the examination of the effects of red and white noise in §3.1 and throughout the paper gives quantitative insight into what size of planet one may expect to realistically detect in one's data, given observing strategy parameters. In general, the depth of transit one may hope to detect in one's data needs to be larger than the magnitude of $\sigma_r$, as seen in §3.6 and evidenced by Figures 8 and 10. This is confirmed very well in Fig. 10, showing that the detection of CoRoT-7b around its parent star, given their sizes and orbital period, is right at the limits of the CoRoT satellite for a single long run (i.e., without combining data from several runs). The code used for all calculations in this paper is available from KvB upon request. We thank Lee for many helpful discussions about window functions, and F. Pont for invaluable assistance with red noise considerations. We furthermore express our thanks to R. Alonso, J. Pepper, and B. S. Gaudi for sharing insights into their ground-based and space-based data with respect to red noise characteristics and decorrelation timescales. Finally, we extend our gratitude to the anonymous referee for comments, encouragement, and a very insightful suggestion that noticeably improved the quality of the manuscript, as well as the scientific editor for pointing out a number of shortcomings with respect to mentioning and giving credit to the much more rigorous treatment of red noise in the mathematics and statistics literature.
Visualization Simulation of Branch Fractures Based on Internal Structure Reconstruction: This paper presents a visualization algorithm for wood fracture simulation based on wood science and wood internal structure reconstruction. The algorithm can simulate a reasonable and realistic wood fracture effect. First, the 3D point-cloud data of the bark structure are obtained using a laser scanner, and the cross-section of the branch is obtained by voxelization of the surface mesh model. Then, the outer contour of the cross-section is shrunk inward to reconstruct the annual rings and wood fiber bundles, and reasonable internal structures of branch 3D models are generated. The internal structure consists of a hierarchical model composed of several ring-like annual rings. Introduction Specific fields such as film special effects, real-time gaming, and geometric modeling often require highly detailed models to create realistic scenes. Woody plants are an essential element of nature, and their 3D simulation plays a significant role in constructing natural scenes. In real-life situations, the shape of a tree can vary, and almost every tree has broken branches. Different factors, such as the tree species and environmental conditions, can affect branch breaking in different ways. Therefore, it is crucial to reconstruct branch fractures realistically and effectively to better simulate real-life scenes. Natural disasters such as wind, rain, hail, blizzards, and tornadoes can also occur in the simulation of natural scenes. These disasters can cause damage to trees, resulting in broken branches. For instance, strong winds can blow branches down, heavy snow on branches can break them, and tornadoes can uproot entire trees. The cracks in the broken branches caused by different natural disasters have varying effects. To create a more realistic 3D natural scene, it is essential to simulate the natural phenomenon of these branch fractures. Tree branch fracture is caused by mechanical factors and involves both wood science and physics. Therefore, simulating the generation of fractured tree branches can be an effective way to study wood's resistance to external forces. Additionally, visualizing the fracture of tree branches can provide an intuitive representation of the tree's structural failure, which can aid in the development of strategies for garden protection. Wood fracture simulation is a crucial aspect of natural landscape simulation. Wood is a complex biological organism, and the internal structure of branches varies among different tree species, making it difficult to simulate the fracture of wood and branches using traditional methods. Fractured branch models collected through 3D scanning instruments often have many voids due to the unevenness of the wood spurs at the fracture site. Using random noise to generate the fracture surface of the wood leads to a lack of realism in the simulation, as it oversimplifies the complex properties of wood. Therefore, there is a pressing need for a 3D geometric modeling technique for plants that can simulate detailed and realistic branch fracture models, allowing for a better analysis of the internal morphological structure and function of woody plants.
This paper introduces a novel approach for reconstructing the internal structure of tree rings and wood fiber bundles using the external contours of branches. The primary objective of this research is to develop a method for generating precise and evenly distributed cross-sectional slices and fragment models of tree rings, which can significantly enhance the creation of detailed and lifelike tree models. Moreover, the resulting model of fractured branches is particularly valuable for simulating the response of trees to external forces and evaluating their mechanical characteristics. Related Work In current graphics research, there is a focus on the dynamic simulation of trees and the development of material fracture algorithms. However, there are few simulation algorithms that are effective in capturing the unique properties of wood and simulating its fracture phenomenon. Furthermore, there are limited direct simulations of tree branch fracture. Wood is a complex biological organism that is porous, anisotropic, and viscoelastic, possessing exceptional mechanical properties. Previous research on tree fracture has mainly focused on two areas: computer graphics and wood fracture mechanics. In computer graphics, the focus has been on simulating trees in a realistic manner from a visualization perspective, including the modeling of trees and their interactions with various environmental factors such as external forces, wind, and rain. Research on Tree Modeling Algorithms In the early days of tree modeling research, procedural modeling methods were commonly used. One example is the work by Fernández et al. [1], who studied the function-structure model of Pinus radiata and were able to reproduce the development and growth of the species using mechanical formulas. Ancelin et al. [2] proposed a numerical model designed to simulate the biomechanical behavior of growing trees. The model is based on the transfer matrix method and is adjusted incrementally to calculate the evolution of trunk biomechanics during growth. Weber et al. [3] proposed an algorithm to create a tree model. The algorithm can adjust morphological characteristics such as the rotation angle interval between branches, the length of the sub-branch, and the number of branches through multiple parameters. Initially, a learning framework was used to recover the 3D shape, camera, and texture of an object from a single image [4]. Then, Hu et al. [5] proposed a self-supervised mesh reconstruction (SMR) method to enhance the 3D mesh attribute learning process. By requiring only contour mask annotations, SMR can be trained in an end-to-end fashion and can generalize to reconstruct natural objects. With the development of 3D scanning technology, many 3D point-cloud-based tree modeling methods have emerged. These methods use point clouds to generate 3D models that are more consistent with real trees. For example, Livny et al. [6] proposed a method to automatically generate realistic tree models from scanned 3D point clouds of trees. This work is based on pre-generated classified tree information and is able to handle large-area scans and generate models automatically. Liu et al. [7] proposed TreePartNet, a neural network aimed at reconstructing tree geometry from point clouds obtained by scanning real trees. In addition to research on modeling algorithms for whole trees, there are many studies on tree growth animation. For example, Kratt et al.
[8] studied the animation simulation of wood growth according to the principle of tree cambium growth, including an algorithm for generating bark folds. Xiao and Chen [9] developed a model of the plant leaf drying phenomenon. The model is driven by the differential contraction of leaf tissue, and it also simulates the leaf vein system on plant leaves. Research on the Interactive Simulation Algorithm of Trees and Environment There are several geometric representations of plant models. Different representations have their own advantages and disadvantages in simulating the interaction of trees with environmental factors. Quigley et al. [10] proposed an interactive real-time animation method. This method represents the tree model as multiple articulated rigid bodies, setting the stiffness of different branch node rigid bodies to avoid an animation effect of excessive softness and bending when subjected to external forces. The tree animation, including wind and collision, is processed in real time on a tree scene with tens of thousands of branches or a wooded scene composed of many trees with complex branch structures. Diener et al. [11] proposed a method to simulate in real time a complex scene of tens of thousands of trees drifting with the wind under a user-controlled wind field, where the branch nodes calculate the acceleration, velocity, and displacement of the object on the basis of the applied external forces, mass matrix, damping matrix, and stiffness matrix. Yang et al. [12] used an extended trigonal spring model to represent the tree structure, using a theory of non-rigid tree leaf surfaces with hydrophilic properties to simulate branch and raindrop interactions. Xie et al. [13] used a leaf model based on a mass-spring representation to simulate the effect of hail striking and tearing leaves. Pirk et al. [14] proposed a biologically sound approach to simulate tree combustion. Their tree combustion model can establish a link between the description of fine mechanisms (pyrolysis of wood, mass loss of logs, insulation effect of charcoal, and temperature change due to water evaporation) and the description of macroscopic effects (the forest fire spread phenomenon), which can realistically simulate the effect of tree combustion. Jernej et al. [15] decomposed the model volume mesh into several subdomains using the finite element method. A simplified deformable model is constructed for each subdomain to achieve real-time simulation of deformed objects. Li et al. [16] proposed a simulation method based on the theory of material mechanics that can set stable anisotropic material parameters and can accurately simulate the deformation effect of plant twisting and bending. Bohan et al. [17] proposed a method, based on the power-law relationships between branch length and diameter and between length and natural vibration frequency, to automate the stiffness setting of tree models according to plant biomechanics, which can simulate the dynamic effects of trees with richer layers. Fracture Modeling and Animation in Graphics For the simulation of material fracture, materials are usually considered as ideal brittle or isotropic materials. James et al. [18] analyzed the stress tensor calculated using a finite element model. They modeled where the cracks should start on the model volume mesh and in which direction they should propagate. Pfaff et al.
[19] proposed a method for adaptive crack propagation on a thin sheet model. This method dynamically reconstructs a high-quality triangular mesh and adaptively maintains the details of the simulation. Chen et al. [20] proposed a user-friendly method for designing and controlling the effect of fracture surfaces. The method refines a low-resolution fracture surface mesh into a high-resolution, detail-rich fracture surface according to a user-set material strength field. Desbenoit et al. [21] proposed an interactive approach to model cracks by mapping editable 2D curve patterns onto a 3D model. Hädrich et al. [22] proposed a new method to simulate wood as an anisotropic material. This method uses the shape-matching method as the basis for modeling the isotropic properties of wood, and a fiber model based on the Cosserat rod theory is used to generate fracture. Image Detection of Wood Annual Rings The annual rings of wood can reflect important information such as the age of the tree and the growth environment of the tree. Tree rings not only are a dating tool, but also provide information on environmental factors during the generation of tree rings, serving as a proxy for environmental changes. The width of tree rings is considered a function of the total number of cells and the radial size of cells, while cell growth rate is a function of other climatic factors such as temperature, soil moisture, and light conditions. It is this functional relationship that transforms the statistical correlation between tree ring growth and external climate factors into a causal relationship [23,24]. Existing work has processed wood cross-section images through image enhancement [25], image processing [26], edge detection [27], and other methods, obtaining information such as the number of annual rings by detecting annual rings. Previous methods approximated the branch fracture surface as a smooth elliptical surface, and they lacked certain rationality and realism. The effect of the branch fracture surface depends on the material difference of different species of wood and the material strength of the internal annual structure of the branch. In this paper, a better model of the branch fracture surface is obtained by reconstructing the internal structure of branches. Materials and Methods As shown in Figure 1, the result of parametric modeling [28] is a cylinder with a variable-height branch fracture surface. The cross-section is still a circle, uniformly divided into N_ring circles and N_sec sectors. However, the shapes of real branches and the 3D branch models used in the animation are different. According to actual observations and knowledge of wood science, the internal structure of tree branches has irregularly shaped annual rings with uneven width, and real wood fiber bundles are also irregular thin rectangles. Therefore, in order to generate a more realistic branch fracture surface that meets the animation requirements, a reasonable and effective method of mapping the fracture surface generated on the regular cylinder to the 3D branch model is needed. As shown in Figure 2, this paper obtains real fractured branches, then further generates bark point-cloud data and internal voxelized data, and finally generates a complete broken tree model through mapping.
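For reference, the uniform division of the parametric cross-section into N_ring circles and N_sec sectors can be written down in a few lines. The following is an illustrative sketch of ours, not the authors' code; the function and parameter names are hypothetical:

```python
import numpy as np

def cross_section_grid(radius, n_ring, n_sec):
    """Vertices of a circular cross-section uniformly divided into
    n_ring concentric rings and n_sec sectors, as in the parametric
    cylinder model described above. Returns an (n_ring, n_sec, 2) array
    of (x, y) coordinates."""
    r = np.linspace(radius / n_ring, radius, n_ring)            # ring radii
    theta = np.linspace(0.0, 2.0 * np.pi, n_sec, endpoint=False)  # sector angles
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    return np.stack([rr * np.cos(tt), rr * np.sin(tt)], axis=-1)

pts = cross_section_grid(1.0, n_ring=8, n_sec=16)
print(pts.shape)  # (8, 16, 2)
```

Such a regular grid is exactly what the mapping step must deform: the concentric circles become the irregular annual rings, and the sectors become the irregular wood fiber bundles of the real branch.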
Branch Voxelization Model According to related research in wood science and botany, the structure of the stem of a woody plant branch is composed of four parts, from outside to inside: bark, cambium, essential part, and pith [29][30][31]. The voxelization method converts the geometric representation of an object (vertices and facets) into the voxel representation that is closest to the object at a given resolution N. The voxelization method is used to convert the branch surface mesh into volume data. The spatial voxels used to represent a 3D model are similar to the two-dimensional pixels used to represent an image; voxelization can be understood as an extension from a two-dimensional pixel unit to a three-dimensional cube unit. The internal distribution and arrangement of trees are similar to the arrangement of plant cells, and the voxelized branch model yields a branch shape composed of many voxels, which is similar to the arrangement of cells inside the branch. By arranging and combining voxels to form the various parts of the woody branch structure, changes in the shape and position of different parts can be better realized. The outermost voxels can form the shape of annual rings on the branch fracture surface, and they can be used to deform and control the shape of the fractured part and to generate different fracture surfaces.
In order to obtain the contours of each layer of the 3D branch model, the branch model needs to be divided into multiple segments in the axial direction. The voxelization method [32,33] divides the model space into N × N × N grid cells and sets each cell to 0 or 1 depending on whether it is covered by the model. The method eventually generates volume data of size N × N × N consisting of 0s and 1s, where 0 denotes that the voxel is outside the model and 1 denotes that it is inside. Each voxel represents a particle.

In order to obtain a clearer contour map of the branch cross-section, the 3D model of the branch should be voxelized at a high resolution N. When N = 512, volume data with 134,217,728 values are generated. However, since the axial length of the branch model is much larger than its cross-sectional width and height, voxelizing the whole branch directly cannot produce high-quality volume data even at a resolution of 512. Figure 3 shows a part of the voxelized branch data: the cross-section of the branch consists of only a few hundred voxels, and each cross-section is only a few tens of voxels wide and high. Therefore, to obtain a clear cross-section, the long branch model should be divided into N_model segments of equal axial length before voxelization, with each segment slightly longer than the cross-sectional width of the branch model.
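The occupancy-grid construction described above is straightforward to sketch. The following minimal Python/NumPy example (not the authors' code) voxelizes an analytic cylinder standing in for a branch mesh, since a real mesh would need a point-in-mesh query, and shows the axial segmentation rule; the function names and the margin factor are illustrative assumptions.

```python
import numpy as np

def voxelize_cylinder(radius, length, N=64):
    """Occupancy-grid voxelization: a cell is 1 when its center lies
    inside the model, 0 otherwise, giving an N x N x N 0/1 volume."""
    xs = np.linspace(-radius, radius, N)
    ys = np.linspace(-radius, radius, N)
    zs = np.linspace(0.0, length, N)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    # Inside test for a cylinder: distance from the z-axis below radius.
    return (X**2 + Y**2 <= radius**2).astype(np.uint8)

def segment_bounds(total_length, width, margin=1.2):
    """Axially segment a long branch before voxelizing, as the text
    suggests: each segment only slightly longer than the cross-sectional
    width, so the fixed N^3 budget yields clear cross-sections."""
    seg_len = width * margin
    n_seg = int(np.ceil(total_length / seg_len))
    return [(i * seg_len, min((i + 1) * seg_len, total_length))
            for i in range(n_seg)]

vol = voxelize_cylinder(radius=2.25, length=5.0, N=64)
print(vol.shape, int(vol[:, :, 32].sum()), "voxels in one cross-section")
print(segment_bounds(total_length=16.8, width=4.5))
```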
Branch Outer Bark Structure Simulation

Bark is an important part of woody plants. It is located in the outermost part of the plant and is in direct contact with the environment. Influenced by different environmental factors, the structure and shape of the bark differ greatly from the internal structure of the branch. By acquiring point-cloud data from real fractured tree branches, a detailed and complete outer bark structure of the fractured branch can be obtained, resulting in a more realistic fractured branch model. Using the SCANTECH PRINCE 775 dual-color handheld laser 3D scanner, the fractured tree branches were placed on a platform and scanned through 360° to obtain point-cloud data of the external contour of the branches. The point-cloud data were used to simulate the bark structure, which is more consistent with the real bark shape. The obtained external point-cloud data were preprocessed: noisy points and free-floating points were removed, and the point cloud was simplified to generate more accurate data, as shown in Figure 4a. The average density of this bark point-cloud data was calculated as 0.1367 points per cubic meter. The point-cloud data at the fracture surface were removed, only the bark data were retained, and the mesh structure was reconstructed from the point-cloud data, as shown in Figure 4b. The bark point-cloud data are denoted as p.
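As a hedged illustration of this preprocessing pipeline (outlier removal, simplification, mesh reconstruction), the following sketch uses the open-source Open3D library rather than the authors' toolchain; the file names and all parameter values are assumptions, and the manual removal of fracture-surface points is omitted.

```python
import open3d as o3d

# Hypothetical input file; the paper's scan has ~101,000 points.
pcd = o3d.io.read_point_cloud("fractured_branch_scan.ply")

# Remove noisy and free-floating points with a statistical outlier filter.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Simplify the cloud to a more manageable density.
pcd = pcd.voxel_down_sample(voxel_size=0.5)

# Surface normals are required for Poisson surface reconstruction.
pcd.estimate_normals()

# Reconstruct a triangle mesh from the retained bark points.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("bark_mesh.ply", mesh)
```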
Branch Cross-Sectional Annual Ring Simulation

Taking the resolution N = 512, after voxelizing the branch model, 512 × N_model cross-sections of the branch model were obtained. The length and width of each cross-sectional map were 512. Figure 5a shows the extracted outer contour drawing: black indicates the exterior of the branch, and white indicates the interior.
To obtain ring-shaped contours such as growth rings, a method similar to the erosion algorithm in image processing was used [34,35]. Erosion is performed on the white part of the black-and-white cross-section; each time the white region is eroded, the contour shrinks. By recording each circle of eroded white pixels, the shape of one annual ring is obtained. Iterative erosion is performed until all white pixels in the cross-section are set to black, and the annual rings are obtained from the outside to the inside.

The erosion method uses a filter kernel of size K to traverse all pixels in the cross-sectional view; the filter kernel covers K × K pixels. When the pixel at the center of the filter kernel is white and at least M_K of the pixels currently covered by the kernel are black, the central white pixel is marked and stored. After traversing all pixels, the stored white pixels are changed to the specified color. For example, when the kernel size is K = 3 and the user-set parameter is M_K = 1, a white pixel is stored and set to a uniform color in the new erosion result if any of its eight adjacent pixels (top, bottom, left, right, and the four diagonals) is black.
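A minimal sketch of this erosion procedure is given below, assuming a binary NumPy image with white = 1 for the branch interior; the grid size and kernel schedule are illustrative, not the paper's values.

```python
import numpy as np

def erode_ring(img, K, M_K=1):
    """One erosion pass: mark interior (1) pixels whose K x K
    neighborhood contains at least M_K exterior (0) pixels."""
    pad = K // 2
    padded = np.pad(img, pad, constant_values=0)
    H, W = img.shape
    ring = np.zeros_like(img, dtype=bool)
    for y in range(H):
        for x in range(W):
            if img[y, x] == 1:
                win = padded[y:y + K, x:x + K]  # window centered on (y, x)
                if (win == 0).sum() >= M_K:
                    ring[y, x] = True
    return ring

def extract_rings(img, kernel_schedule):
    """Peel the cross-section from outside to inside; each pass yields
    one contour, labeled by its ring index (odd = latewood, even =
    earlywood in the paper's convention)."""
    labels = np.zeros(img.shape, dtype=int)
    img = img.copy()
    idx = 0
    while img.any():
        K = kernel_schedule[idx % len(kernel_schedule)]
        ring = erode_ring(img, K)
        if not ring.any():            # guard against stalling
            ring = img.astype(bool)
        idx += 1
        labels[ring] = idx
        img[ring] = 0
    return labels

# Toy cross-section: a filled disk standing in for the branch interior.
N = 96
yy, xx = np.mgrid[:N, :N]
disk = ((yy - N / 2) ** 2 + (xx - N / 2) ** 2 <= (N / 2 - 2) ** 2).astype(np.uint8)

# Alternating kernel sizes give narrow outer and wider inner contours.
labels = extract_rings(disk, kernel_schedule=[3, 5, 3, 7, 3, 9])
print("number of contours:", labels.max())
```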
According to wood science, annual rings can be divided into earlywood and latewood, with earlywood having a light color and latewood a dark color. The pixel color of odd-numbered contours was set to dark brown and that of even-numbered contours to light brown; the dark contours were considered latewood, and the adjacent lighter contours on the inner side earlywood. A dark outer contour and its adjacent light inner contour together form one annual ring, and the cross-sectional distribution of annual rings based on the outer contour of the branch can be obtained as shown in Figure 5b. First, a filter kernel with K = 1 is used to erode the external contour map of the tree cross-section, and the stored white pixel values are set to dark colors with RGB values of (30, 92, 186). Then, a filter kernel with K = 3 is used for the erosion traversal, the stored white pixel values are set to light colors with RGB values of (184, 215, 248), and kernels of sizes 1 and 3 are applied alternately until no white pixels remain in the external contour map of the tree cross-section.

The width of real annual rings reflects the environmental conditions of each year. When conditions such as temperature, moisture, and sunlight are favorable for tree growth in a given year, the annual ring produced in that year is thicker, and vice versa. Therefore, the widths of the annual rings on a wood cross-section vary; they do not simply narrow from the inside to the outside, and there is no obvious pattern. As shown in Figure 6, a cross-sectional map of size 256 × 256 was processed to obtain a more realistic simulation of the annual rings by suitably choosing the size of the filter kernel used in the erosion process. First, we alternated filter kernels of sizes 3 and 5 to generate a few thin annual rings in the outermost region. Then, only the kernel of size 3 was used to generate a few annual rings of the narrowest width. Next, kernels of sizes 3 and 5 were alternated to generate a few wider rings, followed by kernels of sizes 3 and 7, producing a gradual change in ring width. Finally, the innermost and widest annual rings at the center of the branch cross-section were generated by alternating kernels of sizes 3 and 9 and then sizes 3 and 11. A total of 24 annual rings were generated.
In the internal reconstruction of some particularly irregular outer contours, in order to conform to the real structure of wood, it should be ensured that the innermost annual rings are very close to a perfect circle. Therefore, the outer contour and the innermost annual ring contour are extracted at the same time, and the annual rings are generated from the outside to the inside and from the inside to the outside, respectively. As shown in Figure 7, since tree growth is initially less affected by external factors, early growth rings that are approximately perfect circles can be generated from the contour of the innermost ring. When the tree grows to a certain age, the shape of the annual rings changes under the influence of specific external factors, and the outer contour can be used to generate the later growth rings that match this later growth.
During tree growth, the formation of annual rings within the same circle is easily affected by sunlight intensity and temperature, as well as by the local environment and climate. When a tree grows differently in different directions, eccentric growth rings readily form, as shown in Figure 8. During the simulation, the range covered by each erosion pass is varied in the x- and y-directions in order to obtain different widths of the same annual ring in different directions and thus generate eccentric annual ring images.

The tree cross-section annual ring simulation was carried out on the 512 sections of the volume data of a branch model, and the results were stored in a colored volume data structure. The visualization is shown in Figure 9; the longitudinal sections of the wood interior are visually very similar to real wood.
Discrete Representation of Branch Model

The tree branch model is discretized into particles carrying information about the index of the annual ring in which they are located, the sector location, and the layer number in the axial direction. In this way, information about the internal structure of the wood can be stored with a small amount of data. Therefore, a method is proposed to discretize the branch model into particles on the basis of the reconstructed internal structure of the wood.
The internal structure of the reconstructed branch is stored in colored volume data sections. Each section i has N_ring(i) annual rings, and each annual ring can be discretized into N_sec particles. Thus, the number of particles obtained by discretizing the internal wood reconstruction with N_model sections can be expressed as

N_particle = Σ_{i=1}^{N_model} N_sec · N_ring(i).

The method of discretizing an annual ring into N_sec particles is as follows: before each contour erosion operation, the center position of the current branch cross-section is calculated as the average position of all white pixels in the current section. Then, every (360/N_sec) degrees, a ray is emitted outward from the center, and the position of the first black pixel along each ray is recorded as the position of a discrete particle. The result of discretizing a cross-section into particles on the basis of the annual ring structure of the wood cross-section is shown in Figure 10, with the discrete particles in red.

Figure 11 shows a section of a branch represented by discrete particles, with different particle colors at different annual rings. Figure 12 shows the surface mesh of the branch reconstructed from the discrete particles. The surface mesh of the "bark" was reconstructed using the position information of the particles located in the outermost annual rings of all sections. The surface meshes of the two ends of the branch were reconstructed using the position information of all particles in the first and last sections.
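The ray-casting discretization described above can be sketched as follows; this is an illustrative reimplementation, not the authors' code, and the step size and n_sec value are assumptions.

```python
import numpy as np

def discretize_ring(img, n_sec=36):
    """Cast n_sec rays from the cross-section centroid and record the
    first exterior (0) pixel along each ray, as the text describes.
    `img` is the current binary cross-section (1 = interior)."""
    ys, xs = np.nonzero(img)
    cy, cx = ys.mean(), xs.mean()          # centroid of the white region
    H, W = img.shape
    particles = []
    for j in range(n_sec):
        ang = 2.0 * np.pi * j / n_sec      # one ray every 360/n_sec degrees
        dy, dx = np.sin(ang), np.cos(ang)
        r = 0.0
        while True:
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < H and 0 <= x < W) or img[y, x] == 0:
                particles.append((y, x))   # first black point on the ray
                break
            r += 0.5                        # sub-pixel stepping
    return (cy, cx), particles

# Toy example: a filled disk standing in for one branch cross-section.
N = 96
yy, xx = np.mgrid[:N, :N]
disk = ((yy - N / 2) ** 2 + (xx - N / 2) ** 2 <= (N / 2 - 2) ** 2).astype(np.uint8)
center, pts = discretize_ring(disk, n_sec=36)
print("center:", center, "first particle:", pts[0], "count:", len(pts))
```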
Mapping Method

By mapping the vertex information of the fracture surface generated on the regular cylinder to the internal wood particles reconstructed according to the external contour of the branch, a fracture surface that fits the contour of the branch and conforms to the internal structural rules of the wood can be obtained. The method in [25] was used to generate a vertex V_ij at the i-th annual ring and the j-th sector on the regular cylinder, with height H_{V_ij}, where i and j are greater than 0 but less than 40, and H_{V_ij} = 16.8 cm. α_map denotes the ratio of the axial length of the branch mesh model to the total number of sections N_model occupied by the branch voxel data in the axial direction, where N_model = 512. The algorithm in this paper yields a discrete particle P_ijk located on the k-th section, where k is greater than 0 but less than N_model. V′_ij denotes the vertex at the i-th annual ring and the j-th sector of the new fracture surface obtained through the mapping, and is calculated using Equations (2) and (3). The mesh model of the new fracture surface was constructed from all the computed vertices V′_ij. The bark point-cloud data were divided into sections according to the total number of sections, and the vertices V′_ij on the outermost ring of the fracture surface were connected with the bark point-cloud points p on the same section layer to form a complete branch fracture model.

Experimental Environment

The algorithm in this paper was run in the following hardware and software environment: 16 GB memory, Windows 10 64-bit operating system, AMD Ryzen 7 5800H CPU, NVIDIA GeForce RTX 3060 GPU, Visual Studio 2017, Python 3.6, and OpenGL 3.3.
Experimental Results

Cross-sectional tree ring images of branches were generated using a method similar to an erosion algorithm. Different filter kernels can be used to generate rings with different widths, and adding colors with the same RGB values as the real image to each ring yields a more realistic cross-sectional ring image of fractured branches. Figure 13 compares the tree ring contour extraction method based on image processing [25] with the tree ring reconstruction results of this paper. The first row shows the ring reconstruction of a China fir with a diameter at breast height of 36 cm, and the second row that of a Cryptomeria fortunei with a diameter at breast height of 24 cm.

The internal contours were constructed using 3 × 3 filter kernels for cross-sectional maps of different sizes, as shown in Table 1. A 128 × 128 cross-sectional map generates 41 contour circles, a 256 × 256 map generates 80, and a 512 × 512 map generates 156. A larger resolution takes longer to build the internal contours.

The outer contour was used to simulate the annual rings. After each erosion pass, the pixel values of the generated contour and the remaining white pixels were computed before the next erosion, and a specific color was assigned to each contour according to the erosion count to form the final annual ring cross-section. Contour images of different sizes require different numbers of pixel evaluations; a larger image resolution requires more pixel computations and therefore takes longer.

The length and width of the collected tree branch were 16.8 cm and 4.5 cm, respectively. The scanned point cloud contained 101,422 points with a point density of 0.1367 points per cubic meter. Scanning the bark structure of broken branches preserves complete detail information and generates a realistic bark surface structure; voxelizing the fractured branch and adding ring information to the fracture surface better simulates the fractured branch, although details on the fracture surface itself are missing. Figure 14b shows the rendered branch fracture model, and Figure 14a shows a photograph of a real branch fracture. The overall simulation effect is good, but there is still a certain gap between the simulation and the real picture in the details.
Figure 14c shows the fractured branch model generated by parametric modeling [28], depicting a branch fracture surface with uneven height on a cylinder whose cross-section is still a circle. Compared with that model, the fractured branch model generated in this paper has an uneven outer surface; after adding texture, the bark fracture is similar to a real fractured branch and appears realistic, and realistic annual rings can be seen at the fracture surface, conforming to the real branch shape. The model generated by parametric modeling remains a cylinder. However, the protrusions simulated at the fracture surface in this paper are not obvious enough; with parametric modeling and the filter-generated fracture surface, the protrusions are more obvious, but their positions remain relatively random.

Discussion

In this paper, we proposed a technique to reconstruct the internal annual rings and wood fiber bundle structure of a 3D plant on the basis of the external profile of a tree trunk or branch. The reconstructed internal structure can be used to map the fracture surface and to represent the inhomogeneous material strength inside the branch. This method is an early computer graphics method for simulating wood fracture according to wood structure and wood science theory. It incorporates basic features of wood fracture, such as the internal hierarchical structure of wood, including annual rings and wood fiber bundles, into the parametric modeling approach. The reconstructed model plays an important role in the study and analysis of the internal morphological structure of woody plants. At the same time, the simulation of branch fracture also contributes to related research in forestry: the branch fracture model embodies the details of the tree model, and the functional properties of woody plants can be calculated and analyzed using the branch fracture structure model. Therefore, the simulation algorithm can be used not only in entertainment and animation games but also to promote the research and application of plant morphology and structure in smart agriculture.
Conclusions

Although the algorithm in this paper achieved the expected results for wood fracture simulation, there were some limitations in its design and experiments. The method for extracting internal points during discretization is still inadequate. The circumference of the annual rings increases from the inside to the outside according to their position in the cross-section: the outermost annual ring has the longest circumference and the innermost the shortest, and the circumference of the outer rings is often tens of times that of the inner rings. In the discretization result, the points of the outer annual rings are therefore sparsely distributed, while those of the inner rings are densely distributed. In the view proposed in this paper, each annual ring is regarded as composed of N_sec wood fiber bundles, so the cross-sectional area of fiber bundles in the outer rings is much larger than in the inner rings. However, according to actual observations of wood and the theories of wood science and botany, the cross-sectional area of wood fiber bundles within the same branch cross-section should be essentially the same. For example, the number of segments of each annual ring could be set according to the circumference of the ring, with a longer circumference necessitating division into more segments.

Future work can build on the algorithm in this paper to obtain more realistic and accurate wood fracture simulation results, and can be conducted in the following directions:

1. Bark fracture can be simulated. The fracture behavior of the bark differs greatly from the wood fracture of the xylem. Using the bark gullies and grain, the bark fracture effect can be further perfected; when bark breaks, cracks tend to spread along these gullies and form additional fracture structures.

2. The effect of structures such as knots in the wood and forks on branches can be considered. For example, knots in branches and trunks add significant complexity to reconstructing the internal structure of wood; an algorithm is needed that can detect the position of knots or forks and reprocess the internal wood structure there.

3. The real internal structure of wood obtained from CT scans can be compared with the results of this paper to evaluate and improve the algorithm.

4. The obtained internal structure can be used to set the strength of the internal wood material, and wood fracture animation can be generated in combination with the material point method to handle anisotropic materials.
Figure 1. Schematic diagram of the method of mapping the fracture surface generated on a regular cylinder to a 3D branch model.

Figure 3. Voxelization result of a tree model.

Figure 4. Bark structure point-cloud data: (a) pretreatment result (the different colors of the point-cloud data are used to facilitate the display effect); (b) reconstruction of the generated mesh structure.

Figure 5. Broken branch annual ring simulation results: (a) outer contour drawing; (b) simulation-generated annual ring cross-section.

Figure 6. Simulation results of annual rings in cross-sections of branches with different annual ring widths.

Figure 7. Simulation results of annual rings in branch cross-sections generated with different contours: (a) outer contour drawing; (b) simulation-generated annual ring cross-section.

Figure 8. Simulation results of tree rings in branch cross-sections with different widths of the same annual ring.

Figure 9. Longitudinal section of the reconstructed timber interior structure.

Figure 10. Discrete particles representing a cross-section of the wood.

Figure 11. Discrete particles representing the wood; the color of a particle indicates the annual ring it is in.

Figure 12. Branch model from discrete particle reconstruction, where the color of a particle indicates the annual ring it is in: (a) different annual rings; (b) different sectors.

Figure 13. Comparison of the image-processing-based annual ring extraction method and the reconstruction results of this paper: (a) real tree rings; (b) image processing method [25]; (c) the method of this paper.

Figure 14. Rendered broken tree branch model compared with a real photograph: (a) real branch fracture picture; (b) rendered branch fracture model; (c) other work [28].

Table 1. Comparison of annual rings generated from cross-sectional maps of different sizes.
2023-05-18T15:15:14.978Z
2023-05-16T00:00:00.000
{ "year": 2023, "sha1": "2f54b9a97cf691711ff150ddd51056c9a9b71454", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4907/14/5/1020/pdf?version=1684210400", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "2130b5cdb9d5d192113f08be0e7134482b8c3257", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
246904747
pes2o/s2orc
v3-fos-license
Hyperspherical approach to atom-dimer collision with the Jacobi boundary condition In this study, we investigate atom-dimer scattering within the framework of the hyperspherical method. The coupled-channel Schrödinger equation is solved using the R-matrix propagation technique combined with the smooth variable discretization method. In the matching procedure, the asymptotic wave functions are expressed in the rotated Jacobi coordinates. We apply this approach to the elastic 3He(T↑) + 4He2 and H↑ + H↑Li scattering processes. The convergence of the scattering length as a function of the propagation distance is studied. We find that the method is reliable and can provide considerable savings over previous propagators. Introduction Studies of three-body collision processes have attracted tremendous attention due to their substantial relevance to the rapidly growing field of cold and ultracold atomic gases [1,2,3,4]. In such systems, elastic atom-molecule collisions are crucial for determining the dynamics of ultracold atom-molecule mixtures at the mean-field level, and inelastic atom-molecule collisions have a large impact on the lifetime of Feshbach molecules. Weakly attracted three-body systems such as the helium trimer and mixed 4He-4He-A systems (A being another atom) are very interesting and important, as they offer an opportunity to study Efimov states in realistic systems [5,6,7]. The Efimov states and other universal binding properties of such systems have been investigated extensively, and a giant Efimov trimer has been detected in helium gas [8]. Scattering processes at ultralow energies are even more interesting due to their relevance for the lifetime and stability of gas samples. A few works have addressed the ultracold atom-molecule problem in these realistic systems. For instance, ultracold collisions of 3,4He atoms with 4He2 have been studied within the adiabatic hyperspherical representation in Refs. [9,10]. The Faddeev differential equations have also been used extensively for these systems [11,12,13]. In addition to the well-studied elastic 4He(3He) + 4He2 scattering, the spin-stretched case of H atom scattering from XH (X an alkali atom) has been investigated using the method of hyperspherical coordinates [10]. Recently, atom-dimer exchange and dissociation reaction rates have been predicted for different combinations of two 4He atoms and one of the alkali species 6Li, 7Li, and 23Na using the Faddeev formalism [14]. On the other hand, it is known that there are many similarities between spin-polarized tritium (T↑) and 4He atoms. Bulk T↑ remains liquid in the limit of zero temperature and behaves much like liquid 4He, and therefore constitutes a second example of a bosonic superfluid. The bound states of mixed T↑4He2 clusters were studied in Refs. [15,16,17] and were found to possess one weakly bound state, making this by far the most weakly bound such system. For this system, no scattering observables are available in the literature, although they are of fundamental importance for current experiments. The hyperspherical adiabatic (HA) expansion method has proven to be an efficient tool for studying few-atom systems [18]. For bound states, the HA expansion shows particularly fast convergence for atom-atom interactions [7,19,6,20]. The method has also been used extensively to describe few-atom systems in the ultracold collision regime [1,9,10].
The convergence problem of the HA method appears for scattering states, particularly in the description of ultracold atom-dimer collisions. Since the asymptotic structure for atom-dimer scattering is one particle moving relative to the center of mass of the two-body bound system, the correct boundary condition in the HA basis is reached only as ρ → ∞, which requires a very large number of hyperradial functions in the solutions and long-range propagation [21]. To overcome this convergence problem, Refs. [21,22,23] introduced a method to compute the phase shift from two integral relations that involve only the internal part of the wave function; the convergence of this procedure has been demonstrated to be as fast as for bound states. An alternative approach is to use asymptotic solutions expressed in Jacobi coordinates. This idea has been applied to rearrangement collisions by several quantum chemistry groups since 1980 [24]. In the calculations of Refs. [25,26,27], the probabilities were found to exhibit no oscillations as a function of the matching distance. Later, in treating the collision-induced dissociation problem, Refs. [28,29] used a mixed boundary condition scheme, in which the asymptotic bound solutions were expressed in Jacobi coordinates and the continuum solutions in hyperspherical coordinates; this significantly decreases the amplitude of the oscillations and improves the convergence as a function of distance. In atomic physics, the hyperspherical close-coupling method with Jacobi asymptotic solutions has been used to calculate the elastic and positronium-formation cross sections for electron and positron collisions with atomic hydrogen [30,31,32] and to study the photoionization cross-section spectra of two-electron systems [33,34]. Zhao et al. [35] also used the hyperspherical close-coupling method to investigate the charge transfer process A+ + B → A + B+. In calculations of low-energy collisions of Coulomb three-body systems, Refs. [36,24] used the hyperspherical elliptic coordinates method; their two-dimensional matching procedure likewise used asymptotic wave functions expressed in mass-scaled Jacobi coordinates. For ultracold atom-dimer elastic scattering, the collision quantities reach the threshold regime only at collision energies at the nK level, requiring longer propagation than in the systems described above. There is thus a clear need to project numerical wave functions onto asymptotic solutions expressed in Jacobi coordinates when studying the ultracold atom-dimer scattering process. In this work, we present an efficient method for investigating atom-dimer scattering within the framework of hyperspherical coordinates. The nonadiabatic coupling between the hyperradius and the hyperangular variables is treated with the slow-variable discretization (SVD) method [37] in combination with the R-matrix propagation technique [38,39]. In the matching procedure, the asymptotic wave functions are expressed in the rotated Jacobi coordinates. We perform test calculations on the 3He + 4He2 and H↑ + H↑Li systems, which represent two different kinds of asymptotic structure. Both systems have been studied previously with the asymptotic wave function expressed in hyperspherical coordinates, where the propagation distance is at least 5000 a.u. [10]. These systems are therefore good examples with which to illustrate our new approach.
We also investigate T↑ + 4He2 elastic scattering in the J^Π = 0^+ symmetry and provide the scattering length for T↑ atoms scattering from the 4He2 dimer. The paper is organized as follows: Sec. II describes the theoretical approach; Sec. III discusses the results and analyses of the systems under study; Sec. IV concludes and summarizes our work.

Theoretical formalism

In this work, we consider a process in which a particle hits a bound two-body system. We assume the incident energy to be below the three-particle breakup threshold, so only the channels approaching the two-body bound state need to be considered. We use m_τ (τ = A, B, C) to denote the masses of the three atoms and x_τ to denote their position vectors relative to the origin. In the center-of-mass frame, six coordinates are needed to describe the three-particle system. The relative motion is described by the Jacobi coordinates ρ_{1τ} and ρ_{2τ}, where τ, τ+1, τ+2 are any cyclic permutation of A, B, and C; this is illustrated in Fig. 1. In addition, ξ_{1τ} and ξ_{2τ} are the corresponding mass-scaled Jacobi coordinates, obtained by scaling ρ_{1τ} and ρ_{2τ} with a mass-dependent factor d_τ. Different sets of mass-scaled Jacobi coordinates are related by kinematic rotations,

(ξ_{1,τ+1}, ξ_{2,τ+1})^T = [[cos χ_{τ+1,τ} 1, sin χ_{τ+1,τ} 1], [−sin χ_{τ+1,τ} 1, cos χ_{τ+1,τ} 1]] (ξ_{1τ}, ξ_{2τ})^T,  (4)

which is a 6 × 6 matrix built from the 3 × 3 unit matrix 1. The kinematic angles χ_{τ+1,τ} are negative and obtuse and are fixed by the masses of the three particles.

Delves hyperspherical coordinates can be defined in any set of mass-scaled Jacobi coordinates. In this work, hyperspherical coordinates are defined in the A-set (τ = A) mass-scaled Jacobi coordinates, in which the two identical atoms are connected by the Jacobi vector ρ_1. We denote the angle between ρ_1 and ρ_2 by θ; with this definition the channel functions are symmetric with respect to the θ direction. After separation of the center-of-mass motion, three of the six coordinates are taken to be the Euler angles α, β, and γ, which specify the orientation of the body-fixed frame relative to the space-fixed frame. The remaining degrees of freedom are represented by the hyperradius R and the two hyperangles θ and φ, defined as [18]

R = (ρ_1^2 + ρ_2^2)^{1/2}  (6)

and

φ = arctan(ρ_1/ρ_2),  (7)

respectively. R is the only coordinate with the dimension of length and represents the size of the three-body system. Here θ, φ, and the three Euler angles are collectively denoted by Ω ≡ (θ, φ, α, β, γ). In our method, wave functions are expanded in the body frame xyz, where ρ_2 lies along the z-axis and the three particles lie in the xz plane. We introduce the reduced wave function ψ_υ(R; Ω) = Ψ_υ(R; Ω) R^{5/2} sin φ cos φ, which satisfies the Schrödinger equation

[−(1/2μ) ∂²/∂R² + (Λ² + 15/4)/(2μR²) + V(R; θ, φ) − E] ψ_υ(R; Ω) = 0,  (8)

where Λ² is the squared "grand angular momentum operator", whose expression is given in Ref. [18]. The three-body interaction V(R; θ, φ) in Eq. (8) is taken to be a sum of the three pairwise two-body interactions. Equation (8) is solved in the hyperspherical adiabatic representation. As in the usual adiabatic approximation, the hyperspherical adiabatic potentials U_ν(R) and channel functions Φ_ν(R; Ω) are defined as solutions of the adiabatic eigenvalue problem

[(Λ² + 15/4)/(2μR²) + V(R; θ, φ)] Φ_ν(R; Ω) = U_ν(R) Φ_ν(R; Ω).  (9)

We define normalized and symmetrized combinations of Wigner D functions, D̃^{JΠ}_{IM}(α, β, γ), associated with our choice of the body frame, where J is the total nuclear orbital angular momentum, M is its projection onto the laboratory-fixed axis, and Π is the parity with respect to inversion of the nuclear coordinates. The quantum number I denotes the projection of J onto the body-frame z axis.
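For concreteness, the coordinate definitions of Eqs. (6) and (7) and the kinematic rotation of Eq. (4) can be sketched in a few lines of Python; this is an illustration rather than the paper's code, and the sign convention of the rotation is one common choice that may differ from the authors'.

```python
import numpy as np

def delves_coordinates(rho1, rho2):
    """Hyperradius R and hyperangles (theta, phi) from two mass-scaled
    Jacobi vectors, following Eqs. (6)-(7): rho1 = R sin(phi),
    rho2 = R cos(phi), with theta the angle between the vectors."""
    r1, r2 = np.linalg.norm(rho1), np.linalg.norm(rho2)
    R = np.sqrt(r1**2 + r2**2)
    phi = np.arctan2(r1, r2)
    cos_theta = np.dot(rho1, rho2) / (r1 * r2)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return R, theta, phi

def kinematic_rotate(xi1, xi2, chi):
    """Kinematic rotation of Eq. (4): the 6x6 rotation built from the
    3x3 unit matrix, mixing the two mass-scaled Jacobi vectors."""
    c, s = np.cos(chi), np.sin(chi)
    return c * xi1 + s * xi2, -s * xi1 + c * xi2

# Example with arbitrary vectors (atomic units assumed).
xi1 = np.array([1.0, 0.0, 0.0])
xi2 = np.array([0.0, 2.0, 0.0])
print(delves_coordinates(xi1, xi2))
print(kinematic_rotate(xi1, xi2, chi=np.deg2rad(-120.0)))
```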
The channel functions are expanded in terms of the D functions as

Φ_ν(R; Ω) = Σ_I u_{νI}(R; θ, φ) D̃^{JΠ}_{IM}(α, β, γ),  (11)

and u_{νI}(R; θ, φ) is expanded in B-spline functions, where N_θ and N_φ are the sizes of the basis sets in the θ and φ directions, respectively. The symmetric B-spline basis sets constructed in the θ direction reduce the number of basis functions to N_θ/2.

Following the method of Ref. [39], the R-matrix propagation method combined with the SVD approach is used. We divide the hyperradius into N − 1 intervals with the set of grid points R_1 < R_2 < ··· < R_N. In the interval [R_i, R_{i+1}], the SVD method is used to solve Eq. (8). With this solution, we can determine the R matrix, defined as

R = F (F′)^{−1},  (13)

where the matrices F and F′ are calculated from the solutions of Eqs. (8) and (9) and their derivatives with respect to the hyperradius [Eqs. (14) and (15)]. Over the interval [R_1, R_2], when the R matrix at R_1 is known, the R matrix at the point R = R_2 follows from the sector solutions through the recurrence relation (16). Using this recurrence in the R-matrix propagation method, we obtain the R matrix at the matching point R_m, where the wave function is matched to the wave function in the asymptotic region, in which the three-body system is one dissociated atom plus a bound two-body system.

The asymptotic wave function ψ of the atom + dimer scattering process in τ-set Jacobi coordinates [Eq. (17)] is built from the dimer wave functions ϕ_i(ρ_{τ1}) and the energy-normalized regular and irregular spherical Bessel functions f and g, where ω_{τ1} and ω_{τ2} are the orientation angles of the vectors ρ_{τ1} and ρ_{τ2}, respectively, for the τ arrangement. The angular part is written in the body frame, the superscript (body) indicating that the angles are measured in the body-fixed frame, and for simplicity it is collected into the function S^{I l_{τ1} l_{τ2}}_{JM}(θ_τ, φ_τ).

After transforming the asymptotic wave function into the body frame, the inner-region wave function Ψ = Σ_{μI} F_{μI}(R_m) u_{μI}(R_m; θ, φ) D̃^J_{IM}(α, β, γ), calculated in Delves coordinates, is matched to the asymptotic function ψ_A in Jacobi coordinates [Eq. (22)], with expansion coefficients H^λ_σ [Eq. (23)]. Using the orthogonality and normalization of u_{μI}(R_m; θ, φ) D̃^{JΠ}_{IM}(α, β, γ), the relation of Eq. (24) is obtained. At the matching point R_m, the logarithmic derivatives of the inner- and outer-region wave functions must be equal, so the derivative of the asymptotic wave function is also needed [Eqs. (25) and (26)]. Writing Eq. (24) in matrix form [Eq. (27)] and taking its derivative with respect to R at the matching point R_m [Eq. (28)], and using the definition of the R matrix, R = F(F′)^{−1}, together with Eqs. (29) and (30), we obtain the reaction matrix K [Eq. (31)] and the scattering matrix

S = (1 + iK)(1 − iK)^{−1},  (32)

where 1 is the unit matrix. The atom + dimer scattering phase shift δ_0 is related to the diagonal element of the scattering matrix by

S = e^{2iδ_0},  (33)

from which the atom + dimer scattering length a_ad is obtained through

a_ad = −lim_{k→0} tan δ_0 / k,  (34)

and the total cross section is

σ = (4π/k²) sin² δ_0.  (35)

3 Results and discussion

Pair potentials

For the helium dimer potential v_HeHe(r), we use the CCSAPT potential of Jeziorska et al. [40]. The interaction between He and spin-polarized tritium (T↑) is identical to that between H and He; we choose the H-He potential developed by Cvetko et al. [41]. The H and Li atoms are assumed to be spin-stretched. Their short-range potentials are determined from ab initio calculations [42,43], and their long-range behavior is described by the usual dispersion potentials [44,42]. All pairwise interaction potentials used in this work are shown in Fig. 2.
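Before turning to the specific systems, the final extraction chain of Eqs. (32)-(34), from reaction matrix to scattering matrix to phase shift to scattering length, can be illustrated with a minimal sketch; the K-matrix value and wave number below are placeholders, not results of this work.

```python
import numpy as np

def s_matrix_from_k(K):
    """Cayley transform of Eq. (32): S = (1 + iK)(1 - iK)^(-1)."""
    I = np.eye(K.shape[0])
    return (I + 1j * K) @ np.linalg.inv(I - 1j * K)

def phase_shift(S, channel=0):
    """Elastic phase shift from the diagonal S-matrix element,
    S_aa = exp(2 i delta), Eq. (33)."""
    return 0.5 * np.angle(S[channel, channel])

def scattering_length(delta0, k):
    """Eq. (34): at small but finite k, -tan(delta0)/k approximates
    the zero-energy atom-dimer scattering length."""
    return -np.tan(delta0) / k

# Toy single-channel example with an assumed K-matrix element.
K = np.array([[-0.35]])
S = s_matrix_from_k(K)
d0 = phase_shift(S)
print("delta0 =", d0, " a_ad ~", scattering_length(d0, k=1.0e-2), "a.u.")
```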
Their bound-state energies E_{vl} = E_{00} and scattering lengths a calculated with these potentials are summarized in Table 1. In our definition of the hyperspherical coordinates, the two identical atoms are connected by the Jacobi vector ρ₁. Thus, for the ³He⁴He₂ and T↑⁴He₂ systems, the inner-region wave functions are matched to the asymptotic solutions in A-set Jacobi coordinates. Several papers have reported on ³He atom scattering from the ⁴He₂ dimer, which is therefore a good test case for our procedure. Kolganova, Sandhas, and co-workers [11-13] calculated the scattering phase shifts and scattering length for ³He + ⁴He₂ using two-dimensional partial-wave integro-differential Faddeev equations based on the SAPT2 and LM2M2 potentials; they estimated the scattering length to lie between 35.9 a.u. and 37 a.u. for these two potentials. Soon thereafter, Suno [9] calculated the scattering length by solving the coupled-channel hyperradial equations using a combination of the finite element method [45] and the R-matrix method [46], used the improved He–He potential [40], and predicted a ³He + ⁴He₂ scattering length of 40 a.u. Due to the similarities between ⁴He and T↑ atoms, similar behavior of the mixed T↑⁴He₂ and ³He⁴He₂ clusters is expected. For example, both systems have been found to possess one weakly bound state and to exhibit a large spatial extension with universal halo properties [15-17]. However, compared with the well-studied ³He + ⁴He₂ elastic scattering process, no scattering observables are available for a T↑ atom scattering from the ⁴He₂ dimer.

The potential curves of T↑⁴He₂ and ³He⁴He₂ are presented in Fig. 3. The lowest potential curves correspond asymptotically to the atom–dimer channel for T↑–⁴He₂ and ³He–⁴He₂, and the other potential curves represent three-body continuum states. From Fig. 3, the potential well of T↑⁴He₂ is shallower than that of the ³He⁴He₂ system. For ultracold atom–dimer collisions, the convergence of scattering observables depends critically on the accuracy of the adiabatic potentials, so accurate potential curves and channel functions are highly desirable. According to the behavior of the channel functions of these weakly bound systems, different B-spline knot distributions are used at short- and long-range hyperradii: for small hyperradii R, the knots are distributed uniformly; for large hyperradii R, the knot distribution is made dense around the two-body coalescence points where the channel functions are localized. Table 2 shows the convergence of the lowest hyperspherical potential curves as functions of the basis sets for the T↑⁴He₂ and ³He⁴He₂ systems. The basis sets N_θ = 168 and N_φ = 504 are chosen for the final calculation, giving potential curves with at least six significant digits. The convergence of the scattering observables with respect to the number of adiabatic channels and sectors is also tested. We typically use 13 channels and 230 sectors distributed as R_i ∝ i³ from R = 2 a.u. to R = 500 a.u. Figure 4 shows the J = 0 cross sections for elastic ³He + ⁴He₂ and T↑ + ⁴He₂ scattering as functions of the collision energy (E − E₀₀). In the ultracold limit, σ₀₊ obeys the threshold behavior σ₀₊ ∝ (E − E₀₀)⁰. Table 3 shows the convergence of the scattering lengths a_{³He+⁴He₂} and a_{T↑+⁴He₂} as functions of the matching distance. The scattering length converges at R_m = 500 a.u. for both systems.
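The role of the matching distance can be made concrete in a stripped-down, single-channel analogue of the procedure above. The sketch below (an illustrative model potential in model units, not the actual three-body channel problem) integrates the radial equation outward at a small wave number, extracts the phase shift from the logarithmic derivative at a sequence of matching points R_m, and prints the corresponding effective scattering length; the resulting plateau mirrors the R_m-convergence test of Table 3.

```python
import numpy as np

def numerov(k2, h, u0=0.0, u1=1e-6):
    """Outward Numerov integration of u''(r) = -k2(r) u(r) on a uniform grid."""
    u = np.empty_like(k2)
    u[0], u[1] = u0, u1
    f = 1.0 + (h**2 / 12.0) * k2
    for i in range(1, len(k2) - 1):
        u[i + 1] = ((12.0 - 10.0 * f[i]) * u[i] - f[i - 1] * u[i - 1]) / f[i + 1]
    return u

mu, k = 10.0, 1e-3                            # model reduced mass and wave number
E = k**2 / (2.0 * mu)
r = np.linspace(0.5, 600.0, 300000)
h = r[1] - r[0]
V = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)  # model potential, not CCSAPT
u = numerov(2.0 * mu * (E - V), h)

for Rm in (25.0, 50.0, 100.0, 200.0, 400.0):
    j = np.searchsorted(r, Rm)
    L = (u[j + 1] - u[j - 1]) / (2.0 * h * u[j])       # logarithmic derivative u'/u
    # Asymptotically u ~ sin(k r + delta), so u'/u = k cot(k r + delta).
    delta = np.arctan2(k, L) - k * r[j]
    delta = (delta + 0.5 * np.pi) % np.pi - 0.5 * np.pi  # fold to principal branch
    print(f"Rm = {Rm:6.1f}   a_eff = {-np.tan(delta) / k:10.3f}")
```

For the long-range van der Waals tails of the present systems, correspondingly larger matching distances (hundreds of atomic units) are required before such a plateau is reached.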
As shown in Refs. [9,10], the numerical solutions for such systems are usually matched to the asymptotic analytical solutions at R_m = 5000–10000 a.u. when hyperspherical boundary conditions are used. A comparison of our calculations with the results available in the literature is given in Table 4. For ³He + ⁴He₂ elastic scattering, Suno et al. [9] obtained a_{³He+⁴He₂} = 40 a.u. using the potential from Ref. [40]. With the same potential, the scattering length we calculate is a_{³He+⁴He₂} = 34.6 a.u. Sandhas et al. [13] obtained a_{³He+⁴He₂} = 37 a.u. and 35.9 a.u. using the LM2M2 and SAPT2 potentials, respectively. For T↑ + ⁴He₂ scattering, we obtain a scattering length of a_{T↑+⁴He₂} = 166 a.u., which is larger than that of ³He + ⁴He₂ scattering. This result supports Suno's finding that the T↑⁴He₂ bound state extends to larger distances than the ³He⁴He₂ bound state.

Matching in the B-set: the LiHH system

In the Delves hyperspherical coordinates defined in the A-set Jacobi coordinates, where the two identical atoms are connected by ρ₁, the asymptotic wave functions of the scattering process A + AC → A + AC (the A and C atoms being bound) involve the transformation between the A-set and B-set. The spin-stretched case of an H atom scattering from LiH is such an example, where the lowest adiabatic potentials correspond asymptotically to the binding energies of the H–Li two-body bound states. Thus, the asymptotic wave function of the dissociated system is better represented in the B-set as

$$\psi^{\sigma'}_{B}=\sum_{\sigma}\frac{\phi_{i}(\rho_{B1})}{\rho_{B1}\rho_{B2}}\left[f_{l_{B2}}(k_{i}\rho_{B2})\,\delta_{\sigma\sigma'}-g_{l_{B2}}(k_{i}\rho_{B2})\,K_{\sigma\sigma'}\right]\mathcal{Y}^{JM}_{l_{B1}l_{B2}}(\hat{\boldsymbol{\rho}}_{B1},\hat{\boldsymbol{\rho}}_{B2}).\tag{36}$$

The transformations between the A-set and B-set Jacobi coordinates can be implemented by the kinematic rotations given in Eq. (4). The components of ξ₂ and ξ₁ in the A-set body frame can be written as

$$\boldsymbol{\xi}_{2A}=\xi_{2}\,(0,\,0,\,1),\qquad \boldsymbol{\xi}_{1A}=\xi_{1}\,(\sin\theta,\,0,\,\cos\theta),$$

since ρ₂ lies along the body-frame z axis and the three particles lie in the xz plane. With Eqs. (3) and (4), we can obtain the components of ξ₂ and ξ₁ in the B-set body frame as

$$\boldsymbol{\xi}_{1B}=\cos\chi_{BA}\,\boldsymbol{\xi}_{1A}+\sin\chi_{BA}\,\boldsymbol{\xi}_{2A}\qquad\text{and}\qquad \boldsymbol{\xi}_{2B}=-\sin\chi_{BA}\,\boldsymbol{\xi}_{1A}+\cos\chi_{BA}\,\boldsymbol{\xi}_{2A}.$$

With these equations, the expressions for ρ_{1B} and ρ_{2B}, and for the matching quantities J^λ_{μI} and N^i_{μI} together with their R derivatives, in B-set Jacobi coordinates can be obtained.

The adiabatic hyperspherical potentials U_ν(R) of the H–H–Li system are presented in Fig. 5. The basis sets N_θ = 168 and N_φ = 504 are used, giving the potential curves at least six significant digits. We plot the J^Π = 0⁺ partial-wave cross sections for elastic collisions between H and LiH in Fig. 6. For this system, 18 channels and 230 sectors are used to ensure that the scattering length is accurate to at least two significant digits. Table 5 shows the convergence test of the atom–molecule scattering length a_{H+LiH} as a function of the matching distance R_m. The scattering length converges at the matching distance R_m = 500 a.u. Note that Yujun et al. [10] calculated the elastic cross sections for H + LiH collisions using hyperspherical coordinate boundary conditions; they matched the numerical solutions to the asymptotic analytical solutions at R_m = 5 × 10³ a.u. and obtained an H + LiH scattering length of 80 a.u. With the same v_HH(r) and v_LiH(r) potentials, the present result shows good agreement with theirs.

Conclusions

In this work, we present an efficient method for solving the coupled-channel Schrödinger equation for atom-molecule elastic collisions. We use Delves hyperspherical coordinates, expand the wave function in a coupled-channel basis, and propagate the coupled-channel equations with the R-matrix propagation technique. To avoid derivative coupling terms, we adopt the smooth variable discretization method, which discretizes the propagation variable before expanding in the basis.
In the matching procedure, the asymptotic wave functions are expressed in rotated Jacobi coordinates. Test calculations of elastic atom-molecule collisions are performed. For ³He (or T↑) atom scattering from ⁴He₂, the asymptotic wave functions are expressed in the same A-set Jacobi coordinates in which the hyperspherical coordinates are defined, so no coordinate rotation is needed in this case. For the spin-stretched case of an H atom scattering from LiH, the asymptotic wave functions must be expressed in the B-set Jacobi coordinates to describe the final scattering channel; coordinate rotation between the A-set and B-set is needed for this type of scattering process. The convergence of the scattering length as a function of the propagation distance is studied. We find that the method is reliable and improves the convergence with respect to the matching distance. We compare our results with those of other calculations. The H + LiH scattering length shows good agreement with that obtained using hyperspherical coordinate boundary conditions, at a lower computational expense. Scattering observables for T↑-⁴He₂ have been scarce; its scattering length and cross-section values are given here for the first time.
Nuclear Matrix Protein SMAR1 Represses c-Fos-mediated HPV18 E6 Transcription through Alteration of Chromatin Histone Deacetylation*

Background: The HPV18 E6 oncogene represents one of the most promising therapeutic targets for the treatment of HPV-positive tumors. Results: Curcumin-induced SMAR1-HDAC1 recruitment at the LCR and E6 regions of the E6 promoter deacetylates chromatin histones to attenuate c-Fos-mediated E6 transcription and reinstall p53-mediated apoptosis in HPV18-infected cervical cancer. Conclusion: SMAR1 induces E6 repression. Significance: SMAR1 is a repressor of the E6-mediated anti-apoptotic network in HPV18-infected cervical cancers.

Matrix attachment region (MAR)-binding proteins have been implicated in the transcriptional regulation of host as well as viral genes, but their precise role in HPV-infected cervical cancer remains unclear. Here we show that the HPV18 promoter contains consensus MAR elements in the LCR and E6 sequences, where SMAR1 binds and reinforces HPV18 E6 transcriptional silencing. In fact, curcumin-induced up-regulation of SMAR1 ensures recruitment of the SMAR1-HDAC1 repressor complex at the LCR and E6 MAR sequences, thereby decreasing histone acetylation at H3K9 and H3K18 and leading to reorientation of the chromatin. As a consequence, c-Fos binding at the putative AP-1 sites on the E6 promoter is inhibited. E6 depletion interrupts E6-mediated degradation of p53 and of the lysine acetyltransferase Tip60. Tip60, in turn, acetylates p53, thereby restoring p53-mediated transactivation of proapoptotic genes to ensure apoptosis. This hitherto unexplained function of SMAR1 signifies the potential of this unique scaffold/matrix-associated region-binding protein as a critical regulator of the E6-mediated anti-apoptotic network in HPV18-infected cervical adenocarcinoma. These results also justify the candidature of curcumin for the treatment of HPV18-infected cervical carcinoma.

Cervical cancer (1) is highly associated with infection by high-risk human papillomaviruses (HPVs) (2). HPVs are nonenveloped DNA viruses that infect mucosal or cutaneous squamous epithelium and cause a global change in cellular gene expression that facilitates cellular hyperproliferation. Adenocarcinomas of the cervix account for ~10-30% of cervical carcinomas, and their incidence is increasing, especially among young women (3). As in cervical squamous cell carcinomas, the high-risk HPV types 16 and 18 are the most important types associated with cervical adenocarcinomas (4). However, in contrast to cervical squamous cell carcinomas, in which HPV16 is prevalent, a predominance of HPV18 infection has been described in cervical adenocarcinomas (5,6), particularly in invasive ones. The high-risk HPVs encode two transforming genes, E6 and E7, both of which interfere with key elements of the cell cycle control machinery. The constitutive expression of E6 and E7 is mainly dependent on the availability of the host cell transcription factor activator protein-1 (AP-1), which is formed by either homodimerization of Jun proteins (c-Jun, JunB, and JunD) or heterodimerization of Jun and Fos proteins (c-Fos, FosB, Fra-1, and Fra-2) through the "leucine zipper". It has been reported that JunB constitutes the major dimerization partner of c-Fos, whose level increases with increasing severity of cervical cancer (7), in the active AP-1 complex during HPV oncogene expression in cervical cancers (7-9). It has also been reported that CBP/p300 acts as a co-activator of c-Fos during HPV oncogene expression (9,10).
The known transforming functions of E6 include accelerated proteasomal degradation of the tumor suppressor p53 (11,12) as well as activation of telomerase (13). In fact, E6 alters the substrate specificity of a cellular ubiquitin ligase, E6AP, so that it stably associates with and polyubiquitinylates the tumor suppressor p53, thereby degrading it via the 26 S proteasome (1). The resultant effect counteracts the normal apoptotic and cell cycle arrest responses of HPV-positive cells, ultimately resulting in deregulated cell proliferation. The above discussion reveals that E6, contributing effectively to the anti-apoptotic network, represents one of the most promising therapeutic targets for the treatment of HPV-positive tumors and dysplasias, because its repression may result in reactivation of tumor suppressor pathways in cancer cells. Although prophylactic vaccines are currently available and show high efficacy against the establishment of HPV infection, low rates of initiation and lower rates of completion of the vaccination regimen, as well as the lack of an opportunity to be vaccinated prior to infection, have led to the development of a patient population for whom no therapy for infection is available.

Increasing evidence suggests that, in addition to the genetic changes associated with the transformation of a normal cell into a cancer cell, epigenetic alterations are essential in establishing the transformed phenotype. In this regard, acetylation of histones, as well as of other transcription-regulatory non-histone factors, by lysine acetyltransferases, e.g., Tip60 (14,15), commonly correlates with the open chromatin structures required for the binding of multiple transcription factors and leads to transcriptional activation associated with an increase in gene expression, whereas removal of acetyl groups by histone deacetylases (HDACs) is accompanied by transcriptional repression. Lysine acetyltransferases and HDACs have been shown to play a critical role in transcriptional regulation in eukaryotic cells. The HPV18 E6 protein has been observed to induce the degradation of the tumor suppressor lysine acetyltransferase Tip60 (Tat-interacting protein, 60 kDa), which is involved in transcriptional regulation, checkpoint activation, and p53-directed proapoptotic pathways (14,16). On the other hand, the nuclear matrix protein SMAR1 interacts with an HDAC1-associated repressor complex at the cyclin D1 promoter and allows histone deacetylation and transcriptional repression (17). SMAR1 also stabilizes p53 via post-translational modification (18) and inhibits tumor growth through cell cycle arrest (19). Further, the SMAR1-derived p44 peptide has been shown to actively inhibit tumor growth in vivo (20). SMAR1 has also been implicated in the transcriptional regulation of viral genes, regulating viral transcription by alternative compartmentalization of the LTR and resulting in decreased virion production of HIV-1 (21). All of this information leads to the possibility of reversing the key alterations in the apoptotic machinery in HPV18-infected cervical adenocarcinoma by modulating SMAR1, which may alter the status and/or function of E6, Tip60, p53, and HDACs. However, there is no report on this critical function of SMAR1, if any, in reinstalling the "missing" apoptotic program in HPV18-infected cervical cancer cells. Recently, curcumin-induced up-regulation of SMAR1 and the contribution of this MAR-binding protein to sensitizing breast cancer cells toward doxorubicin have been reported from our laboratory (22).
Recent reports have also suggested the HPV16 E6 protein as a target for curcuminoids, curcumin conjugates, and congeners for chemoprevention of cervical cancers (23). In this regard, curcumin-induced suppression of STAT3 activation has been reported to be associated with a gradual loss of HPV16 E6 and E7 expression and of cell viability (24). According to Maher et al. (25), curcumin restores the p53, Rb, and PTPN13 proteins to induce apoptosis in HPV16-infected cervical cancer cells. However, there are hardly any reports describing the involvement of SMAR1 in curcumin-induced apoptosis of HPV18-infected cervical adenocarcinoma cells. Here, for the first time to our knowledge, we elucidate the role of SMAR1 as a suppressor of HPV18 E6 that refurbishes the lost apoptotic program of the cells. Mechanistically, curcumin installs a proapoptotic cycle in HPV18-infected cervical adenocarcinoma cells. Curcumin up-regulates SMAR1 to potentiate recruitment of the SMAR1-HDAC1 repressor complex at the LCR and E6 MAR sequences, thereby decreasing histone acetylation at Lys-9 and Lys-18 and leading to reorientation of the chromatin. As a consequence, c-Fos binding at the putative AP-1 sites on the E6 promoter is inhibited. E6 depletion interrupts E6-mediated degradation of p53 and of the lysine acetyltransferase Tip60. Tumor suppressor p53, being stabilized by SMAR1 and acetylated by Tip60, in turn induces SMAR1 to orchestrate the cycle that leads to HPV18-infected cervical adenocarcinoma cell apoptosis via p53-mediated transactivation of proapoptotic genes. In fact, earlier reports have shown that Tip60 also promotes repression of E6 after being rescued from E6-mediated degradation (15). Cumulatively, restoration of SMAR1 by curcumin effectively warrants apoptosis in cervical cancer cells. This hitherto unappreciated but novel function of SMAR1 highlights the potential of this protein in regulating the E6-mediated anti-apoptotic network in HPV18-infected cervical adenocarcinoma.

EXPERIMENTAL PROCEDURES

Cell Culture and Treatments-The cervical cancer cell line HeLa (p53-degraded/HPV18 E6-positive) was obtained from the National Centre for Cell Science (Pune, India). The cells were routinely maintained in complete Dulbecco's modified Eagle's medium at 37 °C in a humidified incubator containing 5% CO₂ (26). Cells were allowed to reach confluence before use. Viable cell numbers were determined by the trypan blue exclusion test (26). Cells were treated with different concentrations of curcumin (Sigma) for different time points to select the optimum dose and time required for cancer cell apoptosis. An equivalent amount of carrier (dimethyl sulfoxide) was added to untreated cells. To inhibit HDAC1 activity, cells were preincubated with the broad-spectrum HDAC inhibitor trichostatin A (0.5 μM; Sigma-Aldrich) for 6 h prior to curcumin treatment. To inhibit proteasome activity, cells were preincubated with MG-132 (10 μM; Sigma).

Flow Cytometry-To assess cell death, cells were stained with propidium iodide and annexin V-FITC (BD Pharmingen) and analyzed on a flow cytometer (FACSCalibur; Becton Dickinson). Electronic compensation of the instrument was done to exclude overlapping of the emission spectra. A total of 10,000 events were acquired for analysis using CellQuest software (Becton Dickinson). Annexin V-positive cells were regarded as apoptotic cells (27).
Co-immunoprecipitation and Immunoblotting-To obtain whole cell lysates, cells were homogenized in lysis buffer (20 mM HEPES, pH 7.5, 10 mM KCl, 1.5 mM MgCl₂, 1 mM Na-EDTA, 1 mM Na-EGTA, and 1 mM DTT) supplemented with protease and phosphatase inhibitor cocktails (28). For direct Western blot analysis, a total of 50 μg of protein was resolved by SDS-PAGE, transferred to a nitrocellulose membrane, and probed with specific antibodies, for example, anti-p53, -Bax, -Puma, -Caspase-3, -Caspase-9, -E6, -c-Fos, -SMAR1, -Tip60, and -p300 antibodies (Santa Cruz Biotechnology, Santa Cruz, CA) and anti-Ser(P)-15-p53 (Cell Signaling Technology, Danvers, MA); thereafter, the immunoblots were visualized by chemiluminescence (GE Healthcare). Equal protein loading was confirmed with anti-α-actin antibodies (Santa Cruz Biotechnology). For the determination of direct interaction between two proteins, a co-immunoprecipitation technique was employed (28). p53-ubiquitin or SMAR1-HDAC1 interaction assays were performed using cell lysates prepared in Nonidet P-40 (1%) lysis buffer containing protease inhibitors. Samples (300 μg of protein from the total lysate) were incubated at 4 °C overnight with anti-p53/SMAR1 antibody and then incubated for 2 h at 4 °C with protein G-Sepharose (Invitrogen). Immunocomplexes were washed free of unbound proteins with cold Tris-buffered saline containing protease inhibitors, and the pelleted beads were boiled for 5 min in SDS-polyacrylamide gel electrophoresis sample buffer. The immunoprecipitated proteins were resolved by SDS-polyacrylamide gel electrophoresis and analyzed by Western blotting for detection of Ub/p53/SMAR1/HDAC1. The input protein used in immunoprecipitation was confirmed by Western blotting with anti-α-actin.

Plasmids, siRNA, and Transfections-The expression construct pBK-CMV-SMAR1-cDNA and the control pcDNA3.0 vector (2 μg/million cells) were introduced into exponentially growing cancer cells using Lipofectamine 2000 (Invitrogen) according to the protocol provided by the manufacturer. Stably expressing clones were isolated by limiting dilution and selection with G418 sulfate (1 mg/ml; Cellgro), and G418-resistant cells were cloned and screened by immunofluorescence or Western blotting with specific antibodies. For endogenous silencing of specific genes, cells were transfected with 300 pmol of Tip60 siRNA/p53 shRNA/SMAR1 shRNA using Lipofectamine 2000 separately for 12 h. The mRNA and protein levels were determined by RT-PCR and Western blotting, respectively.

Electrophoretic Mobility Shift Assay-For EMSA, nuclear extracts from HeLa cells were prepared according to a standard protocol, and 30 μg was used for gel shift assays. Oligonucleotides corresponding to the SMAR1-binding sites 1 and 5 or the AP-1-binding sites were end-labeled with [γ-³²P]ATP using T4 polynucleotide kinase. Probe purification was done using a ProbeQuant G-50 column (Amersham Biosciences). Binding reactions were performed in a total volume of 10 μl containing 10 mM HEPES (pH 7.9), 1 mM dithiothreitol, 50 mM KCl, 2.5 mM MgCl₂, 10% glycerol, 0.5-1 μg of double-stranded poly(dI-dC), 10 μg of BSA, and 1 μg of SMAR1 recombinant protein or 30 μg of nuclear lysate. For cold competition, a 100-fold excess of the unlabeled probe was used. Samples were incubated for 5 min at room temperature prior to addition of the radiolabeled probe. The samples were then incubated for 15 min at room temperature, and the products of the binding reactions were resolved by 5% native polyacrylamide gel electrophoresis.
The gels were dried under vacuum and processed for autoradiography.

Statistical Analysis-The values are shown as the mean ± standard error of the mean, unless otherwise indicated. The data were analyzed and, where appropriate, the significance (p < 0.05) of the differences between mean values was determined by a Student's t test.

RESULTS

SMAR1 Represses HPV18 E6 Oncogene Expression and Restores the Lost Apoptotic Program in HPV18-infected HeLa Cells-Nuclear matrix and matrix attachment regions (MARs) have been implicated in the transcriptional regulation of host as well as viral genes, but their precise role in HPV (human papillomavirus) transcription remains unclear. Recently, the contribution of SMAR1 in regulating transcription of HIV genes has been accredited (21,29), but its effect on HPV18 oncogenes is not yet known. To address the role of SMAR1 in regulation of HPV18 E6, we first deciphered their relationship, if any. Our results depicted that the increase in SMAR1 upon treatment with 25 μM curcumin (in accordance with our earlier reports (27)) was accompanied by down-regulation of E6; the results in Fig. 1A confirmed the transcription efficiency of the experiment. Because HPV18 E6 disrupts the normal apoptotic machinery of noncancerous cells, our next approach was to understand the effect of SMAR1-induced E6 down-regulation in HeLa cells. To this end, up-regulation of SMAR1 by curcumin in a dose-dependent manner in these wild-type p53-expressing, HPV-infected cervical cancer cells resulted in significant cell death at concentrations starting from 25 μM (Fig. 1B, left panel). Because concentrations beyond 25 μM were found to be toxic to normal cells (peripheral blood mononuclear cells) (Fig. 1B, middle panel), we restricted our subsequent experiments to this dose. Next, we performed cell death analyses of HeLa cells using 25 μM curcumin in a time-dependent manner. Because maximum cell death was obtained at 24 h (Fig. 1B, right panel), beyond which no significant change in cell death was observed (data not shown), our subsequent experiments were performed using a 25 μM dose of curcumin for 24 h. In fact, curcumin-treated HeLa cells furnished an increased number of annexin V-positive cells (Fig. 1C) and of DAPI-stained blebbed nuclei per visual field (Fig. 1D), thereby confirming induction of apoptosis in these HPV18-infected cervical carcinoma cells.

SMAR1 Up-regulates p53-SMAR1 Feedback Loop to Suppress E6 and Mediate Apoptosis-Because E6 mediates the accelerated proteasomal degradation of the p53 tumor suppressor (12,26), our next attempt was to check the status of p53 in these HPV18-infected cervical carcinoma cells. Our results indicate that with curcumin-induced up-regulation of SMAR1 (Fig. 1A), endogenous p53 protein expression also increased in a time-dependent manner (Fig. 2A). Interestingly, in SMAR1 cDNA-transfected HeLa cells, E6-mediated p53 ubiquitination was decreased (Fig. 2B). In addition, whereas curcumin increased phosphorylation of p53 at residue Ser-15, the same was decreased in SMAR1-silenced cells, even in the presence of curcumin (Fig. 2C). Because it has been reported that SMAR1 stabilizes p53 at the Ser-15 residue (18), our results validated the role of SMAR1 in p53 stabilization and activation. Being a multifunctional protein, p53 forms molecular complexes with different DNA targets and interacts with a number of cellular proteins, including SMAR1 (30). Interestingly, whereas curcumin treatment led to an increase in SMAR1 at both the protein and mRNA levels in HeLa cells (Fig. 1A), it failed to do so in cells stably transfected with p53 shRNA (Fig. 2D, left and middle panels).
The results shown in the right panel of Fig. 2D confirmed the transcription efficiency. In addition, SMAR1-ablated, E6-expressing HeLa cells significantly resisted curcumin-induced apoptosis (Fig. 2E). These results therefore suggest the existence of a positive interdependence between p53 and SMAR1 in which p53 activates SMAR1 transcription and SMAR1 in turn (i) up-regulates p53 via E6 down-regulation and (ii) stabilizes p53 to ensure its function in curcumin-treated HeLa cells. Culminating the results above, it can be suggested that proper functioning of the p53-SMAR1 loop is indispensable for E6 repression and for reinstalling apoptosis in HPV18-infected cervical cancer cells.

SMAR1 Binds to Conserved MAR Sequence within HPV LCR and E6-Several reports highlight the importance of MARs in viral integration and their role in viral transcription (29), but the mechanism by which they elicit these effects is not yet understood. In the previous sections, we observed SMAR1-mediated suppression of the HPV18 oncogene E6 and restoration of the apoptotic machinery. It is known that the promoter region of E6 is divided into several regions: L1, LCR, origin of replication/promoter, and E6 itself (Fig. 3A) (31). To explore the SMAR1-binding site(s), if any, on the E6 promoter, different sets of overlapping primers spanning the total promoter region were designed for the DNA ChIP experiment (overlapping primer sets 1-9; Fig. 3A). Fig. 3B depicts the recruitment of SMAR1 on the LCR (7110-7299 bp; site 5) and E6 coding region (128-162 bp; site 1) of the HPV18 E6 promoter. We validated those results by EMSA using radiolabeled oligomers from the E6 promoter region containing SMAR1-binding sites 1 and 5. The results shown in Fig. 3C (lane 2 of left and right panels) indicate direct interaction between the E6 promoter region and SMAR1. Furthermore, supershift EMSA using a SMAR1-specific antibody validated the same (Fig. 3C, third lanes of left and right panels). These results not only corroborate the direct recruitment of SMAR1 on the E6 promoter sequence but also identify SMAR1 as the molecule responsible for repression of E6 transcription.

SMAR1 Restrains E6 Expression in HPV18-infected Cervical Adenocarcinoma Cells by Inhibiting the Recruitment of c-Fos to E6 Promoter-We next assessed whether binding of SMAR1 to the E6 promoter leads to inhibition of any other transcriptional activator(s) of E6 because, being a nuclear matrix attachment protein, SMAR1 can modulate the binding of other transcriptional activators (30,32). It is known that, among the various members of the AP-1 family, c-Fos acts as a tumor-promoting factor, up-regulation of which causes cellular transformation (7-9). Moreover, during HPV-infected tumor development, a shift in the composition of AP-1 from Fra-1/c-Jun to c-Fos/c-Jun heterodimers has also been documented (7-9). In fact, various c-Fos target genes are reported to be expressed at higher levels in cervical cancer cells in comparison with normal cervical epithelial cells (33), thereby highlighting the importance of c-Fos in cervical carcinoma. To this end, we employed a bioinformatics motif-finding tool and identified AP-1-binding sites on the E6 promoter.
The results shown in Fig. 3D (top panel) indicate that the probability of a transcription factor-binding matrix was highest in the bp 34-41 and 82-89 regions (Ori/promoter and E6 sequence junction; AP-1-1) and in the bp 7164-7175 region (LCR; AP-1-2) of the E6 promoter (cutoff score >5). Putative AP-1-binding sites, as deduced by in silico exploration, were validated by ChIP analysis using antibodies against the AP-1-binding factors c-Fos and Fra-1 in HeLa cells. It was observed that whereas c-Fos significantly occupied both sites (AP-1-1: 34-41 and 82-89; AP-1-2: 7164-7175) on the E6 promoter (Fig. 3D, middle panel), negligible binding was observed for Fra-1 (Fig. 3D, bottom panel). Previously, our results depicted that SMAR1 binds at the LCR (7110-7299 bp), a portion of which overlaps with an AP-1-binding site (7164-7175 bp). Considering the finding from our in silico analysis that the binding probability of SMAR1 is highest around the c-Fos-binding sites at the LCR (Fig. 3D, top panel), it was logical to hypothesize that in the presence of SMAR1, c-Fos might fail to bind to its specific binding sites (AP-1-1 and AP-1-2) on the HPV18 E6 promoter. Our chromatin immunoprecipitation results indeed demonstrated that in HeLa cells in which SMAR1 was up-regulated by curcumin, SMAR1 was recruited to the LCR and E6 regions of the E6 gene while the recruitment of c-Fos to the AP-1-1 (Fig. 4A, left panel) and AP-1-2 (Fig. 4A, right panel) sites was reduced. However, such a decrease in promoter occupancy was considerably rescued in SMAR1-silenced HeLa cells, where significant c-Fos recruitment was observed (Fig. 4A, both left and right panels). These results were corroborated by EMSA, in which binding of SMAR1 and c-Fos to the radiolabeled probes of the AP-1-binding sites was monitored. The results shown in Fig. 4B (lane 2 of left and right panels) depicted the recruitment of rSMAR1 at both AP-1-binding sites on the E6 promoter. A supershift assay using SMAR1 antibody validated these results (lane 3 of left and right panels). In contrast, in the presence of rSMAR1 (as in lane 2), c-Fos antibody failed to furnish any supershift of the bands (lane 4 of left and right panels), thereby negating the possibility of c-Fos binding to the AP-1-binding sites on the E6 promoter once SMAR1 is bound to the same. All these findings together reinforced that SMAR1 functions as a transcriptional repressor of the HPV18 E6 gene.

SMAR1 Associates with HDAC1 to Repress E6 Transcription-Recent studies have shown that SMAR1 recruits the HDAC1/Sin3A co-repressor complex to various promoters and represses gene expression (17,18) and that the presence of HDACs at the promoter is strongly correlated with transcriptional repression (34). Therefore, we next tested whether both SMAR1 and HDAC1 are co-recruited to the SMAR1-binding sites identified above, i.e., the E6
coding regions (30-162 bp) and the LCR (7110-7299 bp), by verifying HDAC1 occupancy on the HPV18 E6 promoter (Fig. 4C, upper panel).

FIGURE 3. SMAR1 binds to conserved MAR sequence within HPV LCR and E6. A, schematic diagram representing different regions of the E6 promoter (L1, LCR, Ori (origin of replication), and E6 coding region) and the sequential order of the primer sets (sets 1-9) designed to identify SMAR1-binding regions on the HPV18 E6 promoter by ChIP analysis. B, schematic representation of the SMAR1-occupied region on the E6 promoter (upper panel) and ChIP assay for SMAR1 binding on the HPV18 E6 promoter in curcumin-treated HeLa cells; binding site numbers correspond to primer set numbers. Positive bands for the SMAR1-binding sites on the E6 promoter are shown in lanes 3 and 7 (binding sites 1 and 5, lower panel). C, EMSA using the radiolabeled probes for binding sites 1 and 5 along with nuclear extract of HeLa cells. There was significant complex formation between the nuclear extract and the probes (second lanes in both left and right panels). Incubation with SMAR1 antibody induced a supershift of the band (third lanes in both left and right panels). Addition of the cold competitor (fourth lanes in both left and right panels) showed reduced complex formation. D, schematic representation of c-Fos-binding sites adjacent to SMAR1-binding sites on the E6 promoter (top panel). ChIP assays with anti-c-Fos (middle panel) and anti-Fra-1 (bottom panel) were performed on the E6 promoter. Positive bands for the c-Fos-binding sites (AP-1 sites AP-1-1 and AP-1-2) on the E6 promoter are shown in lanes 3, 7, and 11 (binding sites 1, 5, and 9, bottom panel).

A ChIP experiment was performed (using primer sets 1-9) on chromatin fractions pulled down with anti-HDAC1 antibody from HeLa cells in which SMAR1 was up-regulated by curcumin. The results shown in Fig. 4C (lower panel) confirmed the recruitment of HDAC1 to the SMAR1-binding sites on the E6 promoter. Moreover, our co-immunoprecipitation studies validating the direct association of SMAR1 and HDAC1 in curcumin-treated, SMAR1-up-regulated HeLa cells (Fig. 4D, left panel) indicated the possible involvement of HDAC1 in transcriptional repression of the E6 gene by SMAR1. It is acknowledged that transcription factors that are components of the chromatin remodeling complex can affect transcription in two ways: by recruiting repressor complexes, or by modifying the chromatin structure through direct binding. We therefore next assessed the role of SMAR1 in the SMAR1-HDAC1 repressor complex. A ChIP assay of chromatin extracts from SMAR1-shRNA-transfected HeLa cells showed a reduced association of HDAC1 with the LCR and E6 regions even when the transfectants were treated with curcumin (Fig. 4E). These results validate the indispensable role of SMAR1 in promoting association of HDAC1 with the LCR and E6 regions of the HPV18 genome. To further confirm the involvement of HDAC1 in SMAR1-induced repression of E6, HeLa cells were preincubated with the broad-spectrum HDAC inhibitor trichostatin A prior to SMAR1 up-regulation by curcumin. The results revealed that perturbing HDAC1 activity attenuated SMAR1-mediated E6 transcriptional repression, even in the presence of curcumin (Fig. 4F). In fact, although these cells demonstrated SMAR1 recruitment on the E6 promoter (Fig. 4G), they failed to inhibit c-Fos binding to its cognate sites (AP-1-1 and AP-1-2) (Fig. 4H), as assessed by ChIP analysis. Together, these findings conclusively substantiated the involvement of the SMAR1-HDAC1 co-repressor complex in perturbing c-Fos-regulated E6 transcription in HeLa cells.

SMAR1 Binding to HPV18 E6 Promoter Causes Local Chromatin Condensation-It has been reported that significant deacetylations at H3K9 and H3K18 are specifically regulated by HDAC1 (35,36). Therefore, the effect of SMAR1-HDAC1 co-repressor complex recruitment on chromatin condensation was verified by analyzing the histone modifications at the c-Fos-binding regions on the E6 promoter. Our results indicated significant acetylation of H3K9
(Fig. 5A, upper panel) and of H3K18 (Fig. 5A, lower panel) on the E6 promoter in untreated HeLa cells. However, the same was significantly reduced under the curcumin-induced, SMAR1-up-regulated condition (Fig. 5B, upper and lower panels). In fact, the modulation in chromatin observed above was rescued in SMAR1-shRNA-transfected cells, as manifested by increased histone acetylation at H3K9 (Fig. 5B, upper panel) and H3K18 (Fig. 5B, lower panel), which even curcumin treatment failed to reverse (Fig. 5B). Increased acetylation was also observed when HeLa cells, in which SMAR1 was up-regulated by curcumin, were pre-exposed to the broad-spectrum HDAC inhibitor trichostatin A (Fig. 5B). All these results together confirm that the SMAR1-HDAC1 repressor complex binds to the LCR and E6 coding region and deacetylates histones to repress c-Fos-mediated E6 transcription. SMAR1, therefore, plays a major role in modulating chromatin structure at the HPV18 E6 promoter.

Inhibition of E6 Reinstalls Apoptotic Program in HPV18-infected Cervical Carcinoma Cells in p53-Tip60-dependent Manner-We next sought to identify the ultimate effect of SMAR1-induced E6 down-regulation in HeLa cells. For this purpose, we explored the possibility of p53-mediated apoptosis in cervical adenocarcinoma cells, because curcumin-induced SMAR1 up-regulation resulted in E6 down-regulation and subsequent restoration of p53. Earlier reports identified curcumin as an inhibitor of the acetyltransferase CBP/p300, which is a co-activator of the p53 apoptotic machinery (37). Fig. 5C, furnishing similar results, not only ruled out the involvement of CBP/p300 in curcumin-mediated, p53-dependent apoptosis of HeLa cells but also indicated the involvement of other acetyltransferase proteins. Previous reports (14,38) have shown that p53 acts as a substrate for the proapoptotic acetyltransferase Tip60, which catalyzes acetylation of lysine 120 in the DNA-binding domain of p53. Importantly, Lys-120 acetylation is crucial for p53 to transactivate proapoptotic genes, e.g., PUMA, BAX, etc. (38,39). However, Tip60 has been shown to be down-regulated in HPV18-infected cervical adenocarcinoma cells, in which HPV18 E6 degrades it in a proteasome-dependent pathway (15). Our results depicted a significant restoration and increase in Tip60 expression in HeLa cells after curcumin treatment in a time-dependent manner (Fig. 5D). Our next attempt, to confirm the involvement of SMAR1 in the restoration of Tip60 observed above, revealed down-regulation of this lysine acetyltransferase upon SMAR1 ablation (Fig. 5E). Cumulatively, down-regulation of E6 by SMAR1 might have resulted in Tip60 accumulation in HPV18-infected cervical cancer cells. Finally, addition of the proteasome blocker MG-132 restored the expression of Tip60 (Fig. 5F), thereby not only validating the role of E6 in degradation of Tip60 but also identifying SMAR1-induced E6 down-regulation as the reason behind the curcumin-induced up-regulation of Tip60 protein. In the next experiment, Tip60 ablation by siRNA decreased endogenous Lys-120 acetylation of p53 (Fig. 5G), which curcumin treatment failed to restore. In line with these results, whereas curcumin treatment increased Puma and Bax at both the mRNA and protein levels, Tip60 silencing abrogated both in HeLa cells (Fig. 5H). Finally, downstream loss of mitochondrial transmembrane potential (Fig. 5I, left panel) and activation of caspases-9 and -3 (Fig. 5I, right panel) were observed. These results implicate the mitochondrial pathway of apoptosis in these SMAR1-up-regulated HeLa cells.
SMAR1-p53-Tip60 Network Ensures the Fine Tuning of E6 Abrogation and Apoptosis-To further validate that SMAR1-mediated E6 down-regulation is required for restoring p53 protein levels, HPV18 E6 cDNA was ectopically expressed in a battery of HPV-DNA-negative cancer cells (MCF-7, HCT-116, A549, and H460). The results depicted in Fig. 5I demonstrated significant E6 expression with low levels of p53 and SMAR1 in these transfectants, whereas curcumin effectively reversed the situation (Fig. 5J). These results validated that E6 suppression is indeed necessary to prevent p53 protein degradation. All these results together underscore the role of SMAR1 in down-regulating E6 and relieving p53 and Tip60 from E6-mediated degradation. In turn, p53 and Tip60 activate the downstream apoptotic machinery. SMAR1, therefore, might be acting as a double-edged sword by (i) suppressing E6 through the SMAR1-HDAC1 repressor complex and (ii) restoring the long-lost apoptotic program through the p53-Tip60 interplay.

DISCUSSION

The etiology, pathogenesis, and prophylaxis of poorly differentiated cervical adenocarcinoma exclusively expressing HPV18 oncogenes are poorly understood despite its prevalence worldwide. Although HPV types 16 and 18 remain the most common in cervical lesions, causing 60-80% of all cervical cancers, it is known that HPV18 behaves more aggressively than HPV16, and the transcriptional regulatory regions of HPV16 and HPV18, upstream of the E6 and E7 genes, are the major determinants that discriminate between the biological activities of the respective viruses (40). Because its capacity for transcriptional activity is higher, HPV18 is more aggressive in nature; a detailed study of HPV18 is therefore essential for better screening and treatment of women progressing to higher-grade lesions or invasive carcinoma. It is acknowledged that the prominent role of the HPV oncogene E6 is to inhibit p53 function, thus impairing the cell cycle or preventing the cells from entering the apoptotic pathway in response to DNA damage (41). Therefore, HPV E6 appears to be a potential therapeutic target for regression of cervical cancer. In the present study, using the natural plant polyphenol curcumin, a potent anticancer agent (22,23,25), we have demonstrated the restoration of the apoptotic program in HPV18-infected cervical adenocarcinoma cells. Curcumin-mediated apoptosis of cervical adenocarcinoma cells relies on its ability to up-regulate SMAR1, which in turn causes transcriptional repression of HPV18 E6. In general, SMAR1 ensures the recruitment of an HDAC1-dependent repressor complex at the LCR and E6 coding regions of the E6 promoter, which deacetylates chromatin histones to restrict binding of the transcriptional activator c-Fos to its putative AP-1-binding sites. As a result, E6 transcription is repressed, thereby restoring p53. On the other hand, E6 depletion stalls degradation of the acetyltransferase Tip60, reinstalling the p53-mediated apoptotic program in HPV18-infected tumors. Cumulatively, we establish a critical role of curcumin-induced SMAR1 in repressing the viral oncogene E6 and thereby inducing apoptosis in HPV18-infected cervical adenocarcinoma. Because E6 suppression is dependent on SMAR1, loss of SMAR1 in these HPV18-infected cells is possibly linked with the up-regulated levels of E6 in executing cervical tumor progression.
We observed that curcumin re-establishes the p53-SMAR1 positive feedback tumor suppressor loop in these cervical adenocarcinoma cells. On one hand, curcumin-induced SMAR1 depletes E6, thereby rescuing p53 from E6-mediated degradation; on the other hand, it stabilizes and activates p53 by phosphorylation at Ser-15, as reported previously (17,18). Activated p53 in turn augments SMAR1 transcriptionally. This p53-SMAR1 positive feedback loop therefore helps maintain SMAR1 expression continually, thereby leading to repression of E6 and p53-mediated apoptosis of HPV-infected cervical adenocarcinoma cells.

To understand the complete mechanism of SMAR1-induced repression of E6 transcription, we considered the possible role of the transcription factor AP-1, because it transcriptionally activates both of the viral proto-oncogenes E6 and E7 (8,9). The present study illustrates that SMAR1 escalation by curcumin treatment or SMAR1-cDNA transfection down-regulated E6 by inhibiting c-Fos recruitment to the AP-1 sites on the E6 promoter. A further search for the detailed mechanism revealed the presence of overlapping SMAR1- and c-Fos-binding sites on the E6 promoter, as a result of which curcumin-induced SMAR1-HDAC1 complex recruitment to the E6 promoter hindered the binding of c-Fos to its AP-1-binding sites, thereby causing transcriptional repression of E6. In addition, SMAR1-mediated recruitment of HDAC1 to the E6 promoter led to the deacetylation of local histones. The resultant change in chromatin structure might have hampered any further association of c-Fos with its specific binding sites, thereby enforcing transcriptional repression of HPV18 E6. Our results verified not only the association of SMAR1 and HDAC1 but also the binding of HDAC1 exactly at the SMAR1-binding sites on the E6 promoter. It was also perceived that SMAR1-HDAC1 recruitment to these SMAR-binding sites modulates the chromatin structure to finally hinder the access of transcription factors like c-Fos to the E6 promoter. In keeping with previous reports (42), our results revealed curcumin-mediated down-regulation of the transcriptional co-activator p300, which is required for both c-Fos- and p53-dependent transcription (9,43). These results not only ruled out the involvement of CBP/p300 in the p53-dependent apoptosis pathways of HeLa cells, in which curcumin up-regulated SMAR1 to ensure p53 functionality, but also called for an explanation of the opposing effects of curcumin on c-Fos and p53 functions. At this juncture, our search for other probable acetyltransferases revealed an increment in endogenous expression of the tumor suppressor acetyltransferase Tip60 in the HPV18-infected cervical carcinoma cells under curcumin-treated conditions. It has been reported that whereas Tip60 promotes the proapoptotic Lys-120 acetylation of p53, thereby executing its anti-tumor function (14), it is degraded by the proto-oncogene E6 in cervical adenocarcinoma cells (15). In our experimental sets, curcumin-induced repression of E6 via SMAR1 rescues Tip60 from E6-mediated degradation and promotes acetylation and activation of p53 to initiate downstream apoptotic pathways. These results suggest the presence of a synergistic SMAR1-p53-Tip60 network in the repression of E6 and the subsequent apoptosis of cervical adenocarcinoma.
SMAR1 therefore appears to exert transcriptional repression of the HPV promoter through a bimodal mechanism: (i) SMAR1 recruits HDAC1 to its binding sites within the LCR and E6 coding region of the E6 promoter to coordinate histone deacetylation and condensation of chromatin, and (ii) such chromatin modulation restricts binding of c-Fos to its putative AP-1-binding sites on the HPV18 E6 promoter, because the binding probability of SMAR1 is highest around the c-Fos-binding sites. These bimodal actions of SMAR1 make it a strong negative regulator of HPV18 E6 transcription. E6 depletion, in turn, leads to restoration of p53 and Tip60, which act together to reactivate the apoptotic pathways in these cervical adenocarcinoma cells and eventually lead to apoptosis (Fig. 6). The studies presented here strongly suggest the p53-SMAR1 positive feedback loop as a likely therapeutic target that may inhibit viral transcription as well as the aggressiveness of HPV18-infected cervical adenocarcinoma. However, further studies are needed to determine the complete cellular network that coordinates SMAR1-induced transcriptional repression of the HPV oncogenic and malignancy networks.
Detection of Azo Dyes in Curry Powder Using a 1064-nm Dispersive Point-Scan Raman System

Curry powder is extensively used in Southeast Asian dishes. It has been subject to adulteration by azo dyes. This study used a newly developed 1064 nm dispersive point-scan Raman system for detection of metanil yellow and Sudan-I contamination in curry powder. Curry powder was mixed with metanil yellow and (separately) with Sudan-I at concentration levels of 1%, 3%, 5%, 7%, and 10% (w/w). Each sample was packed into a nickel-plated sample container (25 mm × 25 mm × 1 mm). One Raman spectral image of each sample was acquired across the 25 mm × 25 mm surface area. An intensity threshold was applied to the spectral images of the Sudan-I mixtures (at 1593 cm−1) and the metanil yellow mixtures (at 1147 cm−1) to obtain binary detection images. The results show that the number of detected adulterant pixels is linearly correlated with the sample concentration (R² = 0.99). The Raman system was further used to obtain a Raman spectral image of a curry powder sample mixed together with Sudan-I and metanil yellow, with each contaminant at an equal concentration of 5% (w/w). The multi-component spectra of the mixture sample were decomposed using self-modeling mixture analysis (SMA) to extract pure component spectra, which were then identified as matching those of Sudan-I and metanil yellow using spectral information divergence (SID) values. The results show that the 1064 nm dispersive Raman system is a potential tool for rapid and nondestructive detection of multiple chemical contaminants in a complex food matrix.

Introduction

Food spices are often used for food coloring and flavor. Curry powder is extensively used for food seasoning in Southeast Asian dishes. It is a blend of turmeric, coriander, cumin, cardamom, paprika, and other spices. Although these spices are free of economically motivated chemical contamination in their raw form, their powder form is often reported to be contaminated with chemicals for greater economic benefit. Instances of economically motivated adulteration of turmeric and paprika, the primary ingredients in curry powder, by the metanil yellow and Sudan-I color dyes have increased the risk of chemical contamination in curry powder [1-3].

Metanil yellow (C₁₈H₁₄N₃NaO₃S) and Sudan-I (C₁₆H₁₂N₂O) are azo compounds. There are over 3000 azo compounds, accounting for 65% of the commercial dye market [4]. Azo dyes, which are used as synthetic organic colorants, are characterized by chromophoric azo groups (-N=N-) [4]. Azo dyes reduce to form aromatic amines under anaerobic conditions and pose a carcinogenic risk to human health. Metanil yellow is toxicologically classified in category CII by the Food and Agriculture Organization of the United Nations [5]. Due to its color appearance, similar to that of turmeric, it is often mixed with turmeric [6]. Long-term consumption of metanil yellow causes neurotoxicity [7], hepatocellular carcinoma [8], tumor development [9], deleterious effects on gastric mucin [10], and lymphocytic leukemia [11]. Sudan-I is a non-ionic, fat-soluble azo compound [12] used as a dye to color waxes, oils, petrol, plastics, printing inks, shoes, and polishes. Sudan-I is carcinogenic to humans [13] and has been classified as a category III carcinogen by the International Agency for Research on Cancer [14]. Due to its low cost, wide availability, and color appearance similar to spice powders like chili and paprika, Sudan-I is illegally used as a food color additive [12,15,16].
Several studies have reported the detection of chemical contamination in spice powders. Conventional analytical methods such as high-performance liquid chromatography (HPLC) [17,18], polymerase chain reaction [19], and high-performance capillary electrophoresis [20] are used for detection of chemical contaminants in chili and turmeric. Color dye contamination in chili powder and paste was determined using HPLC-electrospray ionization tandem mass spectrometry [21]. Dixit et al. (2008) used a two-directional HPLC method for detection of curcumin, metanil yellow, and Sudan dye in turmeric, chili, and curry powder [22]. Although these methods have high accuracy, factors including high operational costs, the requirement for skilled personnel, complicated sample preparation procedures, solvent disposal, and protracted sampling times limit their field application for rapid and non-destructive food safety and quality evaluation.

Fourier transform Raman (FT-Raman) spectroscopy using a 1064 nm laser source and commercial Raman spectroscopic systems (using 514 nm and 785 nm laser sources), coupled with a surface-enhancement method, have been used for detecting chemical contamination in food spices [23]. Dhakal et al. (2016) used FT-Raman for detection of metanil yellow contamination in turmeric [1]. Turmeric and metanil yellow were mixed in de-ionized water to obtain a homogeneously mixed sample. The liquid sample was placed in an NMR tube, which was held and adjusted in the sample compartment to focus the laser light on the sample for spectral measurement. Cheung et al. (2010) used a surface-enhanced Raman spectroscopic method to detect Sudan-I contamination in chili [24]. In these systems, the sample is adjusted to focus the incident light on the sample surface for spectral measurement from selected spots. The need for sample adjustment for each subsequent measurement prevents these systems from measuring a large surface area of the sample. Being spot measurements, these techniques are useful for evaluation of homogeneous samples only. Detection of chemical contaminants in food powders requires spectral measurement of the entire surface area of the sample.

The hyperspectral imaging technique is used for collection of spectral images of large samples. A line-scan 785 nm Raman chemical imaging system was developed for evaluation of food safety and quality [25]. The system has been used for authentication of food powders by detection of chemical contaminants such as azodicarbonamide in flour [25]; melamine, benzoyl peroxide, and maleic anhydride in skim milk powder, wheat flour, and corn starch [26]; and urea in skim milk powder [27]. However, pigmented food samples such as curry powder, turmeric, and paprika, and chemical contaminants such as metanil yellow and Sudan dye, emit strong fluorescence, so the 785 nm Raman chemical imaging system cannot be used to measure these samples. It is a challenge for currently available Raman systems to authenticate curry powder contaminated with azo dyes such as Sudan-I and metanil yellow.
This study used a newly developed 1064 nm dispersive Raman system [28] to measure curry powder mixed with Sudan-I and (separately) with metanil yellow at different concentrations for detection of Sudan-I and metanil yellow contamination in curry powder. This study further demonstrates the use of the 1064 nm Raman system for simultaneous detection of multiple contaminants (Sudan-I and metanil yellow) in curry powder. Self-modeling mixture analysis (SMA) was used to decompose the multi-component spectra of the sample mixture and extract the pure component spectra of Sudan-I and metanil yellow. The primary objectives of this study are to: (1) obtain Raman spectral images of curry powder-metanil yellow, curry powder-Sudan I, and curry powder-metanil yellow-Sudan I samples prepared at different concentrations using the 1064 nm Raman system; (2) identify the vibrational modes that are effective markers for chemical structural features unique to metanil yellow and Sudan-I and discrete from the curry powder matrix vibrational modes; (3) detect Sudan-I and metanil yellow contamination in curry powder at different concentrations; and (4) use self-modeling mixture analysis to resolve the multi-component spectra of the curry powder-metanil yellow-Sudan I mixture sample into pure component spectra and scores for simultaneous detection of Sudan-I and metanil yellow.

Point-Scan Raman System

The Raman spectrograph uses a high-throughput volume phase grating (VPG) (BaySpec, Inc., San Jose, CA, USA) optimized for 1064 nm laser excitation [28]. The scattered Raman signal from the sample is directed to the VPG through a concave mirror in the spectrograph. The VPG diffracts the incoming light into different angular output paths. The dispersed light is reflected to a 512-pixel indium gallium arsenide (InGaAs) detector (Nunavut, BaySpec, Inc., San Jose, CA, USA). A USB cable connects the detector directly to the computer for detector control and data transfer. The detector is thermoelectrically cooled to −55 °C during spectral acquisition to minimize dark current.

The sample is held and moved in two perpendicular directions using a two-axis motorized positioning table (MAXY4009W1-S4, Velmex, Bloomfield, NY, USA). The sample movement is controlled by a stepper motor controller. The sample is moved along the X and Y axes, below the fixed-position Raman probe, collecting Raman spectra of the sample by the point-scan method. The Raman spectra are accumulated to obtain a hyperspectral Raman image of the sample, which can be analyzed both spectrally and spatially.

Interface software was developed in-house for parameter setup and data transfer. The software is used to control the operational parameters of the system, such as initialization, adjusting exposure time, spectral acquisition and display, sample movement, and data transfer and storage. The interface software was developed using the software development kits (SDKs) of the InGaAs detector and the positioning table. The hyperspectral data acquired by the interface software are stored in band-interleaved-by-pixel (BIP) format, which can be analyzed by ENVI (ITT Visual Information Solutions, Boulder, CO, USA) and Matlab (MathWorks, Natick, MA, USA).
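As an illustration of how the stored cube can be consumed downstream, the sketch below reads a raw BIP file into a (lines, samples, bands) NumPy array. The file name, data type, and the 100 × 100 × 512 geometry (matching the images described later) are assumptions made for the example rather than properties guaranteed by the acquisition software; in BIP order the band index varies fastest, so a plain reshape recovers the cube.

```python
import numpy as np

# Assumed cube geometry: 100 x 100 spatial pixels, 512 spectral bands, stored
# band-interleaved-by-pixel (BIP): all bands of pixel 1, then pixel 2, ...
LINES, SAMPLES, BANDS = 100, 100, 512

def read_bip(path, dtype=np.float32):
    """Read a raw BIP hyperspectral cube into a (lines, samples, bands) array."""
    raw = np.fromfile(path, dtype=dtype)
    assert raw.size == LINES * SAMPLES * BANDS, "file size does not match geometry"
    # In BIP order the band index varies fastest, so a plain reshape suffices.
    return raw.reshape(LINES, SAMPLES, BANDS)

# Example usage (the file name is hypothetical):
# cube = read_bip("curry_sudan1_5pct.bip")
# spectrum = cube[50, 50, :]      # full Raman spectrum at one pixel
# band_img = cube[:, :, 300]      # single-band image at one wavenumber index
```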
Spectral calibration of the Raman system was performed using polystyrene and naphthalene. After calibration, the Raman system covered the wavenumber range of 142 cm⁻¹ to 1820 cm⁻¹, with a spectral resolution of 12 cm⁻¹ at full width at half maximum (FWHM). A standard resolution test chart (Edmund Optics Inc., Barrington, NJ, USA) was used to evaluate the spatial resolution of the system, which was 0.1 mm.

Sample Preparation

Sudan-I (95% dye, Aldrich, Carson City, NV, USA), metanil yellow (70% dye, Aldrich, Carson City, NV, USA), and organic curry powder (Frontier Natural Products CO-OP, Norway, IA, USA) were used to prepare mixture samples. Curry powder-Sudan I and curry powder-metanil yellow mixtures were prepared separately at 1%, 3%, 5%, 7%, and 10% (w/w) concentrations by mixing the powders in a vortex mixer (Scientific Industries Inc., Bohemia, NY, USA) for 10 min. For simultaneous detection of the two azo compounds in curry powder, Sudan-I and metanil yellow (5% each, w/w) were mixed together with the curry powder in the vortex mixer for 10 min. Each sample was packed in a shallow nickel-plated sample container (25 mm × 25 mm × 1 mm), and the surface was leveled flush with the top edge of the container. An amount of 0.27 g of mixture sample at each concentration level completely filled the volume of the sample container for Raman spectral measurement.

Acquisition of Spectral Image

Each sample was held immobile on the two-axis moving platform. An exposure time of 1 s and a laser power of 120 mW were used to collect the spectral signal. Figure 1 shows the acquisition of the spectral image by the point-scan method, which is a row-wise scan. After a Raman spectrum is collected from one point, the sample is moved horizontally (X-axis) in a 0.25 mm increment to the next point. After the Raman spectral measurement of the first row is complete, the sample is moved vertically (Y-axis) to the next row in a 0.25 mm increment. The process was repeated to collect Raman spectra across the 25 mm × 25 mm surface area of the sample. The Raman spectra were accumulated spatially to construct a hyperspectral image of the sample. The final sample image was a 100 × 100 × 512 hyperspectral cube with 10,000 spatial pixels, each with 512 spectral wavenumbers. One hyperspectral cube was constructed for each mixture sample. Prior to acquisition of the Raman spectra, a dark current spectrum was acquired with the laser off and a cap covering the probe. The dark current spectrum was subtracted from the Raman spectrum at each pixel during spectral measurement.
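The scan itself reduces to two nested stage moves plus an exposure. The sketch below illustrates the row-wise point-scan and per-pixel dark current subtraction described above; `move_stage` and `acquire_spectrum` are hypothetical stand-ins for the vendor SDK calls of the positioning table and InGaAs detector, stubbed here so the sketch runs.

```python
import numpy as np

STEP_MM, N_ROWS, N_COLS, N_BANDS = 0.25, 100, 100, 512

def move_stage(x_mm, y_mm):
    pass  # stub: command the two-axis table to position (x, y)

def acquire_spectrum():
    return np.random.rand(N_BANDS)  # stub: one 1 s exposure at 120 mW

def scan_sample(dark):
    """Assemble a (rows, cols, bands) cube, subtracting the dark current
    spectrum (laser off, probe capped) from every pixel."""
    cube = np.empty((N_ROWS, N_COLS, N_BANDS))
    for r in range(N_ROWS):            # row-wise scan, 0.25 mm increments
        for c in range(N_COLS):
            move_stage(c * STEP_MM, r * STEP_MM)
            cube[r, c, :] = acquire_spectrum() - dark
    return cube

dark_current = acquire_spectrum()      # in practice: laser off, cap on probe
hypercube = scan_sample(dark_current)  # 100 x 100 x 512 cube per sample
```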
Spectral Image Analysis

For identification of Sudan-I and metanil yellow mixed separately with curry powder, the hyperspectral images were analyzed in ENVI (4.5, ITT Visual Information Solutions, Boulder, CO, USA). The hyperspectral images of the curry powder-Sudan I samples across the full concentration range were converted into single-band images using the spectral peak of Sudan-I; similarly, the curry powder-metanil yellow hyperspectral images were converted into single-band images using the spectral peak of metanil yellow. In the single-band images, the spectral intensities of Sudan-I and metanil yellow were high compared with that of curry powder. An intensity threshold was therefore set to obtain binary detection images, converting all pixels with intensities below the threshold into background (curry powder) pixels and all pixels with intensities above the threshold into white pixels representing the chemical contaminant in the mixture.

The hyperspectral Raman image of the curry powder-Sudan I-metanil yellow mixture sample contained mixed spectral information from each component. The three-dimensional hyperspectral Raman image (100 × 100 × 512) of the sample was reshaped into a two-dimensional spectral matrix (10,000 × 512) using Matlab (R2013a, MathWorks, Natick, MA, USA). The mixed spectral signal from the mixture sample was decomposed into pure component spectra of the individual components using the self-modeling mixture analysis (SMA) method. SMA requires pure variables in the mixed spectral data: Raman wavenumbers at which only one component contributes significant signal intensity. SMA first determines the pure variables and then resolves the mixed spectral data into pure component spectra (and contributions) by an alternating least squares method [29-31]. Extracting pure component spectra by SMA requires a series of purity spectra, for which the average and standard deviation spectra of the data being analyzed are calculated. A correction factor is added to the average spectrum to reduce the effect of noise. The first purity spectrum is obtained by dividing the standard deviation spectrum by the average spectrum, and the first pure variable is the Raman wavenumber with the maximum intensity in the first purity spectrum. The second purity spectrum is obtained by multiplying the first purity spectrum by a determinant-based weight function; the weight function is obtained by calculating the correlation matrix of the mixed data matrix being analyzed. The Raman wavenumber with the maximum intensity in the second purity spectrum is the second pure variable. The process of obtaining a purity spectrum and a pure variable is repeated until the purity spectra no longer exhibit spectral features [29,30,32]. The purity function in the PLS_Toolbox (Eigenvector Research, Inc., Wenatchee, WA, USA) was used to decompose the 10,000 × 512 mixed spectral matrix into pure component spectra and corresponding score vectors. The score vectors (10,000 × 1) of each component were converted into two-dimensional contribution images (100 × 100) to match the spatial dimension of the hyperspectral Raman image of the sample.
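To make the purity computation concrete, the following is a minimal SIMPLISMA-style sketch of the procedure just described, not a reproduction of the PLS_Toolbox purity function; the offset value, the data scaling, the fixed iteration count, and the fixed number of components are simplifying assumptions.

```python
import numpy as np

def simplisma(D, n_pure, offset=0.03):
    """Minimal self-modeling mixture analysis sketch.

    D: (n_pixels, n_bands) mixed spectra. Returns pure-variable indices,
    contributions S (n_pixels, n_pure), and spectra A (n_pure, n_bands).
    """
    n_pixels, n_bands = D.shape
    mean, std = D.mean(axis=0), D.std(axis=0)
    alpha = offset * mean.max()        # noise correction factor
    purity = std / (mean + alpha)      # first purity spectrum

    # Correlation-around-the-origin matrix of length-scaled data, used
    # for the determinant-based weighting of later purity spectra.
    scale = np.sqrt(mean**2 + (std + alpha)**2)
    C = (D / scale).T @ (D / scale) / n_pixels

    pure_vars = []
    for _ in range(n_pure):
        # Determinant weight is 0 at already-selected (collinear) variables.
        weights = np.array([
            np.linalg.det(C[np.ix_([j] + pure_vars, [j] + pure_vars)])
            for j in range(n_bands)])
        pure_vars.append(int(np.argmax(purity * weights)))

    # Alternating least squares refinement with non-negativity clipping.
    S = D[:, pure_vars].copy()
    for _ in range(20):
        A = np.clip(np.linalg.lstsq(S, D, rcond=None)[0], 0, None)
        S = np.clip(np.linalg.lstsq(A.T, D.T, rcond=None)[0].T, 0, None)
    return pure_vars, S, A

pure_vars, S, A = simplisma(np.random.rand(2000, 128), n_pure=3)  # demo data
```

Real implementations also rescale the data around the selected pure variables and stop when the purity spectra lose structure; the fixed component count here mirrors the overestimate-then-inspect strategy used later for the ternary mixture.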
After obtaining the pure component spectra of the two chemical contaminants, spectral information divergence (SID) was used to match each spectrum to its corresponding component. SID measures the dissimilarity between two spectra by relative entropy [33]: two similar spectra with little discrepancy have a low SID value. Each pure component spectrum was compared with the reference spectra of the components to obtain SID values, and the spectrum with the lowest SID value was assigned as the spectrum matching that component.

The contribution images of each component were imported into ENVI for further analysis. A pixel intensity threshold was set to obtain a binary image of each component from the contribution images; a pixel value greater than the threshold represented a contaminant pixel. For visualization, the binary images of Sudan-I and metanil yellow were fused together, and simple image processing was performed to color code the detected Sudan-I and metanil yellow pixels [34].

Spectral Characteristics of Samples

The chemical structures of synthetic and natural yellow compounds are dissimilar, so the vibrational modes resulting from their chemical structures differ. Figure 2 shows the Raman spectra of metanil yellow, Sudan-I, and curry powder in the range of 400 cm⁻¹ to 1800 cm⁻¹, and Table 1 shows the assignment of the Raman spectral bands. Figure 3 shows the chemical structures of metanil yellow and Sudan-I. [Table 1 fragment; recoverable assignments: aromatic ring stretching near 1340 cm⁻¹ (asym); ring breathing at 995 cm⁻¹ (ring II); δ(C-N=N) out-of-plane bending at 984 cm⁻¹.]

Both metanil yellow (Figure 3a) and Sudan-I (Figure 3b) contain the same N=N molecular moiety and include seven conjugated C=C bonds in series, which explains their similar yellow color. The three aromatic C=C double bonds in ring I of metanil yellow are conjugated only with each other and thus do not shift its visible spectrum from yellow. The two additional double bonds in the naphthalene group of Sudan-I are essentially in parallel with the three conjugated ring sites and therefore also do not shift its wavelength from yellow. Curry containing turmeric has a chemical structure that includes only five conjugated double bonds and a keto-enol moiety; its color is also yellow due to the added conjugation from the phenolic and methoxy sites on its rings. Thus, the overlapping spectral bands in the visible wavelength range cannot be used to differentiate between the natural color in spices and synthetic dyes, nor even to differentiate between the synthetic dye structures. Although different natural and synthetic dyes each have an individual absorptivity coefficient and an individual visible spectral fingerprint, in practice the mixture cannot be accurately deconstructed into its constituent components without knowing a priori precisely which chemicals and which structural analogs in the mixture contribute to the visible yellow signal. Finally, since the visible spectra of many natural and synthetic dyes are routinely pH dependent, unless every reference spectrum of all the compounds in a mixture and in the sample is collected at the same pH, any calculations based on the reference standard data will be uncertain and/or invalid. The range of 400 cm⁻¹ to 1800 cm⁻¹ contains vibrational modes that discern the structural components of the dyes of interest. The Raman spectrum in this region is a fingerprint of the dye, coded with discrete structural information about the dye structure.
Interpretation of this spectral "code" is essential to minimize false-positive results, which can arise from spectral lines close to, but not identical to, the marker frequencies. Interpretation is also critical for assigning marker frequencies that can detect structural analogs, including potential metabolites of the compounds of interest.

Metanil yellow and Sudan-I are structural analogs of each other. Each has three aromatic rings and an azo group (-N=N-) between two of them. Rings III and II, plus the azo group between them, are co-planar at the molecular level, which produces the yellow chromophore in both compounds. Three vibrational modes relating to the azo group are similar in Sudan-I and metanil yellow: 1593 cm⁻¹ and 1597 cm⁻¹ (N=N stretching), 1448 cm⁻¹ and 1452 cm⁻¹ (N=N stretching + H-C bending in H-C=C-N=), and 1169 cm⁻¹ and 1147 cm⁻¹ (C-N= stretching + H-C bending in H-C=C-N=) [34,35]. The H-C=C-N= bending components in the vibrational modes near 1450 cm⁻¹ correspond to H-C=C-N= sites on ring III and are similar for both dyes. The H-C=C-N= bending component of the 1169 cm⁻¹ vibrational mode in Sudan-I is present in the IR spectrum of metanil yellow (1171 cm⁻¹). Thus, 1169 cm⁻¹ and 1147 cm⁻¹ are assignable as sym and asym vibrational modes of the same molecular site on ring III, that is, H-C6=C1. The absence of a Raman peak near 1169 cm⁻¹ in the metanil yellow spectrum could be due to the moiety H-C2=C1- on ring III being predominantly symmetrical.

The metanil yellow peak at 995 cm⁻¹, due to ring breathing in ring II, also confirms the presence of metanil yellow. The Sudan-I peak at 1593 cm⁻¹, due to N=N stretching, is the most definitive for its identification. The 1340 cm⁻¹ and 1387 cm⁻¹ Sudan-I peaks are due to aromatic ring stretching on the naphthalene ring and are quite different from the metanil yellow and curry vibrational modes. Curry powder formulations can contain a significant amount of the compound linalool, which has a pleasant smell and can be a major component of some essential oils. However, sets of vibrational modes corresponding to this compound were not apparent in the Raman sample data.
Detection of Sudan-I in Curry Powder

Figure 4a shows the raw hyperspectral Raman images of the curry powder-Sudan I mixture samples at 1593 cm⁻¹ across the five concentration levels. At 1593 cm⁻¹, the peak intensity of Sudan-I is higher than that of curry powder, and this difference was used to create binary detection images identifying Sudan-I pixels. An initial spectral intensity threshold was set, and all pixels with spectral intensity below the threshold were converted to background pixels (curry powder); all pixels above the threshold were converted to white pixels representing Sudan-I. If the threshold was set too high, some Sudan-I pixels fell below it and were wrongly converted into background pixels (false negatives); if it was set too low, some curry powder pixels were misinterpreted as Sudan-I pixels (false positives). To avoid false positives and false negatives, a pixel-by-pixel evaluation was performed, checking each pixel against its corresponding spectrum to select the appropriate intensity threshold. Based on this evaluation, a final threshold of 550 was set to obtain the binary detection images: all pixels with intensities below 550 were converted to background pixels, and the remaining pixels represent Sudan-I. Figure 4b shows the 1593 cm⁻¹ binary detection images of the samples at the five concentrations. The white pixels are scattered throughout the sample surface. The number of white pixels is low in the 1% Sudan-I image and progressively increases in the 3%, 5%, 7%, and 10% images; this gradual increase indicates that more Sudan-I particles were detected at increasing concentration. A total of 63, 153, 236, 344, and 515 Sudan-I pixels were detected in the 1593 cm⁻¹ binary detection images of the 1%, 3%, 5%, 7%, and 10% concentration samples, respectively. Several lumped pixels can also be observed in Figure 4b, increasing from a low frequency of occurrence at 1% concentration to a higher frequency at 10%. This increase arises because the sample surface area and volume were held constant (25 mm × 25 mm × 1 mm) for all samples while the concentration increased from 1% to 10%, resulting in overlapping Sudan-I particles at different layers within the sample depth. Figure 5 shows the Raman spectra of the detected Sudan-I pixels for the 1%, 3%, 5%, 7%, and 10% concentration samples; the number of spectra for each sample corresponds to the total number of detected Sudan-I pixels in that sample. The progressive increase in detected Sudan-I pixels (Figure 4b) indicates that the percentage of detected pixels and the sample concentration are correlated. The 515 pixels detected in the 10% sample correspond to 5.15% of the total acquired pixels; similarly, the 63 pixels detected in the 1% sample correspond to 0.63%. The percentage of detected pixels is linearly correlated with the Sudan-I concentration in the samples, with a correlation coefficient of 0.99 (Figure 6).
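The thresholding and the concentration correlation can be reproduced in a few lines. The sketch below uses the final threshold (550) and the pixel counts reported above; the dummy band image stands in for the 1593 cm⁻¹ single-band image extracted from a loaded cube.

```python
import numpy as np

def binary_detection(band_image, threshold=550):
    """Pixels above the intensity threshold are contaminant (True);
    the rest are background curry powder (False)."""
    return band_image > threshold

band = np.random.rand(100, 100) * 600          # stand-in single-band image
mask = binary_detection(band)                  # binary detection image
print(mask.sum(), "detected pixels")

concentrations = np.array([1, 3, 5, 7, 10])            # % (w/w)
detected_pixels = np.array([63, 153, 236, 344, 515])   # 1593 cm-1 images
detected_pct = 100.0 * detected_pixels / 10_000        # of 10,000 pixels

# Pearson correlation between concentration and detected-pixel percentage
r = np.corrcoef(concentrations, detected_pct)[0, 1]
print(f"r = {r:.2f}")   # ~0.99, matching the reported correlation
```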
Detection of Metanil Yellow in Curry Powder

Figure 7a shows the hyperspectral Raman images of the curry powder-metanil yellow mixture samples at 1147 cm⁻¹ and 1437 cm⁻¹ across the five concentration levels. The 1147 cm⁻¹ and 1437 cm⁻¹ bands are the highest-intensity metanil yellow peaks, and the peak intensity of metanil yellow was higher than that of curry powder at both wavenumbers. An intensity threshold of 525, selected by pixel-by-pixel analysis of all spectral pixels, was used to obtain the binary detection images: all pixels with intensities below 525 were converted to background pixels, and the remaining pixels represent metanil yellow. Figure 7b shows the 1147 cm⁻¹ and 1437 cm⁻¹ binary detection images of the samples at the five concentrations. The white pixels are scattered throughout the sample surface. The number of white pixels is low in the 1% metanil yellow image and progressively increases in the 3%, 5%, 7%, and 10% images; this gradual increase indicates that more metanil yellow particles were detected at increasing concentrations. A total of 9, 21, 30, 43, and 57 metanil yellow pixels were detected in the 1147 cm⁻¹ binary detection images of the 1%, 3%, 5%, 7%, and 10% concentration samples, respectively; similarly, 9, 22, 28, 43, and 55 metanil yellow pixels were detected in the 1437 cm⁻¹ binary detection images. The metanil yellow pixels have similar spatial distributions in both sets of binary images. Figure 8 shows the Raman spectra of the detected metanil yellow pixels for the 1%, 3%, 5%, 7%, and 10% concentration samples; the 10% sample contributes 57 spectra, while the 1% sample contributes 9. The progressive increase in detected metanil yellow pixels (Figure 7b) indicates that the percentage of detected pixels and the sample concentration are correlated; the percentage of detected metanil yellow pixels is linearly correlated with the metanil yellow concentration, with a correlation coefficient of 0.99 (Figure 9).

The number of detected metanil yellow pixels (Figure 7b) is lower than that of Sudan-I (Figure 4b) at the same concentration. One reason for this discrepancy may be the bulk densities of the two chemicals: Sudan-I has a bulk density of 0.18 g/cm³, whereas metanil yellow has a bulk density of 0.33 g/cm³. Because metanil yellow is denser, the same mass of dye occupies a smaller volume, so there were fewer metanil yellow particles than Sudan-I particles in the 25 mm × 25 mm × 1 mm sample volume.
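A quick check of the bulk-density argument using the reported numbers is shown below. Particle size, which is not reported, would also influence the particle counts, so this volume ratio is only a partial account of the difference, consistent with the text's "one of the reasons" framing.

```python
# 0.27 g of mixture at 5% (w/w) dye, with the reported bulk densities.
dye_mass = 0.27 * 0.05            # g of dye in the sample
v_sudan = dye_mass / 0.18         # cm^3 occupied by Sudan-I
v_metanil = dye_mass / 0.33       # cm^3 occupied by metanil yellow
print(round(v_sudan / v_metanil, 2))  # ~1.83x more Sudan-I volume, hence
                                      # more Sudan-I particles of similar size
```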
Simultaneous Detection of Sudan-I and Metanil Yellow in Curry Powder

Sudan-I and metanil yellow (5% each, w/w) were mixed together with curry powder to investigate the methodology for identifying multiple chemical contaminants in a food powder. The Raman spectra of the mixture sample consisted of mixed spectra of each component, so to identify Sudan-I and metanil yellow in the mixture, the Raman spectra of the individual components had to be identified. One approach is to decompose the mixed spectral matrix to extract the spectra of the individual components and then identify them. Self-modeling mixture analysis (SMA) was used to extract pure component spectra and score vectors. The number of pure components must be pre-defined for the SMA computation; when the number of components in a mixture sample is unknown, it is overestimated, and the SMA result is inspected visually to determine the actual number of pure components. Although three known components (Sudan-I, metanil yellow, and curry powder) were mixed together, overestimating the number of components (six) produced favorable SMA results. The mixed spectra of the mixture sample were decomposed into six component spectra and six corresponding score vectors, of which two were identified as the pure component spectra of Sudan-I and metanil yellow. Due to the low spectral intensity of curry powder, its pure component spectrum was not resolved. The remaining four spectra and score vectors were residual noise and were disregarded from further analysis.

The pure component spectra of Sudan-I and metanil yellow obtained by SMA are shown in Figure 10a. After SMA, the component corresponding to each extracted pure component spectrum and its contributions was identified using the spectral information divergence (SID) method. Each pure component spectrum extracted by SMA was compared against the reference spectra of Sudan-I and metanil yellow in the spectral library to obtain SID values, and the pure component spectrum with the smallest SID value was assigned the identity of that component. The pure component spectra identified as Sudan-I and metanil yellow in Figure 10a match their corresponding reference spectra in Figure 2 well.
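A minimal sketch of the SID matching and the score-to-image conversion described above follows. The resolved matrices and the reference library are random stand-ins: real reference spectra would be measured from the pure dyes, and the SMA outputs would come from a purity-based decomposition such as the sketch shown earlier.

```python
import numpy as np

def sid(x, y, eps=1e-12):
    """Spectral information divergence: symmetric relative entropy between
    two spectra normalized to probability distributions. Identical spectra
    give 0; larger values mean greater dissimilarity."""
    p = np.clip(np.asarray(x, dtype=float), eps, None)
    q = np.clip(np.asarray(y, dtype=float), eps, None)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Stand-ins for the six SMA-resolved components and the reference library.
A = np.random.rand(6, 512)          # 6 resolved component spectra
S = np.random.rand(10_000, 6)       # 6 score vectors, one per component
library = {"Sudan-I": np.random.rand(512),
           "metanil yellow": np.random.rand(512)}

# Assign each resolved spectrum to the reference with the lowest SID.
for k in range(A.shape[0]):
    name, value = min(((n, sid(A[k], r)) for n, r in library.items()),
                      key=lambda t: t[1])
    print(f"component {k}: closest to {name} (SID = {value:.4f})")

# Fold one score vector back into a 100 x 100 contribution image.
contribution_image = S[:, 0].reshape(100, 100)
```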
Almost all spectral peaks of Sudan-I and metanil yellow are resolved in the pure component spectra, demonstrating that the SMA and SID methods can be used to decompose the mixture spectra and identify the components in the mixture sample. The contribution images were used to create a Raman chemical image of the corresponding component. The pixel-by-pixel analysis method was used to select the pixel intensity threshold for each Raman chemical image: 2150 was selected for the Sudan-I image and 3800 for metanil yellow. The thresholds were applied to obtain binary detection images of Sudan-I and metanil yellow: all pixels below the threshold were converted into background pixels, and the remaining pixels represented the component's pixels. The binary detection images of Sudan-I and metanil yellow were compared with the single-band images at 1593 cm⁻¹ and 1437 cm⁻¹, respectively, to ensure that no pixels were misclassified. The two binary detection images were then combined, and the detected pixels were color coded: white for Sudan-I and red for metanil yellow, as shown in Figure 11; the black background represents the curry powder. A total of 240 Sudan-I pixels and 32 metanil yellow pixels were detected, similar to the numbers detected when each component was measured separately at 5% concentration (Sudan-I: 236, metanil yellow: 30; Figures 4b and 7b). The results show that self-modeling mixture analysis coupled with SID is a potential method for identifying multiple chemical contaminants in curry powder.

Conclusions

This study used a newly developed 1064 nm dispersive point-scan Raman system to detect azo color dye contamination in curry powder. Sudan-I and metanil yellow color dyes mixed separately with curry powder at 1%, 3%, 5%, 7%, and 10% concentration (w/w) were detected. One Raman spectral image of the mixture sample at each concentration was obtained, covering a sample surface area of 25 mm × 25 mm with a step size of 0.25 mm along the X and Y directions. The Raman spectral images of the curry powder-Sudan I and curry powder-metanil yellow mixture samples were converted into binary detection images to detect Sudan-I and metanil yellow pixels. The number of detected contaminant pixels correlated linearly with the actual sample concentration (R² = 0.99). The Raman system was further used for simultaneous detection of Sudan-I and metanil yellow mixed together with curry powder (each contaminant at 5% concentration, w/w). Self-modeling mixture analysis (SMA) was used to decompose the mixed spectral information of the sample mixture and extract the pure component spectra of Sudan-I and metanil yellow. The spectral features of the extracted pure component spectra matched well with the reference spectra of the two dyes, demonstrating simultaneous identification of multiple chemical contaminants in curry powder.

Figure captions:
Figure 1. Acquisition of Raman spectral image by point-scan method.
Figure 5. Raman spectra of detected Sudan-I pixels at five concentrations.
Figure 6. Relationship between Sudan-I concentration in samples and percentage of detected pixels in 1593 cm⁻¹ binary images.
Figure 8. Raman spectra of detected metanil yellow pixels at five concentrations.
Figure 9. Relationship between actual metanil yellow concentration in samples and the percentage of detected pixels in 1147 cm⁻¹ binary images.
Figure 10. Results of self-modeling mixture analysis of components. (a) Resolved spectra of Sudan-I and metanil yellow; (b) corresponding contribution images.
Figure 11. Color-coded chemical image of Sudan-I and metanil yellow generated by contribution images. White pixels represent Sudan-I and red pixels represent metanil yellow.
Activation of neurons in the insular cortex and lateral hypothalamus during the food anticipatory period caused by food restriction in mice

Mice fed a single meal daily at a fixed time display food anticipatory activity (FAA). It has been reported that the insular cortex (IC) plays an essential role in food anticipation and that the lateral hypothalamus (LH) regulates the expression of FAA. However, how these areas contribute to FAA production is still unclear. We therefore examined the temporal and spatial activation patterns of neurons in the IC and LH during the food anticipatory period to determine their role in FAA establishment. We observed an increase of c-Fos-positive neurons in the IC and LH, including orexin neurons, of adult male C57BL/6 mice. These neurons were gradually activated from the 1st to the 15th day of restricted feeding. The activation of these brain regions, however, peaked at distinct points in the food restriction procedure. These results suggest that the IC and LH are differently involved in the neural network for FAA production.

Background

Eating is a motivated behavior in which an animal seeks, obtains, and consumes food, driven by instinctive craving [1]. The neural regulation of feeding behavior is a complex process. First, the nervous system senses the energy state of the body and the appearance, smell, taste, and nutrients of food, and uses this information as food anticipatory cues [2-4]. This sensory information is transmitted and integrated by a highly complex feeding-regulation neural network, which sends instructions to regulate feeding behavior [5-7]. Food anticipatory cues drive not only feeding behavior but also the body's expectancy of food, and this expectation in turn motivates the body to seek out and consume food [8-10]. In humans, foraging behavior driven by food anticipation is manifested as an increase in food intake [11], which may lead to obesity, diabetes, and other metabolic diseases. Therefore, it is essential to understand the neuronal mechanism of food anticipation.
It has been reported that mice fed a single daily meal at intervals within the circadian range show increased locomotor activity in the period preceding feeding, considered food anticipatory activity (FAA) [12]. However, the mechanism of FAA production is still unclear. The insular cortex (IC) is a higher-order sensory cortex that integrates multiple modalities, such as taste and visceral information [13-15]. The IC has also been suggested to play an essential role in food anticipation and in the control of taste-guided, reward-directed choices and actions [16-18]. A previous study showed that lesioning the bilateral anterior agranular IC of male Wistar rats by electrolysis or ibotenic acid significantly increased FAA, suggesting that the anterior agranular IC contributes to the network of brain regions involved in FAA [19]. Yet it remains unclear whether neurons in the IC are activated during FAA and play roles in FAA production. Several subregions of the IC (the anterior (AI), middle (MI), and posterior (PI) regions) are known to differ in their connections with other brain regions and in their functions [20-25]. Human brain imaging research shows that the AI exhibits high responsiveness to both anticipated and actual food intake, indicating a significant association between the AI and the processes involved in both the anticipation and the actual intake of food [16]. Sensory perception of food-related stimuli (visual, olfactory, and taste) leads to increased activation of the AI and the dorsal parts of the MI in human subjects [18]. Therefore, to clarify whether the AI, MI, and PI also play different roles in FAA production, we examined the activation patterns of neurons in the three subregions of the IC during the food anticipatory period.

The lateral hypothalamus (LH) is related to the control of primitive motivational behaviors, including feeding and energy homeostasis [26-30]. Previous research has shown that c-Fos expression in the LH increases during FAA in mice [12]. Orexin neurons in the LH enhance appetite and food consumption, play an important role in the regulation of feeding behavior [29,31], increase spatial memory for food [32], and are required for the robust expression of FAA in mice during the meal anticipation period [33]. Although it has been shown that neurons in the LH, including orexin neurons, participate in FAA [12,33], the temporal patterns of activation of these neurons during the development of FAA, and the interrelationship of the temporal patterns between the IC and LH, remain to be confirmed.

To corroborate the establishment of FAA, the current study examined the effects of daily scheduled 4-h food restriction on c-Fos expression in the bilateral IC and LH during the food anticipatory period. Furthermore, the development of c-Fos expression was observed on days 1, 8, and 15 of restricted feeding to investigate how IC and LH neurons are activated during the food restriction protocol, together with possible time-related changes that may lead to the identification of the brain network responsible for the formation of FAA.
Animals

Male C57BL/6 mice (n = 60) weighed 20-30 g and were 8-12 weeks old at the beginning of the experiment. The animals were singly housed in laboratory mouse cages (17.2 cm × 10 cm × 11 cm) in the experiment room for seven days before the start of food restriction to adapt to the experimental environment, under standard laboratory conditions with a 12:12 h light-dark cycle (12:12 LD; lights on at 7:00, defined as Zeitgeber Time 0 (ZT0), and off at 19:00, ZT12), a constant room temperature (23 °C), and constant humidity (51%). All mice received standard murine chow (CE-2, CLEA Japan, Inc.) and water ad libitum.

Feeding schedules and behavioral recording

Mice were individually housed in the cages described above, with bedding on the bottom and an infrared motion sensor (AMN 1111, Panasonic Co., Osaka, Japan) on top of the lid. Mice were assigned to two groups with their body weights counterbalanced: the restricted feeding (RF) group received access to food for 4 h per day (from ZT4 to ZT8), while the ad libitum feeding (AL) group received free access to food throughout the study. Mice in the RF and AL groups were further divided into six groups (RF 1 day, RF 8 days, and RF 15 days, with their corresponding control AL groups). During the first 7 days of the experiment, all animals were fed ad libitum to record baseline data. After this week, mice in the RF groups were restricted to food access for 4 h (ZT4-ZT8) for 1 day, 8 days, or 15 days, respectively, while mice in the AL groups continued to receive free access to food (for 1 day, 8 days, or 15 days, respectively) (Fig. 1A). Each day at ZT4, freshly weighed chow was given to both the AL and RF groups; at ZT8, freshly weighed chow was prepared and provided for the AL groups but not for the RF groups. Body weight and food intake were measured at ZT8. All mice in the RF 1 day, 8 days, and 15 days groups and their control AL groups were sacrificed at ZT4 on the last day of RF (note: with no food delivery to the RF groups) (Fig. 1A).

The behavioral activity of the mice was collected continually in 6-min bins throughout the study by the infrared motion sensor connected to a computer and expressed as the total counts per 2-h or 24-h period.
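As an illustration of how the sensor counts map onto the analysis windows, the following sketch sums the 6-min bins into Zeitgeber-time windows such as ZT2-ZT4; the array layout (240 bins per day, starting at ZT0) is an assumption about the recording software's output.

```python
import numpy as np

BINS_PER_HOUR = 10                      # 60 min / 6-min bins

def window_counts(day_counts, zt_start, zt_end):
    """Sum activity counts between two Zeitgeber times for one day.

    day_counts: 240 six-minute bins covering ZT0-ZT24.
    """
    return day_counts[zt_start * BINS_PER_HOUR : zt_end * BINS_PER_HOUR].sum()

day = np.random.poisson(5, size=240)    # dummy one-day recording
faa = window_counts(day, 2, 4)          # ZT2-ZT4: pre-feeding FAA window
total = window_counts(day, 0, 24)       # 24-h total activity
print(faa, total)
```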
Immunohistochemistry

Mice were deeply anesthetized with urethane (1.8 g/kg, intraperitoneal injection) and perfused transcardially with 25 ml of phosphate buffer (PB, 0.1 M, pH 7.4) followed by 25 ml of 4% paraformaldehyde (PFA) solution in PB. The whole head was detached and fixed in 4% PFA solution at 4 °C overnight; the brain was then removed and post-fixed in 4% PFA solution at 4 °C for 1 day. The brain was cut into serial 40 μm sections using a vibratome (SuperMicroSlicer Zero1; DOSAKA EM, Kyoto, Japan), and every 4th section was used for immunostaining. For double labeling of c-Fos and orexin, tissues were immersed in a blocking solution (1% normal horse serum and 0.3% Triton-X in 0.01 M PBS) for 30 min at room temperature. Primary antibodies were dissolved in blocking solution and applied as follows: guinea pig anti-c-Fos antibody (226004, Synaptic Systems, RRID AB_2619946) at 1/1000 at room temperature for 1 h; goat anti-orexin antibody (SC-8070, Santa Cruz Biotechnology, RRID AB_653610) at 1/200 at room temperature for 1 h. The sections were washed with PBS 3 times, 10 min each, and then incubated with secondary antibodies diluted in blocking solution as follows: anti-guinea pig IgG-biotin (706-065-148, Jackson ImmunoResearch Laboratories, Inc., RRID AB_2340451) at 1/250 at room temperature for 1.5 h; anti-goat IgG-CF568 (20106, Biotium, RRID AB_10559672) at 1/200 at room temperature for 1.5 h. The sections were washed with PBS 3 times, 10 min each. Streptavidin Alexa488 (S11223, Invitrogen) at 1/200 in PBS was used to visualize c-Fos at room temperature for 1.5 h. The sections were then washed with PBS 3 times, 10 min each.

[Fig. 1 caption] Experimental schedule and locomotor activity of mice in response to 15 days of restricted feeding. A Experimental schedule. After one week of baseline recording, mice in the ad libitum (AL) groups received free access to food throughout the study; mice in the restricted feeding (RF) groups (1 day, 8 days, 15 days) were allowed access to food between ZT4-ZT8 (for 1 day, 8 days, and 15 days, respectively). B Time course of food anticipatory activity (FAA) in response to 15 days of restricted feeding. C Mean locomotor activity of mice from day 4 to day 14 of restricted feeding; the mean (± SEM) locomotor activity for ten subjects is shown on the y-axis, and the x-axis represents Zeitgeber time. D Time course of the 24-h locomotor activity in response to 15 days of food restriction; a similar trend, except for the first day of food restriction, was observed in the AL and RF groups. The mean (± SEM) locomotor activity during ZT2-ZT4 (B) or ZT0-ZT24 (D) for ten subjects is shown on the y-axis; the x-axis represents experimental days (B and D). *P < 0.05; **P < 0.01; ***P < 0.001; ****P < 0.0001, difference between AL and RF mice according to Bonferroni's multiple comparison test.
Area setting of histological analysis

To quantify c-Fos-positive neurons in the IC, we counted neurons by positioning a counting frame bilaterally over each region of interest. For the AI, a 500 × 500 µm square box was placed on sections 2.58-1.37 mm rostral to bregma; for the MI, a 500 × 500 µm box on sections 1.37-0.16 mm rostral to bregma; and for the PI, a 500 × 500 µm box on sections from 0.16 mm rostral to 1.06 mm caudal to bregma, based on the brain atlas [34]. To quantify c-Fos-positive neurons and orexin neurons in the LH, a 1200 × 400 µm (width × height) rectangular box was placed on sections from 0.94 mm to 2.18 mm caudal to bregma. For each brain area of interest, three slices on each of the left and right sides were selected. The number of positive neurons was counted separately for each side, and the average per side was used for statistics. Cell counting was done with Adobe Photoshop CC software (Adobe Systems Inc., San Jose, CA, USA) and expressed as the number of cells per unit area of 0.25 mm² in the AI, MI, and PI and per unit area of 0.48 mm² in the LH.

Statistical analysis

Data are presented as mean ± SEM. GraphPad Prism software (version 7, La Jolla, California, USA) was used for statistical analysis. Data on locomotor activity, food intake, and body weight were analyzed using repeated measures two-way ANOVA followed by Bonferroni's multiple comparison test. Histology data were analyzed using an unpaired t-test with Welch's correction, one-way ANOVA followed by Bonferroni's multiple comparison test, or Pearson's correlation coefficient. Results were considered significant at P < 0.05.

Results

In rodents, a standard daily restricted feeding paradigm, alternating between 2-5 h of feeding and 19-22 h of fasting daily for 7 days to 3 weeks [12,19,33,35-37], is often used to study the mechanism of FAA production. It has been reported that rodents express FAA within 3-14 days under a scheduled restricted feeding paradigm [38], and some brain structures begin to show significantly increased c-Fos expression on the 8th day of palatable food entrainment [39]. Therefore, we selected day 15 (14 days of RF plus 20 h of fasting) and day 8 (7 days of RF plus 20 h of fasting) of RF for the locomotor activity and c-Fos histochemical studies. In addition, to determine whether 20 h of fasting alone can produce FAA and affect c-Fos expression in IC and LH neurons, we also observed the effect of 1 day of RF (20 h of fasting) on locomotor activity and on c-Fos expression in IC and LH neurons as a control for day 8 and day 15 of RF.

The development of food anticipatory activity

Previous studies have reported that FAA begins 2-3 h before food delivery [12]. To confirm whether a similar increase in pre-mealtime locomotor activity could be observed in our experimental setting, we investigated the effects of daily scheduled restricted feeding on the locomotor activity of mice.
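As a concrete illustration of the statistical tests listed above, the following is a minimal sketch using SciPy in place of GraphPad Prism; the group arrays are dummy placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Placeholder per-mouse c-Fos counts (cells per unit area)
al = np.array([2.1, 1.8, 3.0, 2.5, 2.2])   # ad libitum group
rf = np.array([6.5, 7.9, 5.8, 7.2, 6.9])   # restricted feeding group

# Unpaired t-test with Welch's correction (unequal variances)
t, p = stats.ttest_ind(rf, al, equal_var=False)

# Pearson's correlation coefficient, e.g. IC c-Fos counts vs. the
# proportion of activated orexin neurons across animals
ic_counts = np.array([3, 5, 8, 10, 12])
orexin_props = np.array([0.05, 0.12, 0.25, 0.33, 0.41])
r, p_corr = stats.pearsonr(ic_counts, orexin_props)

print(f"Welch t = {t:.2f} (p = {p:.4f}); Pearson r = {r:.2f}")
```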
Figure 1B shows the development of FAA in mice over the 15-day RF schedule. The transition in locomotor activity of the RF mice during ZT2-ZT4 (2 h before feeding) differed significantly from that of the AL mice (feeding effect: F(1, 18) = 17.27, P = 0.0006; day effect: F(20, 360) = 6.284, P < 0.0001; interaction: F(20, 360) = 4.124, P < 0.0001; repeated measures two-way ANOVA). The RF mice showed significantly higher locomotor activity after three days of RF (P < 0.0001, Bonferroni's test), which then remained higher than in the AL mice for the rest of the RF schedule. These results suggest that the appearance of FAA is gradual and requires repeated, predictable stimulation by scheduled feeding restriction (at least three times). However, the strength of FAA fluctuated rather than remaining constant during the daily scheduled restricted feeding (Fig. 1B), similar to previous reports [12,33]. Figure 1C shows the mean locomotor activity of mice from day 4 to day 14 of restricted feeding, i.e., after a significant enhancement of FAA was observed in the RF group. There was a significant interaction of feeding and time between the AL and RF groups (feeding effect: F(1, 18) = 0.2882, P = 0.5979; time effect: F(11, 198) = 18.09, P < 0.0001; interaction: F(11, 198) = 11.98, P < 0.0001; repeated measures two-way ANOVA, Fig. 1C). The locomotor activity of the RF mice was significantly higher during ZT2-ZT4 (P < 0.0001, Bonferroni's test).

The changes in food intake and body weight during daily scheduled RF

We speculated that the RF mice might reduce their daily food intake due to the time-limited access to food, and therefore examined the effect of daily scheduled RF on overall daily food intake (Fig. 2A). Food intake differed significantly between the AL and RF groups (feeding effect: F(1, 18) = 16.26, P = 0.0008; day effect: F(20, 360) = 14.91, P < 0.0001; interaction: F(20, 360) = 18.80, P < 0.0001; repeated measures two-way ANOVA). Compared with the AL mice, the food intake of the RF mice decreased significantly on the first day of food restriction (P < 0.0001, Bonferroni's test), but recovered gradually over the following days and reached a level similar to that of the AL mice after five days of food restriction (P > 0.05). Thus, limited food access reduced food intake only in the early phase of the protocol. There was also a concern that the RF mice would lose body weight because of the food restriction. Compared with the AL group, the change in body weight of the RF group showed a significant interaction of feeding and day (feeding effect: F(1, 18) = 1.285, P = 0.2719; day effect: F(20, 360) = 44.20, P < 0.0001; interaction: F(20, 360) = 16.82, P < 0.0001; repeated measures two-way ANOVA, Fig. 2B). On the first day of feeding restriction, the body weight of the RF mice decreased significantly, by 8.6% (P < 0.0001, Bonferroni's test); it then recovered and increased gradually over the following days (P > 0.05 on the 4th day of feeding restriction) and remained at a high level (increased by 5.8% to 10.3%) after six days of feeding restriction (P < 0.001). According to these findings, RF between ZT4 and ZT8 caused a drop in the mice's daily body weight only during the early stages of food restriction but an increase during the later stages. The maximum body weight loss of the RF mice in our study was 8.6% of their original body weight, within the range (less than 20% loss) required by experimental animal ethics [40,41].
The activation of neurons in the IC during the food anticipatory period

To investigate the neuronal involvement of the IC in FAA production, we examined whether IC neurons were activated during FAA. All mice were sacrificed at ZT4 (during FAA) on the 15th day of RF (note: no food was delivered to the RF group), and the three subregions of the IC were examined separately. Compared with the AL mice, the number of c-Fos-positive neurons in the 15-day RF mice increased significantly in the AI (AL: 2.3 ± 0.5/0.25 mm², RF: 6.9 ± 1.5/0.25 mm², n = 10, t = 2.959, P = 0.0143), MI (AL: 3.5 ± 0.8/0.25 mm², RF: 12.1 ± 1.8/0.25 mm², n = 10, t = 4.351, P = 0.0009), and PI (AL: 3.3 ± 0.5/0.25 mm², RF: 7.5 ± 1.6/0.25 mm², n = 10, t = 2.495, P = 0.0317; unpaired t-test with Welch's correction) (Fig. 3A-C). There was no significant difference in c-Fos expression between the bilateral IC neurons in either the RF or the AL mice (this paper shows only the data for the left IC as representative; data for the right IC are not shown unless specifically mentioned). The results indicate that the bilateral AI, MI, and PI neurons are activated during ZT2-ZT4 and may contribute to the network of brain regions involved in FAA [12].

The activation of neurons in the LH, including orexin neurons, during the food anticipatory period

To confirm whether neurons in the LH, including orexin neurons, were also activated during FAA in our experimental setting, we examined the effect of daily scheduled RF on the number of c-Fos-expressing neurons in the LH during FAA. Compared with the AL mice, the number of c-Fos-positive neurons in the LH of the 15-day RF mice increased significantly (AL: 14.1 ± 3.3/0.48 mm², RF: 58.9 ± 6.7/0.48 mm², n = 10, t = 6.046, P < 0.0001; unpaired t-test with Welch's correction, Fig. 4A). The same tendency was observed in orexin neurons (proportion of c-Fos-positive orexin neurons among counted orexin neurons; AL: 0.040 ± 0.012, RF: 0.416 ± 0.043, n = 10, t = 8.435, P < 0.0001, Fig. 4B). There was no significant difference in c-Fos expression between the bilateral LH neurons or orexin neurons. The results indicate that neurons in the bilateral LH, including orexin neurons, are activated during ZT2-ZT4 and may contribute to the network of brain regions involved in FAA.

How does activation of neurons in the bilateral IC develop during the food restriction protocol?

As shown in Fig. 1B, the appearance of FAA is gradual. We therefore examined whether the activation of IC neurons in the RF mice during the food anticipatory period was also gradual, comparing groups on the 1st and 8th days of the feeding restriction protocol (Fig. 5). Compared with the AL mice, the number of c-Fos-expressing neurons in the 1-day RF mice was significantly larger in the MI (AL: 0.4 ± 0.1/0.25 mm², RF: 3.5 ± 1.2/0.25 mm², n = 10, t = 2.578, P = 0.0292; unpaired t-test with Welch's correction) and PI (AL: 0.6 ± 0.2/0.25 mm², RF: 3.7 ± 1.0/0.25 mm², n = 10, t = 3.111, P = 0.0116). There was no significant difference in c-Fos expression between the bilateral MI and PI neurons of the 1-day RF mice. The number of c-Fos-expressing neurons in the left AI of the 1-day RF mice tended to be higher than in the AL mice, but without statistical significance (AL: 0.3 ± 0.1/0.25 mm², RF: 2.0 ± 0.8/0.25 mm², n = 10, t = 2.030, P = 0.0729) (Fig. 5A-C, left), whereas the number in the right AI increased significantly (AL: 0.3 ± 0.1/0.25 mm², RF: 1.2 ± 0.3/0.25 mm², n = 10, t = 3.101, P = 0.0101). The results indicate that neurons in the right AI and the bilateral MI and PI of the 1-day RF mice (fasted for 20 h) are activated during the food anticipatory period.

In the 8-day RF mice, the number of c-Fos-positive neurons was significantly larger in the AI (AL: 0.6 ± 0.1/0.25 mm², RF: 3.2 ± 0.9/0.25 mm², n = 10, t = 2.813, P = 0.0195; unpaired t-test with Welch's correction), MI (AL: 0.9 ± 0.2/0.25 mm², RF: 6.2 ± 1.5/0.25 mm², n = 10, t = 3.453, P = 0.0069), and PI (AL: 1.3 ± 0.3/0.25 mm², RF: 5.0 ± 1.1/0.25 mm², n = 10, t = 3.158, P = 0.0100) (Fig. 5A-C, right). There was no significant difference in c-Fos expression between the bilateral IC neurons of the 8-day RF mice. The results indicate that neurons in the bilateral AI, MI, and PI are also activated during FAA on the 8th day of food restriction. We then compared the number of c-Fos-positive neurons across the three time points to examine the development of the c-Fos expression pattern. Among the 1-day, 8-day, and 15-day RF mice, the number of c-Fos-positive neurons in the AI, MI, and PI during the food anticipatory period was smallest in the 1-day RF mice, followed by the 8-day RF mice, and largest in the 15-day RF mice (Fig. 6A-C), indicating that the activation of AI, MI, and PI neurons in the 1-day RF mice was at the beginning of its development. Compared with the 15-day RF mice, the number of c-Fos-positive neurons in the AI of the 8-day RF mice during FAA was slightly smaller, with no significant difference, while that of the 1-day RF mice was significantly smaller (P = 0.0108, one-way ANOVA; P = 0.0119, Bonferroni's test, Fig. 6A), indicating that the activation of AI neurons in the 8-day RF mice was still in the middle of its development. The number of c-Fos-positive neurons in the MI during FAA was significantly higher in the 15-day RF mice than in the 8-day and 1-day RF mice (P = 0.0016, one-way ANOVA; P = 0.0352 between 15-day and 8-day RF mice, P = 0.0014 between 15-day and 1-day RF mice; Bonferroni's test, Fig. 6B), indicating that the activation of MI neurons in the 8-day RF mice was likewise still developing. The number of c-Fos-positive neurons in the PI of the 8-day and 1-day RF mice during FAA was slightly lower than in the 15-day RF mice, but without significant difference (P = 0.1119, one-way ANOVA, Fig. 6C), indicating that the activation of PI neurons in the RF mice during the food anticipatory period did not increase significantly further over our food restriction schedule.

How does activation of neurons, including orexin neurons, in the bilateral LH develop during the food restriction protocol?
To investigate possible changes in the activation of LH neurons during the food anticipatory period, we compared groups on the 1st and 8th days of the feeding restriction protocol. On day 1, the number of c-Fos-positive neurons in the LH of the RF mice was significantly larger than in the AL mice (AL: 4.3 ± 1.3/0.48 mm², RF: 22.8 ± 5.3/0.48 mm², n = 10, t = 3.376, P = 0.0071; unpaired t-test with Welch's correction), and the proportion of c-Fos-positive orexin neurons was also larger (AL: 0.016 ± 0.005, RF: 0.113 ± 0.030, n = 10, t = 3.147, P = 0.0109) (Fig. 7A, B, left). There was no significant difference in c-Fos expression between the bilateral LH neurons and orexin neurons of the 1-day RF mice. The results indicate that neurons in the bilateral LH, including orexin neurons, of the 1-day RF mice are activated during the food anticipatory period.

On day 8, the number of c-Fos-positive neurons (AL: 12.3 ± 3.1/0.48 mm², RF: 57.4 ± 9.9/0.48 mm², n = 10, t = 4.362, P = 0.0014) and the proportion of c-Fos-positive orexin neurons (AL: 0.033 ± 0.013, RF: 0.332 ± 0.059, n = 10, t = 4.948, P = 0.0006; unpaired t-test with Welch's correction) in the RF mice were significantly larger than in the AL mice (Fig. 7A, B, right). There was no significant difference in c-Fos expression between the bilateral LH neurons and orexin neurons of the 8-day RF mice. The results indicate that neurons in the bilateral LH, including orexin neurons, of the 8-day RF mice are activated during FAA. Compared with the 15-day RF mice, the 8-day RF mice showed slightly lower c-Fos expression, but the difference did not reach significance (P > 0.05, Bonferroni's test, Fig. 8), indicating that the activation on day 8 had almost reached a plateau. Among the 1-day, 8-day, and 15-day RF mice, the number of c-Fos-positive neurons in the LH was smallest in the 1-day RF mice, with significant differences from the 8-day and 15-day RF mice (P < 0.01 and P < 0.001, respectively; one-way ANOVA with Bonferroni's test, Fig. 8). The results indicate that the activation of LH neurons, including orexin neurons, started from day 1 of RF and reached a plateau around day 8 of RF.

The relationship between the number of c-Fos-positive IC neurons and orexin neurons in the production of FAA

To investigate the interaction between IC neurons and LH orexin neurons in FAA, we compared their activation during the food anticipatory period using the day-8 and day-15 data, when FAA was established. The RF mice showed a significant positive correlation between the number of activated neurons in the AI, MI, PI, or total IC and the number of activated orexin neurons in the LH (AI: r = 0.7088, P = 0.0005; MI: r = 0.7117, P = 0.0004; PI: r = 0.7348, P = 0.0002; total IC: r = 0.8582, P < 0.0001; Pearson's correlation coefficient), whereas the AL mice did not (AI: r = 0.3322, P = 0.1525; MI: r = 0.4398, P = 0.0523; PI: r = 0.3606, P = 0.1183; total IC: r = 0.4251, P = 0.0617) (Fig. 9). These significant positive correlations were similar in both hemispheres of the RF mice. The results indicate an interaction between the activation of neurons in the IC (including the AI, MI, and PI) and the activation of orexin neurons in the RF mice during FAA.
Trends in locomotor activity, food intake, and body weight of mice in the short versions of the food restriction protocol

To examine whether the trends were similar to those of the 15-day RF mice, we investigated the effects of daily scheduled RF for 8 days or 1 day on the locomotor activity, food intake, and body weight of mice. Compared with the AL mice, the locomotor activity during ZT2-ZT4 of the 8-day RF mice increased significantly (feeding effect: F(1, 18) = 8.066, P = 0.0109; day effect: F(13, 234) = 7.616, P < 0.0001; interaction: F(13, 234) = 5.537, P < 0.0001; repeated measures two-way ANOVA, Fig. 10A). It increased significantly after 2 days of RF (P = 0.0187, Bonferroni's test) and then remained at a higher level than that of the AL mice during RF (P < 0.0001), similar to the results observed in the 15-day RF mice, although the significant increase in the locomotor activity of the 15-day RF mice began after 3 days of RF. In contrast, the locomotor activity during ZT2-ZT4 of the 1-day RF mice showed no significant difference (feeding effect: F(1, 18) = 0.04694, P = 0.8309; day effect: F(7, 116) = 3.935, P = 0.0007; interaction: F(7, 116) = 0.5740, P = 0.7758; repeated measures two-way ANOVA; P > 0.05, Bonferroni's test, Fig. 10B). The result suggests that the 1-day feeding restriction program in this study, i.e., fasting for 20 h, did not affect locomotor activity during ZT2-ZT4 in mice and did not produce FAA, in contrast to the 15-day and 8-day RF mice. The results also demonstrate that the formation of FAA in the 15-day and 8-day RF mice was not due to the 20-h fast just before brain sampling but was specific to the food restriction protocol. The fluctuation of FAA was also observed in the 8-day RF mice. In addition, the 8 days of food restriction between ZT4 and ZT8 did not affect the 24-h total ambulatory activity of the mice (feeding effect: F(1, 18) = 0.0007720, P = 0.9781; day effect: F(13, 234) = 0.9675, P = 0.4842; repeated measures two-way ANOVA, Fig. 10C). Daily food intake decreased significantly at the beginning of the restriction (interaction: P < 0.0001, repeated measures two-way ANOVA; P < 0.0001, Bonferroni's test), then recovered gradually over the following days and reached a level similar to that of the AL mice after 5 days of feeding restriction (P > 0.05, Fig. 10D), similar to the 15-day RF mice.

The daily body weight of the 8-day RF mice decreased significantly, by 6.7%, over the first three days of feeding restriction (feeding effect: F(1, 18) = 4.140, P = 0.0569; day effect: F(13, 234) = 6.580, P < 0.0001; interaction: F(13, 234) = 2.365, P = 0.0055; repeated measures two-way ANOVA; P = 0.0083, Bonferroni's test), then recovered gradually over the following days (to within 0.5%) (P > 0.05 from day 4, Fig. 10E). Although the maximum daily body weight loss of the 8-day RF mice occurred on the third day of feeding restriction, and that of the 15-day RF mice on the first day, the trends in daily body weight during RF were similar in the two groups. RF between ZT4 and ZT8 caused a decrease in body weight only in the early stage of RF.

[Fig. 10 caption] Daily locomotor activity, food intake, and body weight of mice during 1 or 8 days of restricted feeding. A and B Increased food anticipatory activity (locomotor activity during ZT2-ZT4) in response to restricted feeding in the 8-day (A, n = 10) or 1-day (B, n = 10 during ad libitum feeding, n = 5 during food restriction) RF mice. C Daily total locomotor activity of the 8-day RF mice was similar to that of the AL mice. D Daily food intake fluctuation and (E) changes in daily body weight of the 8-day RF mice. The mean (± SEM) of daily food intake (D) or daily body weight (E) for ten subjects is shown on the y-axis; the x-axis represents experimental days. *P < 0.05 (A); ****P < 0.0001 (A and D); **P < 0.01 (E), difference between AL and RF mice according to repeated measures two-way ANOVA with Bonferroni's multiple comparison test. According to repeated measures two-way ANOVA, there was no significant difference in locomotor activity during ZT2-ZT4 (B) or ZT0-ZT24 (C) between AL and RF mice.
Discussion

To investigate the functional roles of the insular cortex and lateral hypothalamus in FAA production, we examined the neuronal activation of these areas during the food anticipatory period. This study demonstrates that the development of FAA is a gradual process and requires periodic, predictable feeding restriction stimulation. In addition, the increase of c-Fos-positive neurons in the IC and LH during the food anticipation period was gradual from the 1st to the 15th day of restricted feeding. There was a positive correlation between the activation of neurons in the IC and orexin neurons in the LH during the food anticipation period. However, the increase in c-Fos-positive neurons in these brain regions changed differently over the food restriction protocol. These results suggest that insular and lateral hypothalamic neurons, including orexin neurons, are active during FAA and that the IC and LH are differently involved in the neural network for FAA production.

We examined the formation and development of FAA in mice during the food restriction protocol. Previous studies have shown that food-related cues are powerful time signals for physiological and behavioral systems [12,38,39]. Scheduled daily access to food causes FAA, which is characterized by behavioral arousal and activation, increased locomotion, increased proximity to the feeder, and food seeking [38,39]. The current study showed that a single food restriction, i.e., fasting for 20 h, did not affect locomotor activity during the pre-feeding period in mice and did not result in FAA formation. However, periodic RF applied at least two to three times significantly increased locomotor activity during the pre-feeding period. This indicates that FAA is not a result of the 20-h fast but rather of the feeding restriction protocol used in this study. RF also did not affect movement capability but caused significant alterations in the daily locomotor activity pattern (an enhancement of locomotor activity during the light period), similar to a previous study [12].

In addition, this study shows, for the first time, that neurons in the three insular subregions are activated during FAA. The IC may therefore contribute to the network of brain regions involved in FAA. The mechanism of FAA production caused by daily RF at a fixed time is still unclear. Previous studies suggest that the food entrainment oscillator (FEO) is located outside the suprachiasmatic nucleus (SCN), because damage to the SCN does not eliminate FAA [42]. The FEO regulates behavioral, tissue, cellular, and molecular processes in response to food intake patterns [38]. The FEO may have a distributed organization (involving multiple brain areas, including the arcuate nucleus and LH, among others) rather than relying on a single nucleus [12,35,36], and damage to one brain area does not eliminate all manifestations of food entrainment [43]. It has been reported that food expectation cues significantly regulate the activity of the IC in mice, which is necessary for food cues to induce behavioral responses [44]. However, there are few reports on whether the IC participates in FAA [19]. A previous study showed that inactivation of the bilateral anterior agranular IC of male Wistar rats by electrolytic or ibotenic acid lesions significantly increased FAA [19]. The present study observed a significant increase in c-Fos expression in the bilateral AI neurons of mice during FAA, suggesting that AI neurons are activated during this period. Functionally distinct regions inside the AI might contribute to this disparity between the two studies [21,45]. The present study observed somewhat more c-Fos-positive neurons in layers III and V of the AI (data not shown). Manipulating the activity of neurons in these sub-areas with optogenetics or chemogenetics during FAA might help answer this question.

We also observed that the developmental course of c-Fos expression differed slightly among the three insular subregions. The number of c-Fos-positive neurons in the AI gradually increased from day 1 to day 15 of food restriction, with the 15-day RF mice having significantly more than the 1-day RF mice. The number of c-Fos-positive neurons in the MI also gradually increased from day 1 to day 15 of RF, with the increase after 15 days of RF being especially remarkable. However, c-Fos-positive neurons in the PI showed only a tendency to increase gradually from day 1 to day 15 of RF. Since the 1-day RF mice had significantly more c-Fos-positive neurons in the PI than the AL mice, the activation of these neurons during the food anticipatory period may reach its peak in the early phase of food restriction. These data suggest that MI neurons may be more sensitive to the repeated RF protocol than AI and PI neurons. It is well known that there are functional differences among these three areas [20,22-25]. However, the exact mechanisms that induce the different changes in the three subregions of the IC need further study.
Although 1-day food restriction caused an increase in c-Fos expression in the bilateral AI neurons in the present study, the increase was only significant on the right side, suggesting that right AI neurons may be more sensitive to the hunger signal. There have been some reports of asymmetric activation patterns in the IC [22,[46][47][48]. For example, human functional magnetic resonance imaging showed that interoceptive attention induced similar significant activation in the bilateral IC, with the highest degree of activation in the middle short gyrus, followed by the anterior and posterior short gyri; however, interoceptive accuracy induced significant activation of the right dorsal AI [48]. The present study and previous studies have shown that there may be some functional differences between the bilateral AI.

This study also shows that LH neurons, including orexin neurons, are gradually activated during FAA by the food restriction protocol. Orexin neurons in the LH are associated with arousal [49][50][51][52] and are part of the "approach-exploratory" system that regulates muscle tone and motor behavior [53,54]. It has been reported that mice with ablated orexin neurons had a severe defect in showing the expected food-anticipatory increases in locomotor activity [55,56] and wakefulness [56] under RF conditions. Orexin neurons are necessary for the strong expression of locomotor activity in anticipation of feeding [33]. Orexin neurons in the LH are activated during food anticipation and exhibit self-sustained oscillations driven by food entrainment [37]. The current data also support previous findings that LH neurons, including orexin neurons, participate in FAA [12,33,37,55,56]. The number of c-Fos-positive orexin neurons increased with the days of RF, reached a maximum, and remained stable after eight days of RF, which suggests that the activation of orexin neurons in the LH during FAA is maintained at a high level after 8 days of the RF protocol. The finding that 1-day RF already activates orexin neurons suggests their involvement in hunger-induced foraging motivation and behavior [31]. Interestingly, the present study also found that non-orexin neurons in the LH were activated during FAA with the same activation tendency as orexin neurons. Thus, more research is needed to determine how different neurons in the LH contribute to FAA.
The mechanism of FAA production still needs to be clarified; here, we hypothesize possible mechanisms of FAA production caused by daily scheduled RF. It has been reported that FAA is regarded as the output of the FEO [12,38]. The present study suggests that neurons in the IC and LH, including orexin neurons, may be part of the FEO. In our research, FAA became obvious from day 3 to day 4 of food restriction and remained elevated until the 15th day. On the other hand, the increase in the number of c-Fos-positive neurons in the IC and LH, including orexin neurons, during the food anticipatory period began from day 1 of food restriction and peaked at different phases of the food restriction protocol. In terms of onset, the activation of IC and LH neurons thus precedes the formation of FAA, suggesting that the number of activated neurons in the IC and LH is insufficient to cause FAA formation on the first day of RF. As the number of days of RF increases, the number of activated neurons in the IC (especially in the AI and MI subregions) and LH gradually increases and might become sufficient to cause the formation of FAA on the third or fourth day of RF. How the increased number of activated neurons in these brain areas causes the formation of FAA needs further research.

Interestingly, c-Fos expression in IC neurons during the food anticipatory period peaked on the 15th day of food restriction, whereas c-Fos expression in LH neurons, including orexin neurons, peaked on the 8th day. That is, the development of activation in IC neurons was slower than that in LH neurons, including orexin neurons. The nerve fibers of orexin neurons in the LH project to the IC [57,58], and increased orexin transmission in the IC of aged rats can enhance feeding behavior by significantly reducing feeding latency [59]. According to the activation patterns of IC and LH neurons during FAA in the present study, we speculate that LH neurons, including orexin neurons, could receive food entrainment information from receptors and brain regions that sense internal information (such as hunger, weight loss, or a Zeitgeber) and awaken animals during the food anticipatory period. This information might then be transmitted to the IC by the orexinergic nerve fibers projecting from the LH [57,58], exciting neurons in the IC [59] and thereby involving it in FAA. In addition, there was a positive correlation between the number of activated neurons in the IC and activated orexin neurons during FAA, suggesting an interaction between the activation of neurons in the IC (including AI, MI, and PI) and orexin neurons during the food anticipation period. Another possibility is that the activation of the LH and IC developed independently and supported their different functional roles in FAA production [12,19,33]. Therefore, the exact mechanism of the involvement of the IC and LH in FAA production, and the relationship between them during FAA, needs to be further studied.
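As a minimal illustration of the correlation analysis referred to above, the following Python sketch computes a Pearson correlation between per-animal c-Fos counts in the IC and the fraction of activated orexin neurons in the LH. The arrays are illustrative placeholders, not the study's measurements.

```python
# Sketch: per-animal IC c-Fos counts vs. fraction of c-Fos-positive orexin
# neurons in the LH; the values below are invented for illustration only.
import numpy as np
from scipy.stats import pearsonr

ic_cfos = np.array([12, 18, 25, 31, 40, 47, 52, 60, 66, 71])       # counts per counting frame
lh_orexin_ratio = np.array([0.10, 0.15, 0.22, 0.28, 0.35,
                            0.41, 0.44, 0.52, 0.55, 0.61])          # double-positive / orexin

r, p = pearsonr(ic_cfos, lh_orexin_ratio)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```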
Before the experiment, we speculated that RF mice might show reduced daily food intake and body weight compared to AL mice. However, in the present study, even under the condition of time-limited access to food, the daily food intake of the RF group increased to the same level as that of the AL group, and the daily body weight of the RF group increased to a level higher than that of the AL group in the late stages of the protocol. Furthermore, the present study showed activation of neurons in the IC and LH, including orexin neurons, during FAA. Therefore, we hypothesize the following mechanism: the activation of the IC and LH during the last stage of the food restriction protocol may compensate for the reduced food intake and increased energy consumption during the early phase of RF. In particular, information about reduced food intake and increased energy use may be sent to excite orexin neurons in the LH [31], which then send this information to the IC [57,58] and excite neurons there [59]. This might lead to the recovery of food intake [14,15,28,31,60] during the scheduled feeding period and an increase in the daily body weight of mice in the late stage of the RF.

Limitations of the experiments

Most previous studies on FAA have used male rodents [12, 35-37, 39, 61], and a few studies reported no significant differences in FAA between male and female mice [33]. However, it has also been reported that male mice show significantly more FAA than female mice [62]. Although this study only used male mice, it would be valuable to compare FAA development between male and female mice.

In the study of the developmental process of c-Fos expression in IC and LH neurons during the food anticipatory period, we examined the changes in c-Fos expression in these brain areas on the 1st, 8th, and 15th day of RF (Fig. 6; Fig. 8). The findings provide insights into the developmental dynamics of c-Fos expression in neurons located in the IC and LH during the food anticipation period. However, future experiments with more time points may yield more comprehensive and conclusive results regarding this developmental process.

Conclusion

In summary, this study demonstrates that bilateral insular and lateral hypothalamic neurons, including orexin neurons, are active during FAA. The temporal patterns of neuronal activation differ between several subregions of the IC, and also between the IC and LH, suggesting that the IC and LH are differently involved in the neural network for FAA production.

Fig. 2 Changes in daily food intake and body weight during 15 days of restricted feeding. Time course of the daily food intake (A) and changes in daily body weight (B) in response to 15 days of food restriction in mice. The mean (± SEM) daily food intake (A) or body weight (B) for ten subjects is shown on the y-axis. The x-axis represents experimental days. *P < 0.05; **P < 0.01; ****P < 0.0001, difference between AL mice and RF mice according to Bonferroni's multiple comparison test
Fig. 3 Increase in the number of c-Fos-positive neurons in the IC of 15-day RF mice. A, B, and C The number of c-Fos-positive neurons counted within a square (500 × 500 μm) in the anterior (A), middle (B), and posterior (C) insular cortex under ad libitum feeding (AL) and 15 days of restricted feeding (RF). A significantly higher number of c-Fos-positive neurons was observed in RF than in AL. A-C, Left. The mean (± SEM) of c-Fos-positive neurons for ten subjects is shown on the y-axis. *P < 0.05; ***P < 0.001, difference between AL mice and RF mice according to unpaired t-test with Welch's correction. A-C, Middle. The tallied regions in AI, MI, and PI are marked with yellow squares in representative mouse brain sections counterstained with DAPI. A-C, Right. Photographs of representative sections showing c-Fos-positive neurons in AI (A, Right), MI (B, Right), and PI (C, Right) from AL and RF mice. Arrows indicate representative c-Fos signals (A-C, Right). Green represents the c-Fos signal. The scale bar is 50 μm

Fig. 4 Increases in the number of c-Fos-positive and c-Fos-orexin-double-positive neurons in the LH of 15-day RF mice. A and B The number of c-Fos-positive neurons (A) and the ratio of c-Fos-orexin double-positive neurons to orexin neurons (B) counted within a rectangle (1200 × 400 μm) in the lateral hypothalamus (LH) under ad libitum feeding (AL) and 15 days of restricted feeding (RF). The RF group showed significantly higher values. A and B, Left. The mean (± SEM) of c-Fos-positive neurons (A, Left) and the ratio of c-Fos-orexin double-positive neurons to orexin neurons (B, Left) for ten subjects are shown on the y-axis. ****P < 0.0001, difference between AL mice and RF mice according to unpaired t-test with Welch's correction. A and B, Middle. The tallied region in the LH is marked with a yellow rectangle in a representative section of the mouse brain counterstained with DAPI. A and B, Right. Photographs of representative sections showing c-Fos-positive neurons in the LH (A, Right) and c-Fos-positive orexin neurons in the LH (B, Right) from AL and RF mice. Arrows indicate representative c-Fos signals (A, Right) and representative c-Fos-orexin signals (B, Right). Green represents the c-Fos signal, and red represents the orexin signal. The scale bar is 50 μm

Fig. 6 The number of c-Fos-positive neurons in the AI/MI/PI of RF mice at 1, 8, and 15 days. The mean (± SEM) number of c-Fos-positive neurons for ten subjects is shown on the y-axis for the AI (A), MI (B), and PI (C). Data are a reproduction of the RF groups shown in A-C of Figs. 3 and 5. *P < 0.05; **P < 0.01, difference among the 1-day, 8-day, and 15-day RF mice according to one-way ANOVA with Bonferroni's multiple comparison test

Fig. 7 The number of c-Fos-positive neurons and c-Fos-positive orexin neurons in the LH of 1-day or 8-day RF mice. A and B, Top. Representative sections show c-Fos-positive neurons (A) and c-Fos-positive orexin neurons (B) in the LH of AL mice and RF mice after 1 day (Left top) or 8 days (Right top) of food restriction, respectively. The scale bar is 50 μm. The mean (± SEM) of c-Fos-positive neurons (A, Bottom) and the ratio of c-Fos-orexin double-positive neurons to orexin neurons (B, Bottom) for ten subjects are shown on the y-axis. *P < 0.05; **P < 0.01; ***P < 0.001, difference between AL mice and RF mice according to unpaired t-test with Welch's correction
Fig. 8 Comparison of the number of c-Fos-positive neurons and the ratio of c-Fos-orexin double-positive neurons to orexin neurons in the LH of 1-day, 8-day, and 15-day RF mice. The mean (± SEM) of c-Fos-positive neurons (A) and the ratio of c-Fos-orexin double-positive neurons to orexin neurons (B) for ten subjects are shown on the y-axis. Data are a reproduction of the RF groups shown in A and B of Figs. 4 and 7. **P < 0.01; ***P < 0.001, difference among the LH of the 1-day, 8-day, and 15-day RF mice according to one-way ANOVA with Bonferroni's multiple comparison test

Fig. 10 Daily locomotor activity, food intake, and body weight of mice during 1 or 8 days of restricted feeding. A and B Increased food anticipation activity (locomotor activity during ZT2-ZT4) in response to restricted feeding in 8-day (A, n = 10) or 1-day (B, n = 10 during ad libitum feeding, n = 5 during food restriction) RF mice. C A similar daily total locomotor activity was observed in the 8-day RF mice and the AL mice. D Daily food intake fluctuation and (E) changes in daily body weight of the 8-day RF mice. The mean (± SEM) of daily food intake (D) or daily body weight (E) for ten subjects is shown on the y-axis. The x-axis represents experimental days. *P < 0.05 (A); ****P < 0.0001 (A and D); **P < 0.01 (E), difference between AL mice and RF mice according to repeated measures two-way ANOVA with Bonferroni's multiple comparison test. According to repeated measures two-way ANOVA, there was no significant difference in locomotor activity during ZT2-ZT4 (B) or ZT0-ZT24 (C) between AL mice and RF mice
2023-12-08T14:54:46.911Z
2023-12-01T00:00:00.000
{ "year": 2023, "sha1": "ab5bb53dc4b548debaac633ae7947733783efa38", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Springer", "pdf_hash": "ab5bb53dc4b548debaac633ae7947733783efa38", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
13006801
pes2o/s2orc
v3-fos-license
Evaluation of serum protein-based arrival formula and serum protein supplement (Gammulin) on growth, morbidity, and mortality of stressed (transport and cold) male dairy calves

Previous studies with calves and other species have provided evidence that blood serum-derived proteins and fructooligosaccharides (FOS) may benefit intestinal health. We assessed the effects of supplementing products containing serum proteins as a component of arrival fluid support, or serum proteins plus FOS (in addition to other solids, minerals, and vitamins) in an early life dietary supplement, on performance, morbidity, and mortality of stressed (transport, cold) male calves. Male Holstein calves (n = 93) <1 wk old were stratified by arrival body weight (BW) and plasma protein concentration, and then randomly assigned to 1 of 4 treatment groups in a 2 × 2 factorial arrangement of one-time administration of fluid support [either control electrolyte solution (E) or the serum protein-containing arrival formula (AF)] and 14 d of either no supplementation (NG) or supplementation with Gammulin (G; APC Inc., Ankeny, IA), which contains serum proteins and FOS in addition to other solids, minerals, and vitamins. Upon arrival at the research facility, calves were orally administered either AF or E. At the next feeding, half of the calves from each fluid support treatment received either milk replacer (20% crude protein, 20% fat) or the same milk replacer supplemented with G (50 g/d during the first 14 d). Starter and water were freely available. Feed offered and refused was recorded daily. Calf health was assessed by daily assignment of fecal and respiratory scores. Stature measures and BW were determined weekly. Blood samples were obtained at d 0 (before treatments), 2, 7, 14, and 28. Calves were weaned at d 42 and remained in the experiment until d 56. After 2 wk of treatments, calves previously fed AF had greater body length (66.6 vs. 66.0 cm), intakes of dry matter (38.7 vs. 23.5 g/d) and crude protein (9.2 vs. 5.6 g/d) from starter, and cortisol concentration in blood (17.0 vs. 13.9 ng/mL) than calves fed E. Supplementation with G resulted in greater BW gain during the first 2 wk, increased intakes of dry matter and CP, and decreased respiratory scores. For the 8-wk experiment, G supplementation resulted in a lower mean fecal score (1.6 vs. 1.8) and fewer antibiotic treatments per calf (1.5 vs. 2.5) than NG. Survival was greater in G than in NG calves (98 vs. 84%). Despite the marked reduction in morbidity and mortality, blood indicators of acute-phase response, urea N, and total protein were not affected by AF or G in transported, cold-stressed male calves.

INTRODUCTION

The most challenging period for young calves is from birth through weaning; during this time calves experience remarkable physiological, metabolic, and environmental changes (Davis and Drackley, 1998). Despite advances in calf nutrition and health, morbidity and mortality are greater during this time than at any other point of the animal's life (Quigley et al., 2005; NAHMS, 2010). The main causes of mortality during the preweaned period are diarrhea and respiratory problems, with most of the deaths occurring during the first 2 to 3 wk of life (NAHMS, 2010). For the dairy enterprise, decreasing morbidity and mortality is important because of the economic losses associated with treating illnesses, decreased calf performance, and death.
Factors associated with morbidity and poor performance during the preweaned period are inadequate feeding, poor management, and the presence of stressors such as cold weather and transport (Wells et al., 1996; Svensson et al., 2006b; Vasseur et al., 2010). Success in calf rearing is a function of excellent nutrition and management practices that start as soon as the calf is born. Adequate colostrum intake and provision of a dry, comfortable, and clean environment to minimize exposure to pathogens are the first steps to provide better adaptation of the newborn calf to its new environment (Quigley et al., 1995; Godden, 2008). Colostrum-deprived calves exposed to unfavorable conditions (e.g., high pathogen loads, environmental stressors) after birth exhibit greater risk for morbidity and mortality (Quigley et al., 2005; Godden, 2008; Gulliksen et al., 2009). Subsequent nutrition is crucially important to provide all nutrients for proper growth and health; however, adequate nutrition is not enough to ensure proper survival, growth, and health in circumstances where the young calf is immunologically disadvantaged, facing unfavorable environmental conditions, or subjected to other stressors. Transport (Johnston and Buckland, 1976; Odore et al., 2004) and cold weather represent stressors for the young calf, affecting its immune system (Odore et al., 2004; Fike and Spire, 2006) and increasing the susceptibility to enteric and respiratory problems (Fike and Spire, 2006; Svensson et al., 2006a), which are the principal causes of high morbidity and mortality during the preweaned period (NAHMS, 2010). Dietary supplements or additives that would improve nutrition and diminish the negative effects of poor management and unfavorable conditions would be useful for calf managers. Previous research with calves (Arthington et al., 2000b; Quigley et al., 2002) suggested that serum protein products improve health and decrease morbidity and mortality (Quigley and Wolfe, 2003) while maintaining or increasing growth. In addition, studies in humans and other species have shown that fructooligosaccharides (FOS) stimulate growth of beneficial bacteria in the gastrointestinal tract, inhibit colonization by pathogens, and improve mineral absorption (Howard et al., 1995; Sabater-Molina et al., 2009; Grand et al., 2013). Based on these data, our prediction was that products containing serum proteins, or serum proteins plus FOS, in addition to milk products (dried whey and dried whey protein concentrate), minerals, and vitamins, would improve performance, morbidity, and mortality of dairy calves subjected to the stressors of winter cold weather and transport, especially during their first 2 to 3 wk of life. Therefore, the aim of the present study was to evaluate the effects of an arrival fluid support product containing serum proteins, or an early life dietary supplement containing serum proteins and FOS among other ingredients, on performance, morbidity, and mortality of stressed male calves.

MATERIALS AND METHODS

Calf Management and Experimental Design

All procedures were approved by the University of Illinois Institutional Animal Care and Use Committee. Ninety-three male Holstein calves <1 wk old were purchased from dairy farms in western New York State by a buyer in 3 blocks of 30 to 32 calves each and transported to the University of Illinois Nutrition Field Laboratory.
Calves were transported in a livestock trailer and subjected to a ~14-h trip. The first group was acquired in February 2009 (32 calves), the second group in November 2009 (31 calves), and the third in January 2010 (30 calves). Each group of calves was examined by the veterinary staff upon arrival. The evaluation included heart and lung auscultation, hydration status, alertness, mobility, navel status, and body temperature. After receiving an ear tag in the left ear, calves were subjected to a standard arrival and postarrival processing scheme. A blood sample was obtained by jugular venipuncture and plasma protein was determined by refractometry. Calf BW was measured. Calves were stratified by arrival BW and plasma protein concentration, blocked, and assigned randomly within blocks to 1 of 4 treatments. Treatments were in a 2 × 2 factorial arrangement of type of arrival fluid support (AFS) and early-life dietary supplementation (SUP). The AFS factor comprised one-time administrations of either control electrolyte solution (E) or arrival formula (AF). The SUP factor consisted of either no supplement (NG) or 14 d of supplementation with a commercial product (Gammulin; APC Inc., Ankeny, IA) containing serum proteins and FOS (G). The resulting individual treatments, therefore, were EN (n = 25), AFN (n = 24), EG (n = 22), and AFG (n = 22). Calves were housed in individual hutches (Calf-tel, Hampel Corp., Germanton, WI) bedded deeply with straw over crushed rock covered with landscape cloth. Straw was added to each hutch daily as needed. All calves were castrated by the veterinary staff via surgical removal of the testicles using local anesthesia 1 wk after arrival. Calves remained in hutches from arrival until the end of the experiment at d 56, when they were sold.

Feeds and Feeding Program

The AF (APC Inc.) fluid support supplement was administered via nipple feeder or esophageal feeder upon arrival, and was prepared by mixing, for each calf, 250 g of AF in 2 L of warm (45°C) water. The ingredient composition of AF included spray-dried bovine serum in addition to milk products (dried whey and dried whey protein concentrate), minerals, electrolytes, and vitamins. Administration of AF was compared with administration of a commercial electrolyte solution (Land O'Lakes electrolyte system; Land O'Lakes Animal Milk Products Co., Arden Hills, MN), which was prepared by mixing 77 g in 2 L of warm (45°C) water. Therefore, each calf received one 2-L feeding of either AF (n = 46) or E (n = 47). At the next feeding, all calves received a nonmedicated milk replacer (Sav-A-Caf; Milk Products LLC, Chilton, WI) containing all milk proteins, 20% CP, and 20% fat. The milk replacer was either not supplemented (NG) or supplemented with G by adding 25 g at each feeding (50 g/d) to the reconstituted milk replacer during the first 14 d after arrival only. The Gammulin nutritional supplement contained bovine serum, FOS (inulin), dried whey, maltodextrin, minerals, and vitamins. Milk replacers were reconstituted to 12.5% solids (not including G supplementation) and were fed at a rate of 10% of arrival BW in 2 feedings daily. Beginning on d 3 after arrival, milk replacers were fed at a rate of 12% of arrival BW for 2 wk, followed by 10% of arrival BW during wk 3 to 5. During wk 6, the afternoon milk replacer feeding was eliminated, and calves were weaned at d 42 after arrival.
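The feeding schedule above is easy to express as a small calculation. The sketch below is a rough Python rendering under stated assumptions: the 10-12% rate applies to the reconstituted liquid, 1 kg of mix occupies about 1 L, and wk 6 is approximated by halving the daily allowance when the afternoon feeding is dropped. None of these simplifications come from the paper itself.

```python
# Daily milk replacer arithmetic for one calf (illustrative simplification).
def daily_milk_replacer(arrival_bw_kg: float, day: int, gammulin: bool):
    """Return (liquid_kg, powder_g, gammulin_g) offered on a given day."""
    if day <= 2:
        rate = 0.10           # 10% of arrival BW on d 1-2
    elif day <= 16:
        rate = 0.12           # 12% of arrival BW for 2 wk beginning d 3
    elif day <= 35:
        rate = 0.10           # 10% of arrival BW during wk 3-5
    elif day <= 42:
        rate = 0.05           # wk 6: afternoon feeding dropped (halved, an assumption)
    else:
        return 0.0, 0.0, 0.0  # weaned at d 42
    liquid = rate * arrival_bw_kg         # kg of reconstituted replacer per day
    powder = liquid * 0.125 * 1000        # 12.5% solids -> g of powder per day
    g_supplement = 50.0 if (gammulin and day <= 14) else 0.0
    return liquid, powder, g_supplement

print(daily_milk_replacer(42.0, day=5, gammulin=True))
# -> (5.04, 630.0, 50.0): ~5 L of mix, 630 g powder, plus 50 g Gammulin
```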
Commercial texturized starter (20% MomentaCalf Starter - RUM; Vita Plus Corporation, Madison, WI) containing 20% CP was provided daily to all calves for ad libitum intake from d 4 until 56. Calves were fed at 0530 and 1630 h each day; milk replacer, starter, and water intakes were recorded daily. Clean, fresh water was provided twice daily, after the morning and evening feedings.

Data Collection

Calf health status was monitored throughout the day and assessed by daily assignment of fecal and respiratory scores, as well as recording of all medical treatments and elevated body temperatures. Fecal scores were recorded using a 1 to 4 scale with the following guidelines: 1 = firm, well formed (not hard); 2 = soft, pudding-like; 3 = runny, pancake batter; and 4 = liquid, splatters. Respiratory scores on a 1 to 5 scale were recorded using the following guidelines: 1 = normal; 2 = runny nose; 3 = heavy breathing; 4 = cough, moist; and 5 = cough, dry. Body temperatures were recorded daily until d 7, and at any time when a calf appeared depressed or was off feed. Calves were measured for BW, withers height, body length, heart girth, hip height, and hip width at arrival and after the morning feeding on the same day each week through wk 8. Individual intakes of milk replacer, starter, and water were measured and recorded daily. For each replicate of calves, milk replacer and starter grain were sampled weekly from wk 1 to 8, whereas G was sampled during the supplementation period (wk 1 and 2). The AF was only offered and sampled on the arrival day for each replicate of calves. All samples were stored at −20°C and then composited by period and replicate. Composited samples were sent to a commercial laboratory (Dairy One Cooperative Inc., Ithaca, NY), where they were analyzed for concentrations of DM, CP, fat, and minerals by wet chemistry methods as described (http://dairyone.com/general-resources/publications/). Water intake was determined as free water, water in reconstituted milk replacer, water in feed, and total water intake. Water freely available after feedings was considered free water. Water in milk replacer was the water used to reconstitute the milk replacer. Water in feed was the water contained in the as-fed starter and milk replacer powder. Total water was the sum of all sources of water. Samples of blood were obtained at arrival (d 0) before the initial feeding and on d 2, 7, 14, and 28. Blood samples were collected via jugular venipuncture with 20-gauge × 2.5 cm needles (Becton Dickinson and Company, Franklin Lakes, NJ) into 10-mL evacuated serum tubes (Becton Dickinson and Company) containing clot activator. Samples were allowed to clot at room temperature for at least 30 min and then placed on ice. All tubes were centrifuged within 2 h of collection at 4°C for 15 min at 959 × g. After centrifugation, the serum was removed, placed into 5-mL tubes, and stored at −20°C until analysis. Serum from d 0, 2, 7, and 14 was analyzed for concentrations of IgG (d 0 only), acid-soluble protein (ASP), haptoglobin, cortisol, Zn, and albumin to determine evidence of stress and inflammatory responses (Ballou et al., 2011). Serum from d 28 was analyzed for concentrations of urea N and total protein. Concentrations of albumin, urea N, and total protein were determined at the University of Illinois, College of Veterinary Medicine Clinical Pathology Laboratory using commercially available kits (Olympus America Inc., Center Valley, PA).
Concentration of IgG was determined by radial immunodiffusion (RID kit, VRMD Inc., Pullman, WA) by APC Inc. Acid-soluble protein, haptoglobin, cortisol, and Zn were analyzed at Texas Tech University. Haptoglobin and Zn were determined as described by Makimura and Suzuki (1982) and Ballou et al. (2011), respectively. Cortisol was quantified using an enzyme immunoassay kit (Arbor Assays, Ann Arbor, MI) as described by the manufacturer (http://www.arborassays.com/documentation/inserts/K003-H.pdf). Acid-soluble protein was determined as follows. Serum samples (50 μL) were incubated with 1 mL of perchloric acid solution (66 mL of 0.6 M perchloric acid diluted in 1 L of distilled water) for 20 min at room temperature. Samples were then centrifuged (1,380 × g, at room temperature for 30 min). Aliquots (25 μL) of distilled water, standards, and unknowns (perchloric acid supernatants after centrifugation) were pipetted in duplicate into a 96-well plate. Then, 200 μL of working reagent [50 parts of bicinchoninic acid solution and 1 part of copper (II) sulfate pentahydrate 4% solution] were pipetted into each well. The plate was covered with film and incubated at 60°C for 15 min before reading at 562 nm using a plate reader.

Statistical Analysis

Statistical analyses of daily and weekly data were performed using the GLIMMIX, MIXED, LOGISTIC, and LIFETEST procedures of SAS (v9.4, SAS Institute Inc., Cary, NC). A linear mixed model (MIXED procedure) was constructed to analyze data for growth, feed intakes, and blood metabolites. The model contained the fixed effects of AFS (E or AF), SUP (NG or G), the interaction of AFS and SUP, time (day or week), and interactions of time with treatments. Replicate and calf were considered random effects; calf was nested within replicate, AFS, and SUP. Time (day or week) was specified as the repeated factor, with calf nested within replicate, AFS, and SUP as subject. Calf was the experimental unit. The covariance structures considered for repeated measures analysis were compound symmetric, autoregressive order one, and unstructured. The covariance structure that yielded the lowest corrected Akaike information criterion was autoregressive order one, and therefore it was used in the models (Littell et al., 1998). Initial measurements (before treatment administration) of BW, withers height, body length, heart girth, hip height, hip width, and concentrations of plasma albumin, ASP, cortisol, haptoglobin, and Zn at d 0 were used as covariates when analyzing the respective data. Least squares means were calculated and are presented with their respective standard errors of the mean. Degrees of freedom were estimated by using the Kenward-Roger method in the model statement (Littell et al., 1998). Residual distribution for each variable was evaluated for normality and homoscedasticity. Daily health data were analyzed using multivariable logistic mixed models (GLIMMIX procedure) considering the count outcome variables: number of calves having a high fecal score (fecal score ≥3 on a 1 to 4 scale), number of days with a high fecal score, percentage of calves with a high fecal score, number of calves with a high respiratory score (respiratory score ≥3 on a 1 to 5 scale), number of days with a high respiratory score, and percentage of calves with a high respiratory score. The model contained the fixed effects of AFS and SUP, the interaction of AFS and SUP, time (week), and interactions of time with AFS and SUP.
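As an open-source stand-in for the PROC MIXED model described above, the sketch below fits a linear mixed model with fixed AFS, SUP, their interaction, week, and an arrival-BW covariate, plus a random intercept per calf. Note the substitution: statsmodels does not expose SAS's AR(1) repeated-measures covariance or Kenward-Roger degrees of freedom, so a random intercept is used instead; the data and column names are simulated placeholders, not the study's records.

```python
# Simplified open-source analogue of the growth model described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
rows = []
for calf in range(40):
    afs, sup = rng.integers(0, 2), rng.integers(0, 2)   # 1 = AF, 1 = G
    calf_effect = rng.normal(0, 0.05)                   # calf-level random deviation
    arrival_bw = rng.normal(43, 3)                      # covariate: arrival BW, kg
    for week in range(1, 9):
        adg = (0.35 + 0.05 * sup + 0.02 * week + 0.02 * afs * sup
               + 0.01 * (arrival_bw - 43) + calf_effect + rng.normal(0, 0.05))
        rows.append({"calf": calf, "afs": afs, "sup": sup, "week": week,
                     "arrival_bw": arrival_bw, "adg": adg})
df = pd.DataFrame(rows)

# Random intercept per calf stands in for the AR(1) repeated structure
m = smf.mixedlm("adg ~ afs * sup + week + arrival_bw", df, groups=df["calf"]).fit()
print(m.summary())
```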
When the variance and means of the fecal and respiratory scores were evaluated, the data were found to be over-dispersed. To compensate, negative binomial regression analysis was used to establish relationships with the fixed effects of AFS, SUP, and the interaction of AFS and SUP. Means for variables are presented with their respective SEM as well as the odds ratio (OR). Calf survival was assessed using a Cox proportional hazard model (LIFETEST procedure). The fixed effects of AFS, SUP, and the interaction of AFS and SUP were treated as strata and forced into the model. The proportional hazards assumption of the model was assessed graphically by plotting the logarithm of the hazard function versus the logarithm of time. Residuals were evaluated for homogeneous distribution. Finally, a logistic regression model (LOGISTIC procedure) considering the binary outcome variable mortality was constructed. The OR from main effects and treatments are described. In all statistical procedures, significant differences were declared when P ≤ 0.05, and trends toward significant effects were noted when 0.05 < P ≤ 0.10.
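The over-dispersion handling and odds-ratio models just described can be sketched as follows. This is an illustrative Python analogue of the negative binomial (GLIMMIX) and logistic (LOGISTIC) analyses, run on simulated records with hypothetical column names; it is not a reproduction of the SAS code or the study's data.

```python
# Negative binomial model for over-dispersed score counts and odds ratios
# from a logistic model for mortality (illustrative data).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 93
df = pd.DataFrame({
    "gammulin": rng.integers(0, 2, n),   # SUP: 1 = G, 0 = NG
    "af": rng.integers(0, 2, n),         # AFS: 1 = AF, 0 = E
})
# Days with a high respiratory score: fewer when supplemented with G
lam = np.exp(1.2 - 0.8 * df["gammulin"])
df["high_rs_days"] = rng.negative_binomial(2, 2 / (2 + lam))
df["died"] = rng.binomial(1, np.where(df["gammulin"] == 1, 0.02, 0.16))

nb = smf.glm("high_rs_days ~ gammulin * af", df,
             family=sm.families.NegativeBinomial()).fit()
print(np.exp(nb.params))            # exponentiated coefficients = rate ratios

logit = smf.logit("died ~ gammulin", df).fit(disp=False)
print(np.exp(logit.params))         # exponentiated coefficient = mortality OR
```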
RESULTS

Ambient Temperature

Mean low and high ambient temperatures ranged from −4.7 to 16.5, −16.0 to 11.8, and −10.4 to 14.1°C for replicates 1, 2, and 3, respectively. Ambient temperature was below the lower critical temperature of 15°C (NRC, 2001) most of the time throughout the study (Figure 1), indicating that calves needed to expend energy for thermoregulation.

Nutrient Composition of Diets

Analyzed nutrient compositions of milk replacer and starter are listed in Table 1. Commercial milk replacer was declared to contain 20% CP on an as-fed basis, and the actual analyzed CP (DM basis) was 21.7%. Similarly, commercial starter declared to contain 20% CP on an as-fed basis had a measured CP content (DM basis) of 23.6%. Chemical analysis of AF and G indicated that CP content on a DM basis was 47.5 and 75.3%, respectively (Table 1). Although the nutrient compositions of milk replacer and starter were similar among treatments, G supplementation provided greater DM and CP intake in supplemented calves.

Initial Plasma Protein and Serum IgG Concentrations

At arrival, calves were blocked by plasma protein concentration determined by refractometry, which did not differ among treatments (Table 2). Initial concentrations of IgG in serum analyzed later also did not differ significantly among treatment groups (Table 2), although we noted a weak tendency (P = 0.13) for greater IgG in calves that subsequently received G. Total plasma protein was analyzed after the experiment (measured at completion of the study in samples from the day of arrival, d 0) to further explore this relationship, but it too did not differ significantly among treatments (Table 2). The number of calves with initial IgG below 1.0 g/dL (10 g/L) was numerically less for calves assigned to G (Table 2).

Intakes

Calves were fed a limited amount of milk replacer (10 to 12% of arrival BW) as described. Compared with NG calves, calves supplemented with G had 54.5 g/d greater (P < 0.01) DMI from milk replacer during the supplementation period, primarily because of the addition of G (Table 3). Supplementation with G resulted in 36.2 g/d greater (P < 0.01) intake of milk replacer CP in G-supplemented calves than in NG calves (Table 3). Calf starter was offered ad libitum from d 4 to help meet nutrient requirements and support growth. Although the amount of starter intake during the first 2 wk was not large, calves fed AF had greater intakes of DM (15.1 g/d) and CP (3.7 g/d) from starter (P < 0.01) and tended to have greater total DMI (P = 0.09) than calves fed E. As a result of greater intakes of DM and CP, both imposed (G added to milk replacer) and voluntary (starter), calves supplemented with G had greater (P < 0.01) intakes of total DM and total CP during the first 2 wk (Table 3), by 50.1 and 35.1 g/d, respectively. Greater starter intake in AF-fed calves led to greater starter ME intake (P < 0.01; Table 3). Trends toward significance were observed in the interaction of AFS and SUP for intakes of milk replacer DM (P = 0.08), CP (P = 0.08), and ME (P = 0.09). As designed, supplementation with the high-CP G supplement resulted in calves fed AFG or EG having greater intakes of milk replacer DM (P < 0.01) and CP (P < 0.01) compared with calves fed AFN or EN. Additionally, calves fed the EG treatment had greater (P = 0.04) intake of ME from milk replacer than calves receiving the EN treatment (Table 3). Calves supplemented with G had 0.2 kg/d lower (P = 0.02) free water intake than NG-supplemented calves (Table 3). We found no difference in intakes of water in milk replacer or total water; however, water intake from feed was affected by AFS (P = 0.02) and SUP (P < 0.01; Table 3). These differences were small and likely not biologically significant. Through 8 wk, intakes were not affected by the main effects of AFS or SUP except for milk replacer DM and CP intake (Table 4). Greater mean intake (P < 0.01) of milk replacer DM and CP through 8 wk was the result of increased intake of milk replacer DM and CP at 2 wk (Table 4), as designed. Through 8 wk, significant interactions of SUP and week (P < 0.01) were observed for milk replacer DM, milk replacer CP, and total CP intake. Milk replacer DM and CP intakes increased from wk 1 to 2 and were greater (P < 0.01) for G-supplemented calves, as designed (Figure 2A, B). After the second week, intakes of G-supplemented calves decreased and became similar to intakes of NG-supplemented calves because of the removal of G from the diet (Figure 2). Total CP intake through 2 wk was greater (P = 0.01) for calves supplemented with G; from the second week, total CP intake was similar for G- and NG-supplemented calves (Figure 2C).

Growth

Initial calf BW (Table 5) and body conformation measures (data not presented) did not differ among treatment groups. Differences at 2 wk were not pronounced; neither AFS nor SUP had a great effect on most growth parameters at this time. Calves fed AF had greater (P = 0.05) body length and tended to have greater (P = 0.07) heart girth than calves fed E; conversely, calves supplemented with G tended (P = 0.06) to have greater BW than NG calves (data not presented). Through 8 wk, BW and body conformation parameters, except for hip width, were similar for all groups of calves. Calves fed AF exhibited greater (P = 0.03) hip width (Table 5) than those fed E, but differences were numerically small. The interaction of AFS and SUP was significant (P = 0.04) for hip height; calves fed the AFG or EN treatments had greater (P = 0.05) hip height than EG-fed calves (Table 5). The SUP and week interaction was significant for withers height (P = 0.05) and gain-to-feed ratio (P < 0.01), and tended to be significant for ADG (P = 0.06) and mean BW (P = 0.07).
Through 8 wk, ADG did not differ among treatments, except in the first week, when G-supplemented calves had superior (P = 0.03) ADG compared with nonsupplemented calves (Figure 3A). Feed efficiency was greater (P < 0.01) for G-supplemented calves in the first week only, then became lower (P < 0.05) until wk 4 for these same calves. After wk 5, no differences were detected in feed efficiency between G- and NG-supplemented calves (Figure 3B).

Mortality, Health, and Medical Treatments

Mortality was greater (P = 0.02) for calves that did not receive G (Table 6, Figure 4). Analysis of plasma IgG revealed that the concentration of IgG was lower (P < 0.01) in calves that died than in those that survived (Figure 5). Although we found no difference in the number of electrolyte administrations among treatments, calves supplemented with G received fewer (P = 0.05) antibiotic treatments than calves without G supplementation (Table 6). When data from wk 1 and 2 were analyzed separately, neither SUP nor AFS affected mean fecal score, days with a high fecal score, number of calves with a high fecal score, or percentage of calves with a high fecal score (data not presented). However, the interaction of the main effect AFS and week was significant (P = 0.01) for fecal score: mean fecal score decreased from wk 1 to 2 for calves fed AF, whereas the opposite occurred for E-fed calves. Over the 8-wk experiment, calves supplemented with G had lower (P = 0.01) mean fecal scores than NG-supplemented calves (Table 6). During the first 2 wk after arrival, calves supplemented with G had a lower (P ≤ 0.01) mean respiratory score, fewer days with high respiratory scores, fewer calves with high respiratory scores, and a smaller percentage of calves with high respiratory scores compared with NG-supplemented calves (Table 6). Over the entire 8 wk, a significant interaction of SUP and week (P = 0.02) was observed for respiratory score (Table 6). Although differences were small, calves supplemented with G had a lower respiratory score (P = 0.04) and fewer days with a high respiratory score (P = 0.01) than NG calves. The small difference in respiratory score in G-supplemented calves was probably due to the lower respiratory score during the first 2 wk compared with NG-supplemented calves (Figure 6). Odds ratios for effects of treatments on respiratory scores and mortality were determined (Table 7). Compared with NG-supplemented calves, G-supplemented calves had a lower OR for the number of calves with high respiratory scores (OR = 0.32, P < 0.01), a lower OR for the percentage of calves with a high respiratory score (OR = 0.30, P < 0.01), a lower OR for the number of days with high respiratory scores (OR = 0.33, P = 0.01), and a lower OR for mortality (OR = 0.12, P = 0.04). Calves fed treatment AFG also presented a smaller OR for the percentage of calves with a high respiratory score (OR = 0.18, P < 0.01; Table 7). The main effect of AFS administration tended to result in a lower OR for mortality (OR = 0.80, P = 0.07). Throughout the study, mortality occurred during the first 3 wk (Table 7 and Figure 4).

Blood Metabolites

Although blood metabolite concentrations differed significantly (P < 0.01) over time, main effects and interactions of treatments with time were not significant for plasma concentrations of albumin, haptoglobin, Zn, and ASP (Table 8). Plasma cortisol, on the other hand, was higher (P = 0.04) for calves fed AF than for E-fed calves (17.8 vs. 13.3 ng/mL, respectively).
Interactions of main effects and time were not significant for plasma cortisol. Urea N and total protein in plasma measured at wk 4 did not differ among main effects and interactions (Table 8).

DISCUSSION

The AF and G had only small effects on growth and intakes in our study, but health status, morbidity, and mortality were significantly improved. Fecal score, incidence of scours, and respiratory problems decreased during the supplementation period. Improved health in AF- and G-supplemented calves led to less antibiotic use, decreased morbidity, and less mortality. Whereas some of the differences in mortality and morbidity may have resulted from the random assignment of calves with slightly better IgG status at arrival, the apparent effects of G supplementation are consistent with previous research.

Figure 2. Least squares means and associated SEM for daily intakes of milk replacer DM, milk replacer CP, and total CP for calves supplemented or not with Gammulin (APC Inc., Ankeny, IA). (A) Milk replacer DMI from wk 1 to 6 [early-life supplementation (SUP) × wk, P < 0.01]; (B) milk replacer CP intake from wk 1 to 6 (SUP × wk, P < 0.01); (C) total CP intake from wk 1 to 8 (SUP × wk, P < 0.01).

Arthington et al. (2000a) reported fewer treatments for illness when calves were fed bovine serum as an IgG source at birth and 12 h later. Lower mortality and improved indices of enteric health (improved fecal scores, fewer days with diarrhea, and lower use of electrolytes) were reported by Quigley et al. (2002) when additives containing bovine serum or milk replacer containing spray-dried plasma were fed to calves. Similarly, Arthington et al. (2002) reported improvement in the average respiratory rate of calves infected with coronavirus when bovine serum was supplemented. Reduced calf morbidity and mortality also were reported by Quigley and Wolfe (2003) with inclusion of spray-dried bovine or porcine plasma in milk replacer. The serum protein-based AF contained spray-dried bovine serum, minerals, and vitamins. Gammulin contained the same components as AF plus FOS (Quigley et al., 2002). Bovine serum is a source of immunoglobulins that might provide local intestinal protection against enteric pathogens (Arthington et al., 2000a, 2002; Quigley et al., 2002; Quigley and Wolfe, 2003). Fructooligosaccharides may increase the growth and population of beneficial intestinal microorganisms, thereby improving intestinal health and decreasing the incidence or severity of diseases (Grizard and Barthomeuf, 1999; Menne et al., 2000).

Table 5. Initial, mean, and final BW; mean body conformation measurements; ADG; and feed efficiency from wk 1 to 8 for calves fed electrolyte plus milk replacer without Gammulin (EN), arrival formula plus milk replacer without Gammulin (AFN), electrolyte plus milk replacer with Gammulin (EG), or arrival formula plus milk replacer with Gammulin (AFG).

Because the commercial products tested in our study contained several ingredients that differed from the control treatments, it is not possible to discern whether serum proteins exerted the predominant effects or whether the FOS and other ingredients worked additively or synergistically with the serum proteins. Future experiments could examine these component factors individually or in factorial combinations in comparison with appropriate controls.
Improvements in health for calves supplemented with G also might be related in part to greater nutrient intake in those calves, as supplementation with G led to greater intake of milk replacer DM and CP by design during the first 2 wk. These results agree with Quigley et al. (2002) and Quigley and Wolfe (2003), who reported a small increase in milk replacer intake when spray-dried serum or plasma (of either bovine or porcine origin) was fed to calves. Similar results have been described in other species; Pierce et al. (2005) reported enhanced feed intakes when spray-dried plasma was fed to early weaned pigs. Supplementation with serum protein and greater intakes of DM and CP during the first 2 wk improved BW, ADG, and feed efficiency during early serum protein supplementation. In a 56-d experiment, Quigley et al. (2002) found a 12.9% increase in BW gain from d 29 to 56 when serum protein was supplemented to calves. Morrill et al. (1995) also reported greater BW gain when calves were fed milk replacer containing plasma protein of bovine or porcine origin; however, Quigley et al. (2000) did not see a change in calf performance when spray-dried red blood cells were tested in milk replacer.

Figure 4. Survival percentage from wk 1 to 8 for calves supplemented or not with Gammulin (APC Inc., Ankeny, IA). Survival was greater for calves that received Gammulin (P = 0.02). Gammulin was supplemented for the first 14 d twice daily by adding 25 g at each feeding (50 g/d) to the reconstituted milk replacer.

Although not all measures were significant, the single dose of AF given at arrival resulted in weak tendencies for greater growth during wk 1 to 2, as shown by small increases in final BW (P = 0.17), heart girth (P = 0.07), body length (P = 0.05), hip height (P = 0.19), hip width (P = 0.13), and ADG (P = 0.15). These tendencies may be related to the stimulation of starter intake during the first 2 wk. Greater early starter intake in stressed calves would be a benefit in terms of growth and resistance to disease. The improved early growth is in agreement with Jones et al. (2004), who fed a colostrum replacement derived from bovine serum at 1.5 and 13.5 h of age.

Figure 5. Least squares means and associated SEM for plasma IgG, determined by radial immunodiffusion, at arrival day before treatment assignment for calves that survived (black) or died (gray) during the 56-d experiment. Calves that died had lower initial IgG concentration (P < 0.01).

Figure 6. Least squares means and associated SEM for respiratory score from wk 1 to 8 for calves supplemented or not with Gammulin (APC Inc., Ankeny, IA; early-life supplementation × wk, P < 0.01).

Although concentrations of albumin, haptoglobin, Zn, and ASP differed statistically by day, they remained within the considered normal ranges. Plasma cortisol concentrations in AF-fed calves were greater than those in calves fed E, but concentrations were still much lower than the values of 38.0 and 79.0 ng/mL reported by Khan et al. (1970) and Willett and Erb (1972), respectively. Thus, these increases in our study were small and likely cannot be considered biologically elevated. The increases may be related to the greater protein intake during the first AFS feeding. Together, these results provide no evidence of a difference in inflammation or acute-phase response between calves supplemented with serum proteins and those not supplemented.
A tendency toward greater plasma protein concentrations was observed by Quigley et al. (2002) and Jones et al. (2004) when feeding milk replacer containing spray-dried bovine plasma. However, in our study, concentrations of urea N and total protein in plasma at wk 4 did not differ between G- and NG-supplemented calves.

CONCLUSIONS

Feeding a product containing serum proteins and FOS, in addition to milk products, minerals, and vitamins, to preweaned dairy calves that were exposed to the stressors of transport and cold weather resulted in small differences in early feed intakes and growth but significantly decreased morbidity and mortality. The AF, which contained serum proteins in addition to milk solids, minerals, and vitamins, promoted starter intake and growth and improved fecal score during the first 2 wk. Similarly, G supplementation increased early nutrient intake and stimulated early growth, decreased the incidence of respiratory problems at 2 wk, and significantly decreased mortality. Whether these effects are the result of serum proteins, FOS, and other ingredients working additively or synergistically cannot be discerned in the present study. Indicators of acute-phase response were not affected by AF versus E or G versus NG. Because serum IgG status was somewhat better by random chance in calves assigned to receive G than in calves that were not supplemented, the decreases in mortality and morbidity in G-supplemented calves should be confirmed in studies with larger numbers of at-risk calves.

ACKNOWLEDGMENTS

We extend our gratitude to all the graduate and undergraduate students who assisted with this research.
2018-04-03T00:33:46.365Z
2016-09-07T00:00:00.000
{ "year": 2016, "sha1": "662595a58d6de0b8f28585261d362c31edd88b9c", "oa_license": "elsevier-specific: oa user license", "oa_url": "http://www.journalofdairyscience.org/article/S0022030216306154/pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "66aba1ace0ac4c2f990646833987b3006c0fe5d6", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
259056759
pes2o/s2orc
v3-fos-license
Multi-Objective Framework for Optimal Placement of Distributed Generations and Switches in Reconfigurable Distribution Networks: An Improved Particle Swarm Optimization Approach

Distribution network operators and planners face a significant challenge in optimizing planning and scheduling strategies to enhance distribution network efficiency. Using improved particle swarm optimization (IPSO), this paper presents an effective method for improving distribution system performance by concurrently deploying remote-controlled sectionalizing switches, distributed generation (DG), and optimal network reconfiguration. The main objectives of the proposed optimization problem are to reduce switch costs, maximize reliability, reduce power losses, and enhance voltage profiles. An analytical reliability evaluation is proposed for DG-enhanced reconfigurable distribution systems, considering both switching-only and repair-and-switching interruptions. The problem is formulated as a mixed integer nonlinear programming problem, which is known to be NP-hard. To solve the problem effectively while improving the exploration and exploitation capabilities of conventional particle swarm optimization (PSO), a novel chaotic inertia weight and crossover operation mechanism is developed here. It is demonstrated that IPSO can be applied to both single- and multi-objective optimization problems, where distribution systems' optimization strategies are considered sequentially and simultaneously. Furthermore, IPSO's effectiveness is validated and evaluated against well-known state-of-the-art metaheuristic techniques for optimizing IEEE 69-node distribution systems.

Background

According to statistical studies, distribution networks account for the largest share of the causes of consumer blackouts in power networks [1]. This can be attributed to the large amount of equipment and the high failure rates associated with electrical distribution networks [2]. There is no doubt that one of the primary objectives of distribution companies is customer satisfaction. A distribution network, however, suffers from high energy losses and poorly regulated voltages due to its high currents and low operating voltages. As a result, it is imperative to review the existing policies governing network planning and operation, and to direct them towards improving reliability, reducing losses, and enhancing voltage stability. Optimal switch placement, distributed generation (DG) placement, and distribution feeder reconfiguration (DFR) are the most effective and well-known ways to make distribution networks more reliable and efficient [3], and each of these strategies alone can contribute to this goal. Ref. [21] proposed a two-stage dispatch model for an economical energy system with renewables, storage, and load uncertainty. However, the excessive penetration or inefficient installation of such generation sources may raise some challenges. For instance, when DG units are incorporated into the network, the network's impedance is altered, which leads to an increase in the fault current level [22]. To mitigate the effects of the increasing penetration of DGs on distribution systems, a multi-objective framework has been proposed in [22] that optimally sets fault current limiters and directional overcurrent relays.
To maximize the benefits of DGs, it is essential that they are appropriately located and sized in distribution networks. Loss reduction, voltage profile improvement, and system reliability enhancement are among the key goals when determining the most suitable locations and sizes for DGs integrated into distribution networks. To solve this problem, many metaheuristic methods have been introduced in previous studies, treating it as a single-objective or multi-objective problem; for example, PSO [23], BFOA [24], SKHA [25], HHO, and IHHO [26] were employed to address it. As a more comprehensive solution, and to achieve the maximum advantages of connecting DGs and network reconfiguration, these two schemes have been proposed and examined as a simultaneous or sequential optimization problem; for example, HSA [27], FWA [28], ACSA [29], UVDA [30], and SFS [31] have been introduced to solve this problem. In terms of reliability enhancement, little research has been done on evaluating the impacts of DFR on reliability metrics. For example, in reference [32], a model is proposed to investigate how DFR affects the reliability indicators of distribution networks. A method using a genetic algorithm is also presented in reference [33] to increase the reliability of distribution systems. It is noted that the power flow equations, voltage limits, and losses have not been considered in [32,33]. To overcome this shortcoming, a multi-objective optimal DFR was developed to minimize losses and improve reliability [34]. With the aim of improving distribution network reliability by means of optimal DFR, a periodic analysis-based approach has been presented in [35].

Contribution

Distribution networks' operating parameters can be improved by optimal DFR, optimal switch positions, and optimal DG sizes. Considering these factors simultaneously may change the optimal results compared to those obtained by individual optimization methods. The primary objectives of these mechanisms are cost reduction, reliability enhancement, the improvement of voltage profiles, and the minimization of losses in distribution networks. Additionally, an analytical reliability evaluation of the distribution system is proposed to calculate the reliability metrics, considering both switching-only and repair-and-switching interruptions, for DG-enhanced reconfigurable distribution systems. Furthermore, important limitations of the problem must be considered and appropriately defined, such as the constraints relating the reliability requirements to the installation locations of switches, power flow limitations, radial structure constraints, the number of allowable switches, the allowable thermal capacity of branches, and sufficient voltage magnitude. This problem is one of the most challenging problems due to its nonconvex nature. The optimization problem is formulated as a mixed integer nonlinear programming problem. In addition, a proper multi-objective framework for simultaneously optimizing distribution systems using optimal switch/DG placement and optimal DFR strategies is introduced in this paper. For a more in-depth investigation of the model's effectiveness, several case studies have been conducted in comparison to scenarios wherein each of these strategies is applied independently or sequentially to optimize distribution networks.
Furthermore, developing a method for obtaining the best and most reliable solutions to this complicated real-world optimization problem is another goal of this research work. The method used should have good search capability in the multimodal space of the problem, which includes binary decision variables. Additionally, it should be able to provide an appropriate balance between the exploration and exploitation phases. The methods that have been used in the past to solve such complex problems are still considered weak from this perspective. Two major weaknesses of these algorithms are their rapid convergence and their tendency to get stuck in local optima. One of these methods is the PSO algorithm, which, in addition to the two weaknesses mentioned, requires the adjustment of several control parameters, making its application somewhat more challenging. Hence, to overcome these shortcomings, novel chaos-oriented inertia weight (COIW) and crossover operation mechanisms are developed to improve the conventional PSO's performance. The proposed method requires fewer control parameters than the PSO algorithm, and only requires setting the inertia coefficients. Figure 1 shows how the proposed optimization problem works.
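To make the chaotic inertia weight idea concrete, the sketch below embeds a logistic-map term in a linearly decreasing inertia weight inside a plain PSO loop on a toy continuous objective. The paper's exact COIW formula and crossover operation are not reproduced here; the update w = (w_max - w_min) * ((K - k)/K) * z + w_min, with z drawn from the logistic map, is one common chaotic inertia weight and is used only as a stand-in.

```python
# PSO with a chaotic (logistic-map) inertia weight on a toy sphere function.
import numpy as np

def sphere(x):                       # toy objective to minimize
    return np.sum(x * x, axis=-1)

rng = np.random.default_rng(42)
n_particles, dim, iters = 30, 10, 200
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), sphere(x)
gbest = pbest[np.argmin(pbest_f)].copy()

z = 0.7                              # chaotic seed in (0, 1), away from fixed points
for k in range(iters):
    z = 4.0 * z * (1.0 - z)          # logistic map iteration
    w = (w_max - w_min) * (iters - k) / iters * z + w_min   # chaotic inertia weight
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = sphere(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(f"best objective after {iters} iterations: {pbest_f.min():.3e}")
```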
The primary objectives of these mechanisms are improving the cost, reliability, losses, and voltage profiles of distribution networks. Additionally, important limitations of the problem, such as the constraints relating the consumer damage cost to the installation locations of switches, power flow limitations, radial structure constraints, the allowable number of switches, the maximum permissible line thermal capacity, and acceptable voltage magnitudes, are formulated and described here. This section describes the mathematical formulations for the objective functions and constraints.

Objective Functions

The proposed optimization problem includes switch-cost minimization, reliability improvement, loss reduction, and voltage-deviation minimization.

(1) Switch cost minimization. The first objective function represents the normalized total cost of the sectionalizing switches. It consists of three components: the investment cost, the installation cost, and the operation and maintenance cost:

Min f_1 = (1/C*) Σ_{l=1}^{NL} sw_l [ C_l^inv + I_l + Σ_{t=1}^{T} M_l / (1 + dr)^t ]

The key is shown below.
NL: Number of lines;
sw_l: Location of installed sectionalizing switch l, which equals 1 if installed, and 0 otherwise;
C_l^inv: The investment cost of switch l;
I_l: The installation cost of switch l;
T: The lifetime of the sectionalizing switches;
dr: Annual discount rate;
M_l: The cost of operating and maintaining sectionalizing switch l on an annual basis;
C*: The maximum cost value of the sectionalizing switches.

(2) Reliability maximization. The reliability objective function is characterized by two reliability metrics: SAIFI and SAIDI. The resulting weighted sum of the normalized indices is

Min f_2 = α_F (SF/SF*) + α_D (SD/SD*)

SF/SF*: System expected interruption frequency index/the base value of SAIFI;
SD/SD*: System expected interruption duration index/the base value of SAIDI;
α_F/α_D: The weighting factor for SAIFI/SAIDI.

(3) Minimizing voltage deviations. One of the critical goals of utilities is to improve the voltage profile of their distribution systems. Consequently, the voltage deviation is proposed as the third objective function:

Min f_3 = (1/VD*) Σ_{j=1}^{NB} |V_B − V_j|

V_B and V_j represent the nominal voltage and the voltage magnitude of load bus j, respectively. For the base case, the value of the voltage deviation is VD*. The number of load nodes in the distribution system is indicated by NB.

(4) Loss minimization. The total power loss in a system can be calculated by summing the power losses of all lines, where the loss of each line is the line resistance multiplied by the square of the current passing through it. By minimizing the power loss of each line, the total power loss of the system is minimized:

Min f_4 = (1/L*) Σ_{l=1}^{NL} R_l I_l²

where R_l and I_l are the resistance and the current magnitude of line l. L* represents the loss value for the base case, without considering DGs and DFR.

Constraints

The proposed optimization problem is subject to power flow limits, voltage limits, line thermal limits, switch operation limits, and radiality limits, as described below.

(1) Power flow constraints. The nodal active/reactive power balance must be satisfied:

P_Gi − P_Di = V_i Σ_{j=1}^{NB} V_j (G_ij cos θ_ij + B_ij sin θ_ij)
Q_Gi − Q_Di = V_i Σ_{j=1}^{NB} V_j (G_ij sin θ_ij − B_ij cos θ_ij)

The key is shown below.
P_Gi/Q_Gi: Active/reactive power injection at bus i;
P_Di/Q_Di: Active/reactive load demand at bus i;
G_ij/B_ij: Conductance/susceptance of line ij;
θ_i: Voltage phase angle at load bus i (θ_ij = θ_i − θ_j).

(2) Generation limits. This is the maximum capacity limit of the DG unit that can be installed at each generation node due to economic limitations, installation space restrictions, etc.:

0 ≤ P_Gi ≤ d_i P_Gi^max
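As a concrete illustration of how the four normalized objective values above might be evaluated inside a solver, consider the following minimal Python sketch. It assumes per-line and per-bus quantities are available as arrays; all argument names are illustrative and not taken from the paper.

```python
import numpy as np

def normalized_objectives(sw, inv_cost, inst_cost, om_cost, T, dr, C_star,
                          SF, SD, SF_star, SD_star, a_F, a_D,
                          V, V_nom, VD_star, R, I, L_star):
    """Compute the four normalized objective terms (illustrative sketch)."""
    # f1: present-value cost of the installed sectionalizing switches, over C*.
    pv_om = om_cost * sum((1.0 + dr) ** -t for t in range(1, T + 1))
    f1 = np.sum(sw * (inv_cost + inst_cost + pv_om)) / C_star
    # f2: weighted sum of the normalized SAIFI and SAIDI indices.
    f2 = a_F * SF / SF_star + a_D * SD / SD_star
    # f3: total absolute deviation from the nominal voltage, over the base value.
    f3 = np.sum(np.abs(V_nom - V)) / VD_star
    # f4: total I^2 * R line loss, over the base-case loss.
    f4 = np.sum(R * I**2) / L_star
    return f1, f2, f3, f4
```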
P_Gi^max is the maximum DG installation capacity at generation bus i. d_i indicates the installation location of the DGs: if a DG is installed at candidate node i, then d_i = 1; otherwise, d_i = 0.

(3) Voltage limits. The voltage magnitudes must be maintained within the minimum (V_min) and maximum (V_max) bounds:

V_min ≤ V_i ≤ V_max

(4) Thermal limit. Line currents should not exceed the lines' maximum thermal limit (I_max):

|I_l| ≤ I_l^max

(5) Switching operation limit. Due to operational concerns, the number of times the switches can change state must be limited. Therefore, the following constraint is considered in the optimization problem:

Σ_{l=1}^{NL} |ss_l − ss_{B,l}| + Σ_{l=1}^{NT} |ts_l − ts_{B,l}| ≤ N_max   (11)

where ss_{B,l}/ts_{B,l} and ss_l/ts_l are the statuses of switch and tie-switch l before and after reconfiguration, respectively. Switching actions are limited to a maximum number N_max. NT denotes the number of tie-switches in the network.

(6) Radiality limit. Constraints (14) to (16) must be satisfied to ensure the radial structure of the network:

Σ_{l=1}^{NL} ss_l = NB − N_sub   (14)

where N_sub denotes the total number of substation nodes in the distribution network. Note that (14) alone is not a sufficient condition for a radial structure. Hence, the following constraints should be considered to prevent infeasible configurations during the optimization process:

ss_j ≥ 1 − sw_j   (15)

Σ_{j∈Ω_i} ss_j = FL_i − 1, for every fundamental loop i   (16)

Constraint (15) states that if a switch is not installed on a line, i.e., sw_j = 0, then that line can only be in the closed state, i.e., ss_j = 1. On the other hand, if a switch is installed on a line, i.e., sw_j = 1, then by (15) the line may be open or closed, i.e., ss_j ≥ 0. Condition (16) says that, to create a radial structure in the network, exactly one switch must be opened in every fundamental loop. Ω_i denotes the set of lines in fundamental loop i. The number of possible loops equals the number of installed tie-switches. FL_i represents the total number of branches in fundamental loop i.

Decision Variables

The decision vector of the proposed formulation can be defined as follows:

x_1 = [sw_1, sw_2, . . . , sw_l, . . . , sw_NL];   (18)
x_2 = [ts_1, ts_2, . . . , ts_i, . . . , ts_NT];   (19)
x_3 = [s_1, s_2, . . . , s_i, . . . , s_NFL]; s_i = [ss_1, ss_2, . . . , ss_{N_i}];   (20)
x_4 = [d_1, d_2, . . . , d_i, . . . ];   (21)
x_5 = [P_G1, P_G2, . . . , P_Gi, . . . ];   (22)

As shown, the decision vector X = [x_1, x_2, x_3, x_4, x_5] includes five parts. The first part, as expressed in (18), specifies the installation positions of the sectionalizing switches. The second part, as defined in (19), indicates the open/closed state of the tie-switches: if ts_i = 0, tie-switch i is open; otherwise, this line is closed. The third vector, x_3, determines the status of the installed switches on the lines in every fundamental loop; NFL in (20) denotes the number of fundamental loops of the distribution system. According to (21), decision vector x_4 shows the installation location of the DG at each candidate node: if a DG is installed at candidate node i, then d_i = 1; otherwise, d_i = 0. Finally, the power generation vector is determined by x_5, as defined in (22).

Proposed Reliability Evaluation Model

To assess network analytical reliability, component outage analyses are conducted on networks with given structures and loading conditions [36]. Based on the interruption durations and rates of distribution networks, the SAIFI, EENS, SAIDI, and ASAI metrics are used to determine the reliability of distribution networks. This quantitative analysis assists in planning and maximizing distribution network reliability, as well as in monitoring distribution companies' quality of electrical service.
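Returning briefly to the radiality constraints, the switch-status logic of (15) and (16) is easy to encode as a feasibility check. The following is a minimal Python sketch under the formulation above; the function name and data layout are illustrative.

```python
import numpy as np

def switch_constraints_ok(ss, sw, fundamental_loops):
    """Check Constraints (15)-(16) for a candidate configuration (sketch).

    ss : 0/1 array over lines, line state (1 = closed).
    sw : 0/1 array over lines, switch installed or not.
    fundamental_loops : list of index arrays, one per fundamental loop.
    """
    ss, sw = np.asarray(ss), np.asarray(sw)
    # (15): a line without an installed switch must remain closed.
    if np.any(ss < 1 - sw):
        return False
    # (16): exactly one open line per fundamental loop keeps the network radial.
    return all(np.sum(1 - ss[np.asarray(loop)]) == 1 for loop in fundamental_loops)
```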
Consequently, effective methods are required for calculating the reliability indices that are utilized in (3).

Network Model and Hypothesis

Proper network modeling under different conditions is necessary to gain a deeper understanding of the proposed reliability evaluation process. The following reasonable hypotheses are adopted [36-38]:
• Network operations are carried out radially;
• The reliability evaluation assumes only one sustained outage of system branches at a time;
• Malfunction of the sectionalizing switches is not considered;
• The feeders are equipped with circuit breakers at the substation outputs;
• All branches have known repair and switching durations and failure rates.

Assessment of the Reliability of Reconfigurable Distribution Systems Enhanced with DGs

According to the proposed reliability assessment model, the faulty area is first separated from the healthy parts of the network using proper switching actions. The load nodes in the healthy part of the network can be re-energized through the main substation; therefore, these nodes experience only switching-only interruptions. In contrast, the nodes in the failure area experience repair-and-switching interruptions. The proposed model has the advantage of accommodating DG support to restore part of the customer service in the faulted zones. When a line fails and the faulted zones are cleared through a switching scheme, DGs can provide reliable backup for downstream load nodes. Consequently, these load nodes also experience switching-only interruptions. Under these hypotheses, the following actions are considered after a sustained fault on a system permitting islanded operation:
1. Upon failure of the faulted section, the first circuit breaker upstream trips, and the DGs trip as well;
2. The faulted zone is identified and isolated by opening the downstream and upstream switches. The circuit breaker is then closed to re-energize the healthy sections;
3. DGs are reconnected when their output surpasses the load required in the island zone;
4. When the fault is cleared, the open switches are synchronized with the DGs for the closing process.

After a sustained fault, a customer or load bus will experience either a switching-only or a switch-and-repair interruption. The first case corresponds to reconfiguring the network to isolate the system's faulty parts, while the latter relates to repairing the defective components and restoring the interrupted loads. Based on branch information, i.e., the length (L), failure rate (λ), switching time (TS), and repair time (TR), it is possible to calculate the nodal reliability metrics, in particular, the interruption rates and durations. Figure 2 shows the flow chart for evaluating these metrics using simulation-based algorithms.
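Before walking through the flowchart, the nodal bookkeeping it performs can be previewed with a minimal Python sketch. The data layout (a list of branch records plus precomputed affected and restorable node sets) is an assumption made here for illustration, not the paper's implementation.

```python
import numpy as np

def nodal_reliability(branches, affected_nodes, restorable, customers):
    """Accumulate per-node interruption rates/durations and aggregate them.

    branches : list of dicts with keys 'rate' (failures/yr), 'ts', 'tr' (hours).
    affected_nodes[b] : nodes interrupted by a fault on branch b.
    restorable[b] : subset re-energized via the substation or DGs
                    (these see switching-only interruptions).
    """
    n = len(customers)
    rate = np.zeros(n)      # total interruption rate per node
    duration = np.zeros(n)  # total expected interruption duration per node
    for b, br in enumerate(branches):
        for j in affected_nodes[b]:
            rate[j] += br['rate']
            # switching-only vs. repair-and-switching interruption duration
            duration[j] += br['rate'] * (br['ts'] if j in restorable[b] else br['tr'])
    C = np.asarray(customers, dtype=float)
    saifi = rate @ C / C.sum()      # customer-weighted interruption frequency
    saidi = duration @ C / C.sum()  # customer-weighted interruption duration
    return saifi, saidi
```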
There are two main loops in the flowchart: one over the branches of the network and one over the load nodes. The impact of an outage of each network component is quantified for all network nodes. First, the nodes affected by a failure of a particular line are identified. If no load nodes are affected, the outage effects of the next branch are reviewed. If the failure affects a load node, the ability to re-energize it is determined through the reconfiguration program. In such cases, switching-only interruptions are recorded if that node can be fed through the substation node or the DGs, and the expected rate and duration of the switching-only interruptions, y_j and z_j, are calculated for each load node j. Otherwise, interruptions caused by switching and repairs are recorded, and the expected rate and duration of the repair-and-switching interruptions, m_j and n_j, respectively, are calculated. Every load node's interruption rate and duration are accumulated over all branches. Hence, SAIFI and SAIDI can be defined using the customer-weighted averages of the interruption rates and durations, as follows:

SF = Σ_i (y_i + m_i) C_i / Σ_i C_i,   SD = Σ_i (z_i + n_i) C_i / Σ_i C_i

C_i indicates how many customers are connected to load node i.

PSO Algorithm

PSO is a well-known metaheuristic algorithm that simulates the social interactions of particles, or agents, to optimize both continuous and nonlinear functions [39]. In the PSO algorithm, possible solutions are represented by a population of particles. Starting from random positions and velocities, the swarm particles are moved iteratively throughout the search area. Particles tend to move towards the locations where they have succeeded the most and towards the best particle positions. Let Pbest_i = [x_{i1}^P, x_{i2}^P, . . . , x_{iD}^P] represent the best personal experience of particle i, and Gbest = [x_1^G, x_2^G, . . . , x_D^G] define the particles' best position so far. PSO updates each particle's velocity and position as follows:

v_i^{k+1} = ω_k v_i^k + c_1 r_1 (Pbest_i − x_i^k) + c_2 r_2 (Gbest − x_i^k)   (25)
x_i^{k+1} = x_i^k + v_i^{k+1}   (26)

v_i^k / x_i^k: Velocity/position of the ith particle at iteration k;
ω_k: Inertia weight factor at iteration k;
r_1, r_2: Random values from [0, 1];
c_1, c_2: Acceleration factors.

In the velocity expression (25), the inertia and acceleration coefficients must be specified first. Usually, the acceleration coefficients are obtained experimentally and are assumed to be constant. To maintain a balance between the exploration phase and the exploitation phase, it is important to select the inertia coefficient appropriately. Traditionally, this coefficient is decreased linearly over the course of the algorithm, as in the following linearly varying inertia weight (LVIW):

ω_k = ω_max − (ω_max − ω_min) k / M_It   (27)

Here, ω_max and ω_min represent the maximum and minimum bounds of the inertia coefficient, respectively, and M_It is the maximum iteration number. The new values of Pbest_i and Gbest are updated after calculating the new position based on (26).

Proposed Improved PSO (IPSO) Algorithm

The performance of the PSO algorithm depends on the inertia weight coefficient and the control coefficients of the second and third terms on the right side of (25); with appropriate settings, the PSO algorithm can provide satisfactory results. The first term in (25) allows a particle to fly through the search space under the influence of its previous velocity. The inertia weight is useful in balancing exploration and exploitation; hence, the proper control of the inertia weight allows a global solution to be found.
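For reference, one conventional PSO iteration with the LVIW schedule of (25)-(27) can be sketched in a few lines of Python. The coefficient values used as defaults below are common textbook choices, not values taken from the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, k, M_it, w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """One conventional PSO update with the linearly varying inertia weight."""
    w_k = w_max - (w_max - w_min) * k / M_it                     # LVIW, Eq. (27)
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w_k * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (25)
    return x + v, v                                              # Eq. (26)
```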
Chaos-Oriented Inertia Weight

Thus, the COIW is proposed here to improve the PSO algorithm's ability to escape from local solutions and to find the global solution. Accordingly, the velocity update in (25), the position update of the ith particle in (26), and the conventional LVIW in (27) are modified into (28), (29), and (30), respectively, where ω_k is the chaotic weight of (30) and ξ_k represents the chaotic factor defined in (31). In the LVIW, ω_k decreases linearly from its upper value to its lower value, while the suggested ω_k in the COIW decreases in an oscillatory manner, as shown in Figure 3. In this regard, it is important to note that the coefficient takes values in both positive and negative directions, providing the possibility of opposite movement for the particles. According to the numerical results, these chaotic changes significantly improve the algorithm's search capabilities.

Crossover

The following trial vector, defined in (32), is proposed for the PSO algorithm as a means of improving the diversity of the particles in the population. Here, j = 1, 2, . . . , D, and ψ is a random set of integers from [1, D] with length r × D/3, where r is a random value from [0, 1]. Note that this set is updated in each iteration. To update Pbest_i^{k+1} and Gbest^{k+1}, the vector with the best fitness is chosen by comparing the current particle with the trial vector generated in (32), based on the greedy criterion in (33). Then, Gbest^{k+1} is obtained from the best individual positions of the particles, Pbest_i^{k+1}, calculated in (33).

Implementation of IPSO to Solve the Problem

The problem here is to optimize several planning objectives at the same time. These goals are therefore combined into a multi-objective function with certain weighting coefficients. The problem's constraints are incorporated into the fitness function with an appropriate technique, such as penalty coefficients, and are implemented as a sub-function within the proposed solution algorithm.
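Before describing the full implementation, the two operators introduced in this section can be illustrated with a minimal Python sketch. The exact chaotic map behind ξ_k is not reproduced in this extraction, so the sketch assumes a logistic map rescaled to [-1, 1], which yields the oscillating, sign-changing weight with a shrinking envelope described for Figure 3; the crossover mixing rule (copying Gbest components on the index set ψ) is likewise an assumption made for illustration.

```python
import numpy as np

def coiw(k, M_it, xi, w_max=0.9, w_min=0.4):
    """Chaos-oriented inertia weight (sketch; the chaotic map is assumed).

    A logistic map drives the chaotic factor; rescaling it to [-1, 1] makes
    the weight oscillate between positive and negative values while its
    envelope decays like the LVIW.
    """
    xi = 4.0 * xi * (1.0 - xi)                     # logistic chaotic factor
    envelope = w_max - (w_max - w_min) * k / M_it  # LVIW-like decay
    return envelope * (2.0 * xi - 1.0), xi

def crossover_trial(x, gbest, rng=np.random.default_rng()):
    """Trial vector mixing the particle with Gbest on a random index set psi."""
    D = x.size
    m = max(1, int(rng.random() * D / 3))          # |psi| = r * D / 3
    psi = rng.choice(D, size=m, replace=False)
    u = x.copy()
    u[psi] = gbest[psi]                            # assumed mixing rule
    return u
```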
The proposed optimization problem involves a variety of decision variables, including the installation positions of the sectionalizing switches, the sizes and installation sites of the DG units, and the open/closed states of the switches and tie-switches (the network configuration). The proposed solution technique is developed based on the IPSO algorithm. In IPSO, random solutions (decision variables) in the feasible space of the problem are generated and improved during a predefined iterative process. Under the current solution, the network reliability indices are calculated for all possible preset scenarios and entered into the cost function. As a result, the final solution meets all constraints and achieves an optimal objective function.

Fitness Function Calculation

The following procedure describes the constraint treatment for evaluating the multi-objective fitness function (objective functions):
(1) Derive the input population of the switch sites, switch statuses, and DG locations and sizes;
(2) Run the load flow for the new configuration;
(3) Evaluate the power loss and voltage deviation objective functions;
(4) Evaluate the reliability indices considering the network configuration and the installation locations of the switches;
(5) Evaluate the problem's objective functions or multi-objective index.

Overall Procedure

The optimization problem is solved using the proposed IPSO algorithm in the following way (a condensed code sketch is given after this list):
• Step 1-Configure the IPSO's control parameters, including ω_max, ω_min, and M_It;
• Step 2-Determine the fundamental loops, and the upper and lower bounds of the variables;
• Step 3-Generate initial solutions using (34) to (38);
• Step 4-Check the constraints of the problem and apply corrective action if necessary;
• Step 5-Evaluate the fitness function of the initial solutions, and identify the individual best and global best solutions;
• Step 6-Evaluate the chaotic inertia weight using (30) and (31);
• Step 7-Update each particle's velocity and position using (28) and (29);
• Step 8-Check the constraints and apply corrections if necessary;
• Step 9-Generate the trial vector using (32);
• Step 10-Evaluate the fitness function of the trial vector, and update the individual best and global best using (33);
• Step 11-If the stopping criteria are not satisfied, proceed to Step 6.

Numerical Results

Using the proposed IPSO algorithm, the switch/DG placement and DFR problem is optimized for the IEEE 69-bus and the large-scale 136-bus radial test systems. The proposed IPSO is evaluated on several cases, and its performance is compared with those of other well-known metaheuristic algorithms. The IPSO algorithm was implemented in MATLAB R2021b and run on a PC with an Intel Core i3 2.5 GHz CPU and 4 GB of RAM. A total of ten independent runs were conducted to determine the best solution of each algorithm.

Test System Description

The IEEE 69-node radial distribution system has 73 branches, 69 nodes, and five tie-switches. The total load demand of this system is 3802 kW and 2696 kVAr. This system's data can be found in [40]. A single-line diagram of this system is shown in Figure 4. Based on graph theory [41], Table 1 presents the fundamental loops (FLs) identified for the test system. Initially, the five tie-switches are 69, 70, 71, 72, and 73. DGs are placed optimally, with a fixed number of three DGs and a size limit of 2 MW per DG.
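The Step 1-11 procedure above can be condensed into the following Python skeleton, which reuses the coiw and crossover_trial sketches given earlier. Constraint checking and repair (Steps 4 and 8) are reduced to simple bound clipping here, and the acceleration factors of 2.0 are common defaults rather than values from the paper; a real implementation would plug in the load-flow-based fitness sub-function described above.

```python
import numpy as np

def ipso(fitness, lb, ub, n_particles=30, M_it=500, seed=0):
    """Skeleton of the IPSO loop (Steps 1-11); illustrative only."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    D = lb.size
    x = lb + rng.random((n_particles, D)) * (ub - lb)   # Step 3: initial solutions
    v = np.zeros_like(x)
    pf = np.array([fitness(p) for p in x])              # Step 5: evaluate fitness
    pbest = x.copy()
    gbest = x[pf.argmin()].copy()
    xi = 0.7                                            # chaotic seed (assumed)
    for k in range(1, M_it + 1):
        w, xi = coiw(k, M_it, xi)                       # Step 6: chaotic weight
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)  # Step 7
        x = np.clip(x + v, lb, ub)                      # Step 8: simple repair
        for i in range(n_particles):
            u = crossover_trial(x[i], gbest, rng)       # Step 9: trial vector
            fu, fx = fitness(u), fitness(x[i])
            if fu < fx:                                 # Step 10: greedy update
                x[i], fx = u, fu
            if fx < pf[i]:
                pbest[i], pf[i] = x[i].copy(), fx
        gbest = pbest[pf.argmin()].copy()               # loop until Step 11 stops
    return gbest, pf.min()
```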
At the substation, the circuit breaker is automatically opened in the event of a permanent fault and is not reclosed until the faulty sections are isolated using proper switching techniques. For each branch, 0.1 failures per year are considered. It is further assumed that the switching and repair times are 30 and 360 min, respectively.

Results of Optimal DG Placement and DFR Problems

The potential effect of simultaneously optimizing DG placement and network reconfiguration on power loss reduction in distribution systems is evaluated in this section. To solve this problem, IPSO and other state-of-the-art optimization methods are employed and compared. All switches were considered for the DFR problem, assuming the substation voltage equals 1.0 p.u. The maximum population size is 30 and the maximum number of iterations is 500 for all approaches. As part of this study, six cases are used to validate the applicability of the proposed improvement of the conventional PSO algorithm [42]:
• Case 1-the base case, without considering DFR and DG;
• Case 2-only the optimal DFR problem;
• Case 3-only DG placement;
• Case 4-DG placement after the optimal DFR in Case 2;
• Case 5-optimal DFR after the optimal DG placement in Case 3;
• Case 6-optimal DFR and DG placement simultaneously.

Cases 2 to 6 are investigated for optimizing the IEEE 69-node system to minimize power loss using IPSO, PSO, and other methods, and the results are presented in Tables 2-6. IPSO is compared with the conventional PSO, ACSA [29], FWA [28], HSA [27], UVDA [30], SFS [31], HHO, and IHHO [26]. In the base case, there is a power loss of L* = 225.03 kW. As shown in these tables, the optimal reconfiguration and placement of DGs can significantly reduce losses and improve the voltage profile. Compared to the base case, IPSO reduces losses in Cases 2 to 6 by 56.13%, 69.07%, 84.27%, 82.49%, and 84.27%, respectively.
Cases 4 and 6 have the greatest impact on reducing losses and improving the voltage profile; using these two cases, the minimum voltage increases from 0.9092 to 0.9813 per unit. In Case 2, only the reconfiguration program is considered to reduce network losses. Table 2 summarizes the optimal results obtained from the different algorithms, including the losses and the locations of the open switches in the network. It can be seen that the losses obtained with the reconfiguration program are significantly reduced compared to the base case. Moreover, the IPSO and FWA algorithms achieve a network structure with open switches 14, 56, 61, 69, and 70, with a minimum loss of 98.56 kW. The other algorithms also achieve results close to this optimal value. Furthermore, as illustrated in the third column, reducing losses also improves the voltage profile. Regarding Case 3, IPSO identified three optimal nodes, 11, 18, and 61, for the installation of DGs, as shown in Table 3. The corresponding optimal DG sizes are 0.5268 MW, 0.3800 MW, and 1.7189 MW, respectively. Compared to the other methods employed, IPSO achieved the lowest power loss at 69.402 kW. In contrast, HSA shows the worst result among all the algorithms used in this case, with a loss of 86.77 kW. In Case 4, as shown in Table 4, IPSO determined that DG sizes of 1.434 MW, 0.5375 MW, and 0.4902 MW would be most suitable for installation on buses 61, 11, and 64, respectively. The power loss decreased to 35.15 kW following the integration of the DGs. In contrast to the other methods, IPSO provided the best results in this case. In Case 5, IPSO determined the open switches as 13, 57, 64, 69, and 70, resulting in an optimal power loss of 39.17 kW. This indicates its superiority over the other comparative algorithms (see Table 5). As a result of the simultaneous optimization of DFR and DG placement in Case 6, which is shown in Table 6, IPSO obtained the optimal network configuration with open switches 14-56-61-69-70. At the same time, it determined the optimal locations (sizes) of the DGs to be buses 11 (0.5376 MW), 61 (1.4340 MW), and 64 (1.4340 MW). The optimal power loss found by IPSO is superior to those of PSO and ACSA in this case, and close to that of SFS. It is thus evident that the IPSO method offers a practical solution to the complex problem of optimally placing DGs and performing DFR. The results obtained in Case 6 are close to those obtained in Case 4. This similarity can be attributed to the limited availability of locations for DG installation; in such a situation, the two optimization problems can be solved separately, as in Case 4. Hence, utilities, consumers, and DG owners benefit more from the procedures in Case 4 or Case 6. A comparison of IPSO with existing well-established optimization techniques shows that IPSO offers superior solution-search capability in solving the optimal DG placement and DFR problems.

Results of Optimal DFR Problem for the Large-Scale Test System

To confirm the performance of the IPSO algorithm on large-scale problems, the DFR of an extensive network of 136 buses is tested in this section. This network's data can be found in [43].
The optimal results for the system losses, minimum voltage, and open switches obtained by the proposed algorithm and some state-of-the-art methods for Case 2 are presented in Table 7. As can be seen, using the optimal DFR, the total loss is reduced and the voltage profile is improved. The proposed algorithm achieves lower losses than the other algorithms, which indicates its effectiveness in solving large-scale optimization problems.

Sensitivity Analysis on the Switching Actions

One of the limitations faced by network operators is the number of times switches must be operated to reconfigure the network, because more frequent operation of these switches reduces their lifespan and causes them to fail prematurely. Here, a sensitivity analysis is performed on the optimal loss results of the 136-bus test system for different switching-action limits. To this end, N_max in (11) is varied from 0 to the number of tie-switches, i.e., NT, and the results are shown in Table 8. As can be seen, the network loss decreases as the number of switching actions increases, because the number of radial structures that can be created in the network increases, and therefore the probability of finding a configuration with lower losses increases. The results also show that an optimized network structure with minimum loss is obtained with 18 switching actions. The fourth and fifth columns show the open and closed switches that determine the optimal configuration. These results indicate that one switch must be closed for every switch opened in order to maintain the network's radial structure; therefore, the radiality constraints and switching-action limits are satisfied.

Multi-Objective Optimal Switch/DG Placement and DFR Problem

This section examines the possible effects of DFR and DG placement on the multi-objective optimal switch placement problem. The following cases are investigated in this section:
• Case 7-only switch placement, with the cost and reliability objectives;
• Case 8-optimal DFR after the switch placement in Case 7, with the reliability, loss, and voltage deviation objectives;
• Case 9-optimal DG placement after the switch placement in Case 7, with the reliability, loss, and voltage deviation objectives;
• Case 10-optimal switch/DG placement and DFR simultaneously, with the cost, reliability, voltage deviation, and loss objective functions.

The problem in Case 7 is primarily concerned with the minimization of costs and the maximization of reliability. As a result, the following multi-objective index (MOI) is proposed as the fitness function in this case:

MOI = w_1 (C/C*) + w_2 (α_F SF/SF* + α_D SD/SD*)

where w_1 + w_2 = 1. For Case 7, the weighting coefficient of each objective function is assumed to be 0.5. The base values can be calculated assuming that all distribution network branches are equipped with sectionalizing switches and that switching actions are not used for re-energizing the faulty area. Accordingly, the base values of the cost and reliability metrics are C* = 1.3878 M$, SF* = 1.986, and SD* = 11.917. The optimal results obtained from the PSO and IPSO algorithms for the switch placement problem in Case 7 are presented in the second column of Table 9. It is evident that the number of switches and the cost of installing the sectionalizing switches are reduced by about 63% relative to the base values.
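For reference, the Case 7 fitness evaluation reduces to a few lines of Python. The base values below are taken from the text; the SAIFI/SAIDI weights a_F and a_D are assumed equal here for illustration.

```python
def case7_moi(cost, SF, SD, w1=0.5, w2=0.5, a_F=0.5, a_D=0.5,
              C_star=1.3878, SF_star=1.986, SD_star=11.917):
    """Weighted-sum multi-objective index for Case 7 (illustrative sketch)."""
    reliability = a_F * SF / SF_star + a_D * SD / SD_star
    return w1 * cost / C_star + w2 * reliability

# e.g., case7_moi(cost=0.51, SF=2.05, SD=3.1) scores a candidate switch plan
```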
Since the number of installed switches has decreased with respect to the base case, SAIFI has increased slightly; however, with the application of the switching-action mechanism, SAIDI is greatly reduced. A comparison of the two implemented algorithms indicates that IPSO obtains a lower multi-objective index than PSO. A noteworthy point is that the lower values of the multi-objective index in Case 7 indicate the greater advantage of optimal switch placement with the objectives of cost minimization and system reliability maximization. The reliability indices obtained by IPSO are somewhat lower than those obtained by PSO, which, on the other hand, incurs a higher cost. The cause is clear: PSO suggests the use of 52 switches, while IPSO suggests 51. In Case 8, after the optimal switch placement problem in Case 7 is addressed, the DFR problem is solved to reduce losses, minimize voltage deviation, and improve network reliability. In this case, the best network structure is obtained by minimizing a multi-objective index combining these objectives, with identical weighting coefficients assumed for the objective functions. The results of the PSO and IPSO algorithms for the optimal DFR, with the sectionalizing switch placement obtained in the previous case (using IPSO), can be found in the third column of Table 9. Looking at the optimal results derived using IPSO, optimally changing the network structure improves the network's parameters compared to Case 7. In Case 8, the optimal DFR problem focuses more on reducing losses and voltage deviations. For example, the network losses are reduced to 121.126 kW, the minimum voltage increases to 0.935, and SAIFI is improved. Additionally, with the optimum value of MOI = 0.3876, IPSO finds the optimal open switches 14, 57, 69, 71, and 73, yielding a superior solution to that of the PSO algorithm. SAIDI is slightly increased compared to Case 7; this is because, due to the change in the network topology, a smaller number of subscribers can be re-energized through the switching-action scheme. The optimization problem in Case 9 deals with the optimal DG placement for the 69-node test system, given the optimal sectionalizing switch placement results of Case 7. The main objectives and the fitness function of this problem are similar to those of the previous case, i.e., Case 8. As with the results obtained for Case 3, optimal DG integration into the system (Case 9) yields better operational performance than the optimal DFR (Case 8). In particular, power losses are reduced to 72.66 kW, and the voltage deviation falls to 0.0088. Compared to Case 8, Case 9 shows a better SAIDI, while Case 8 has a greater effect on the reduction of SAIFI. It is worth noting that there are no differences between the results of Cases 7 and 9 in terms of SAIFI and SAIDI, as the structure of the system remains unchanged. In Case 9, IPSO determines that DG sizes of 0.41 MW, 1.70 MW, and 1.82 MW are optimal for installation on buses 22, 61, and 63, respectively. In addition, compared to the results of the optimal DFR, optimal DG placement exhibits more benefits, as can be seen from the resulting multi-objective index, i.e., MOI = 0.3335. Moreover, IPSO shows better solution quality than PSO because of its global search ability. The results of the previous cases indicate that optimal DG placement and DFR may alter the optimal results of the multi-objective switch placement problem.
Therefore, the simultaneous optimization of these problems can bring more benefits to distribution network utilities. This problem requires finding the optimal placement of the switches/DGs and the optimal network structure so as to minimize costs and losses while improving the system's reliability and voltage profile. For this purpose, the following multi-objective index is assumed as the weighted sum of these objective functions:

MOI = w_1 f_1 + w_2 f_2 + w_3 f_3 + w_4 f_4

where w_1 + w_2 + w_3 + w_4 = 1, and f_1 to f_4 are the normalized cost, reliability, voltage deviation, and loss objectives. The weighting coefficients of each function can be determined empirically or based on importance coefficients set by the decision-makers, planners, and operators of distribution networks. In this article, identical coefficients are assumed. The best compromise solutions obtained using the IPSO and PSO algorithms for the problem in Case 10 are reported in the fourth column of Table 9. In solving the problem, a relative equilibrium among the different objective functions is demonstrated. To solve this difficult and complex problem, IPSO, using the COIW strategy, works effectively and gives better results than PSO. The MOI obtained using IPSO is 0.2665, whereas a value of 0.3046 is achieved using PSO, indicating the search power of IPSO in solving such a complicated problem. Compared to the sequential optimization strategies in Cases 7 to 9, the proposed simultaneous optimization model in Case 10 reduces the number of installed switches, and thus their cost, by about 15.4%. This is because the proposed model optimizes the DG/switch placement and the network topology simultaneously, which leads to more efficient utilization of the installed switches and thus eliminates the need to install additional ones. In addition, the line losses decreased to 47.86 kW. As a result, SAIFI and SAIDI have been reduced to 1.608 failures/customer/year and 2.844 h/customer/year, respectively. This has led to an increase in system reliability and cost-effectiveness. Moreover, the optimization process can be further improved by implementing advanced optimization techniques. Therefore, decision-makers of distribution networks using the model in Case 10 and the proposed algorithm will be able to effectively balance the objective functions of the problem based on their priorities. In other words, the proposed algorithm gives decision-makers the flexibility to weigh their desired objectives and craft an optimal distribution network solution.

Sensitivity Study on the Weighting Coefficients

In this section, the potential trade-offs between the four objective functions are evaluated by varying the weighting factors, as shown in Table 10. To this end, 11 combinations of weighting factors are assumed, as seen in the first column of this table. The second row of the table shows that when the goal is only to minimize total costs, no switches are installed in the network, and the other functions remain in their base state, i.e., under their most adverse conditions. Therefore, the cost function conflicts with the other objective functions. The third row of this table illustrates that when the goal is to improve reliability, the optimization model tries to install the highest number of switches and DGs in the network; as a result, the cost function takes its highest value. This indicates that it is impossible to achieve maximum reliability at minimum cost; the optimization model should incorporate trade-offs to balance the multiple goals in this case. Reliability is thus also prone to conflict with the other objectives.
As seen in the third row of Table 10, the losses and voltage deviations are close to 1. This is due to the system reconfiguration and the excessive power injection by DGs used to achieve maximum reliability. It should be noted that when the only objective is to improve the voltage deviation function, the other objectives are also enhanced to a certain extent. However, the most significant conflict, between this function and cost, arises from the need to install switches in the network in order to improve the voltage profile. Similarly, the optimization model attempts to reduce network losses by installing switches and utilizing DGs. Loss improvement and voltage deviation exhibit similar behavior: they have the most significant conflict with the cost objective function, and a minor conflict with the reliability objective function. Consequently, when these two functions are given weighting factors, they can reach a reasonable compromise relative to the other functions; the last row of this table confirms this. This analysis shows how changes in the weighting coefficients can affect the planning results. Therefore, it is necessary to obtain Pareto solutions using appropriate techniques to support better decisions, and to extract the best compromise solution using techniques such as the fuzzy technique.

The Robustness and Effectiveness Evaluation of IPSO

The proposed optimization problem is a mixed-integer non-linear programming (MINLP) problem owing to the non-linear and non-convex nature of the load flow constraints and the binary decision variables. Therefore, the decision space contains many local optima, and the problem is known to be mathematically challenging. The decision variables of the problem, i.e., X, are randomly generated in this space and improved based on an evolutionary approach to optimize the objective functions. Therefore, an algorithm used to solve such a problem should be able to avoid getting stuck in local solutions and to reach global or near-optimal solutions. Meanwhile, evolutionary algorithms are stochastic, so different solutions may be obtained in each run; hence, robustness is another critical characteristic for assessing an algorithm's efficiency. Accordingly, to test IPSO's efficiency, the statistical results of solving the multi-objective optimization problem in Case 10 using IPSO and several algorithms, namely PSO, DE [44], GA [45], CO [46], and GWO [47], are compared in Table 11. The optimal solutions obtained from the best runs of these algorithms are summarized in Table 12. The results indicate that IPSO outperforms the other comparative algorithms on a variety of measures, including the minimum (Min), mean, maximum (Max), and standard deviation (SD). The lower Min and SD values of the IPSO results compared to the other competitive algorithms show its effectiveness and robustness in solving real-world optimization problems. The best MOI value of 0.2665 is obtained by IPSO, and the next best is achieved by CO. Figure 5 compares the convergence curves for Case 10 under the best-run conditions. When solving the multi-objective problem, IPSO, PSO, GWO, and GA show fast convergence in the first 30 iterations. However, from iteration 40 onwards, GA and GWO fall into local optima and converge. After 70 iterations, PSO also converges.
CO, despite its lower convergence speed, demonstrates the ability to escape the local-optimum trap at iteration 120 and finds a superior solution compared to PSO, GA, and GWO. In contrast to these algorithms, the proposed IPSO algorithm establishes a good balance between convergence speed and reaching the best solution. Hence, the proposed improvements to the PSO algorithm, using the COIW and the new updating operators, significantly help the IPSO algorithm ensure a balance between exploration and exploitation. Besides this, the crossover operator in IPSO provides good diversity throughout the iterations.

Conclusions

An optimization model with cost, loss, reliability, and voltage deviation objectives is presented in this paper as part of multi-objective DFR and switch/DG placement optimization. A new version of the traditional PSO algorithm, called IPSO, has been developed to solve this complicated real-world optimization problem. Ten cases of IEEE 69-node distribution system optimization, considering the optimal DFR, DG placement, and switch placement problems, were examined independently and simultaneously with different objectives. Compared to the more well-known optimization algorithms, the proposed IPSO method offers a practical solution to the complex problems of optimally placing DGs and performing DFR. Furthermore, the proposed multi-objective optimization problem has been examined using Cases 7 to 10.
The results of Cases 2 to 6 demonstrate that utilities, consumers, and DG owners benefit more from the simultaneous optimization of the DG placement and DFR problems for minimizing losses in distribution systems. For example, compared to the base case, IPSO reduces losses in Cases 2 to 6 by 56.13%, 69.07%, 84.27%, 82.49%, and 84.27%, respectively. Moreover, in Cases 4 and 6, the minimum voltage increases from 0.9092 to 0.9813 per unit. Cases 7 to 10 demonstrate that the simultaneous optimization of the three problems is more economical, reliable, and efficient than sequential optimization. For example, the proposed concurrent optimization model in Case 10 outperforms the sequential optimization strategies in Cases 7 to 9: the cost of installing switches decreases by approximately 15.4%, and the loss decreases by 78.72%. Additionally, SAIFI and SAIDI have been reduced to 1.608 failures/customer/year and 2.844 h/customer/year, respectively. Moreover, to evaluate the effectiveness and robustness of the proposed optimization algorithm, the statistical results of the proposed multi-objective optimization problem have been compared with those of other algorithms. Based on the results, IPSO proves superior to the other evolutionary algorithms in solving real-world optimization problems. Future studies will examine the effects of the uncertainty of renewable energy sources, such as wind and solar, as well as storage systems, on the results of the multi-objective optimization problem. The proposed multi-objective method is based on a weighted sum, which has limitations in finding Pareto fronts and does not express the trade-offs between the objectives well. Therefore, techniques based on more effective Pareto-surface generation can provide more valuable tools for decision-makers, and fuzzy decision-making can then be utilized to determine the best compromise solution.
Detecting Interference in A/B Testing with Increasing Allocation

In the past decade, the technology industry has adopted online randomized controlled experiments (a.k.a. A/B testing) to guide product development and make business decisions. In practice, A/B tests are often implemented with increasing treatment allocation: the new treatment is gradually released to an increasing number of units through a sequence of randomized experiments. In scenarios such as experimenting in a social network setting or in a bipartite online marketplace, interference among units may exist, which can harm the validity of simple inference procedures. In this work, we introduce a widely applicable procedure to test for interference in A/B testing with increasing allocation. Our procedure can be implemented on top of an existing A/B testing platform with a separate flow and does not require a priori a specific interference mechanism. In particular, we introduce two permutation tests that are valid under different assumptions. Firstly, we introduce a general statistical test for interference requiring no additional assumption. Secondly, we introduce a testing procedure that is valid under a time fixed effect assumption. The testing procedure is of very low computational complexity, it is powerful, and it formalizes a heuristic algorithm already implemented in industry. We demonstrate the performance of the proposed testing procedure through simulations on synthetic data. Finally, we discuss one application at LinkedIn, where a screening step based on the proposed methods is implemented to detect potential interference in all of their marketplace experiments.

Introduction

The technology industry has adopted online randomized controlled experiments, also known as A/B testing, to guide product development and make business decisions [Kohavi et al., 2013, 2020]. In the past decade, firms have developed a dynamic phase release framework in which a new treatment (such as a new product feature) is gradually released to an increasing number of units in the target population through a sequence of randomized experiments [Kohavi et al., 2020]. Companies including Google, Microsoft, LinkedIn, and Meta have all developed in-house platforms that implement this framework at scale [Tang et al., 2010, Kohavi et al., 2013, Bakshy et al., 2014, Xu et al., 2015]. Contrary to the sophisticated engineering design of such platforms, the strategy for analyzing A/B tests is relatively simple: often, only the most powerful experiment in the sequence is used to provide a summary of the treatment effect, using tools from classical causal inference assuming independence among test units [Imbens and Rubin, 2015]. In scenarios such as experimenting in a social network setting or in a bipartite online marketplace, interference among units may exist. Thus, a natural question is whether such interference harms the validity of simple inference procedures. Specific designs have been proposed to test or correct for interference effects in different applications [Saveski et al., 2017, Eckles et al., 2017, Ugander et al., 2013, Pouget-Abadie et al., 2019a, Johari et al., 2022]. However, these designs are limited to specific applications and often require significant engineering work to implement in parallel to the existing A/B testing infrastructure in most companies. Even when such designs are implemented, their complex nature often results in lower throughput and can slow down the decision process.
In this work, we introduce a widely applicable procedure to test for interference in generic online experiments. The proposed method utilizes data from multiple experiments in the sequence. It can be implemented on top of an existing A/B testing platform with a separate flow and does not require a priori knowledge of the underlying interference mechanism. Once implemented, this test can be run as a standard screening for any A/B test running on the platform. If the test suggests that no interference exists, the experimenter can proceed with classical causal inference analysis with confidence; if the test suggests that some form of interference does exist, the experimenter may need to redesign the experiments in a more delicate way. At the platform level, such screening could provide valuable and timely feedback on the choice of designs and help experimenters update development roadmaps accordingly.

A motivating example and our contribution

The most straightforward statistical analysis following an A/B test is to compute the difference-in-means estimator, i.e., the difference between the average outcome of the treatment group and that of the control group. Under the classical Stable Unit Treatment Value Assumption (SUTVA), which requires that the potential outcomes for any unit do not vary with the treatments assigned to other units, one can easily show that the difference-in-means estimator will be close to the causal effect as long as the sample size is large [Imbens and Rubin, 2015]. This implies that when we compute the difference-in-means estimator for any single randomized experiment in an A/B test with increasing allocation, the value of the estimator should not change by much. However, in some real-world scenarios, we observe drastic changes in the difference-in-means estimators throughout the experiments. In Figure 1, we show an example from an A/B test implemented by LinkedIn. In this example, we see that the difference-in-means estimator decreases as the treatment is released to more units. We naturally wonder: What causes this phenomenon? Could it be purely due to randomness? Is the SUTVA assumption violated in this case?

One plausible explanation for this phenomenon is the existence of interference, i.e., when the treatment assigned to one unit may affect the observed outcomes of other units. One form of interference is marketplace competition. Imagine a new treatment that helps units perform better in the market. For any particular unit, the treatment brings benefit, but when more of the other units are treated, the other units become more competitive and thus negatively impact the performance of that particular unit. Therefore, in these cases, we often observe that the difference-in-means estimator decreases with the treatment probability. Indeed, the experiments in Figure 1 were run in a setting with marketplace competition. One other common form of interference is through social networks. People's behaviors tend to be positively correlated with those of others connected to them in the network. Think about a treatment that encourages users to comment on a social media platform: users tend to comment more when they see comments from friends. In these cases, we usually observe the difference-in-means estimator increasing with the treatment probability.

Figure 1: On the x-axis, we show the percentage of units that are in the treatment group; on the y-axis, we show the value of the difference-in-means estimator. Note that A and B stand for different outcome metrics.
In practice, however, the structure of interference can be more complicated than the two forms discussed above. Often, experimenters manually examine the difference-in-means plot and decide whether to send the job to other experimentation platforms that deal with interference more carefully. We need a way to formally test whether interference exists. In this work, we introduce statistical testing procedures that test for interference in A/B testing with increasing allocation. The methods we propose are scalable and parallelizable. They are also agnostic to the interference mechanism: even if we have no knowledge of the interference structure, the testing procedure is still valid. Knowledge of the interference structure can, however, be helpful in increasing the power of the testing procedure. We introduce two different testing strategies under different assumptions in Sections 3.1 and 3.2. In Section 3.1, we introduce a general statistical test for interference, a test that requires no additional assumptions. The proposed method is inspired by the testing procedure proposed by Athey et al. [2018], but it is more powerful than that of Athey et al. [2018] because it makes use of multiple experiments. In Section 3.2, we introduce a testing procedure that is valid under a time fixed effect assumption. The testing procedure is of very low computational complexity, and it is more powerful than the test proposed in Section 3.1. In particular, one special case of this method formalizes the heuristic algorithm discussed above, which declares that interference exists when the difference-in-means estimators are very different.

Related work

The classical literature on causal inference often assumes that there is no cross-unit interference. When interference is present, many classical inference methods break down. Interest in causal inference with interference started in the social and medical sciences [Sobel, 2006, Hudgens and Halloran, 2008]. Since then, one line of work has focused on the estimation and inference of treatment effects under network interference [Tchetgen and VanderWeele, 2012, Toulis and Kao, 2013, Aronow and Samii, 2017, Sussman and Airoldi, 2017, Basse and Feller, 2018, Bhattacharya et al., 2020, Leung, 2020, Sävje, 2021, Hu et al., 2022]. In order to facilitate estimation, these works assume either that there are special randomization designs or that the interference has some restricted form defined by a given network. Applications to A/B testing are also considered in Ugander et al. [2013], Eckles et al. [2017], and Basse and Airoldi [2018]. One assumption implicitly made in these works is that the experiment is conducted only once. In the multiple experiments regime, Viviano [2020] studies the design of two-wave experiments under interference. Cortez et al. [2022] consider estimating the total treatment effects under interference with data from more than two time steps, and Han et al. [2021] further investigate the problem in panel experiments. Our work differs from the above works for at least two reasons: (1) instead of focusing on estimation, we focus on testing whether interference exists, and (2) we do not need to make additional assumptions in order for the testing procedure to be valid. In the literature on testing for interference, Bowers et al. [2013] consider model-based approaches, Pouget-Abadie et al. [2019b] introduce an experimental design strategy, and Aronow [2012] and Athey et al.
[2018] propose conditional randomization tests restricted to a subset of what they call focal units, and a subset of assignments that make the null hypothesis sharp for focal units. Basse et al. [2019] and Puelz et al. [2022] further extend this method by using a conditioning mechanism to allow the selection of focal units to depend on the observed treatment assignment. However, none of these works addresses the problem of multiple experiments, and their methods tend to have lower power when directly applied in our setup. To the best of our knowledge, our work is the first to consider testing interference with a sequence of randomized experiments. Our work is also related to research on interference in online marketplace experiments (see Basse et al. [2016], Fradkin [2019], Holtz et al. [2020], Bajari et al. [2021], Wager and Xu [2021], Johari et al. [2022], among others). This line of work usually requires careful modeling of the market and the interference mechanism. The testing procedure introduced in this paper, in contrast, can be applied to arbitrary forms of interference.

Problem Setup

We work in a setting where we run a sequence of A/B tests with increasing allocations. Formally, suppose that there are $K$ experiments on a population of $n$ units. Let $\pi_k$ be the marginal treatment probability of the $k$th experiment. The treatment probabilities satisfy $\pi_1 < \pi_2 < \cdots < \pi_K$. For each experiment $k \in \{1, \dots, K\}$ and each unit $i \in \{1, \dots, n\}$, let
$$W_{i,k} := \text{treatment of unit } i \text{ assigned in the } k\text{th experiment}, \qquad Y_{i,k} := \text{outcome of unit } i \text{ in the } k\text{th experiment}.$$
Here we assume that $W_{i,k} \in \{0, 1\}$ is a binary treatment variable, where a value of 1 corresponds to the treatment group and a value of 0 corresponds to the control group. The experiments are implemented in the following way. In the first experiment, each unit $i$ is randomly assigned a treatment $W_{i,1}$, where
$$W_{i,1} \sim \mathrm{Bernoulli}(\pi_1) \ \text{independently.} \tag{1}$$
In the subsequent experiments, more units are assigned to the treatment group. Specifically, conditioning on the previous treatments, each $W_{i,k}$ is sampled from the following distribution independently:
$$W_{i,k} \mid W_{i,k-1} = \begin{cases} 1, & \text{if } W_{i,k-1} = 1, \\ \mathrm{Bernoulli}\big((\pi_k - \pi_{k-1})/(1 - \pi_{k-1})\big), & \text{if } W_{i,k-1} = 0. \end{cases} \tag{2}$$
This formulation guarantees that if we look at the $k$th experiment alone, then the treatments $W_{i,k}$'s are i.i.d. $\mathrm{Bernoulli}(\pi_k)$. Let $W_{1:n,1:K}$ be the $n \times K$ treatment matrix and $Y_{1:n,1:K}$ be the $n \times K$ outcome matrix of all units and all experiments. Let $X_i \in \mathbb{R}^d$ be the observed covariates of unit $i$ that do not change over the course of the experiments. Correspondingly, let $X_{1:n} \in \mathbb{R}^{n \times d}$ be the matrix of covariates of all units. Following the Neyman-Rubin causal model, we assume that potential outcomes $Y_{i,k}(w_{1:n,1:K}) \in \mathbb{R}$ exist for all $w_{1:n,1:K} \in \{0,1\}^{n \times K}$ and that the observed outcomes satisfy $Y_{i,k} = Y_{i,k}(W_{1:n,1:K})$. (In the literature, a no-anticipation-effects assumption is often made in such potential outcome models: the outcome $Y_{i,k}$ depends only on the treatments assigned during and prior to the $k$th experiment, so the potential outcomes can be written as $Y_{i,k}(w_{1:n,1:k})$, which satisfies $Y_{i,k} = Y_{i,k}(W_{1:n,1:k})$. Here, for simplicity, we keep the original notation.) The goal is to test the following hypothesis:

Hypothesis 1 (No cross-unit interference). $Y_{i,k}(w_{1:n,1:K}) = Y_{i,k}(\tilde{w}_{1:n,1:K})$ if $w_{i,1:K} = \tilde{w}_{i,1:K}$.

The hypothesis states that the outcomes of unit $i$ depend only on the treatments of unit $i$ and not on the treatments of others. We call this hypothesis the no cross-unit interference hypothesis.
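The assignment scheme (1)-(2) is easy to simulate; here is a minimal numpy sketch (the function name and sample sizes are illustrative). One can verify that each column of the returned matrix is marginally i.i.d. Bernoulli(pi_k) while the rows are monotone over time.

```python
import numpy as np

def staircase_assignment(n, pis, rng):
    """Monotone increasing allocation: W[:, k] ~ Bernoulli(pis[k]) marginally."""
    K = len(pis)
    W = np.zeros((n, K), dtype=int)
    W[:, 0] = rng.binomial(1, pis[0], size=n)
    for k in range(1, K):
        # Units already treated stay treated; control units are promoted
        # with probability (pi_k - pi_{k-1}) / (1 - pi_{k-1}), per (2).
        p_new = (pis[k] - pis[k - 1]) / (1 - pis[k - 1])
        W[:, k] = np.where(W[:, k - 1] == 1, 1, rng.binomial(1, p_new, size=n))
    return W

rng = np.random.default_rng(0)
W = staircase_assignment(1000, [0.10, 0.25, 0.50], rng)
```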
Testing for interference

In this section, we introduce methods that test for the existence of cross-unit interference. For brevity's sake, we focus on testing with two experiments. We then discuss further extensions to multiple experiments in Section 3.5. Naturally, the first question that occurs is how interference might arise. To formalize this, we introduce a notion of candidate exposure that captures the potential form of interference. Using domain knowledge, experimenters can specify the candidate exposure, which can vary from application to application. When we consider user-level data, we have a natural social network. Here experimenters may suspect that a user's outcome is influenced by treatments of "friends", i.e., users connected through the social network. Thus in this example, some plausible choices of candidate exposures include the fraction of friends who are treated and the number of friends who are treated. When we consider marketplace competition, advertisers are the subjects of treatment. Here, the sales of an advertiser may be impacted by the treatments of competitors, i.e., advertisers that sell similar products. Hence, in this application, experimenters can choose candidate exposures to be the number of treated advertisers that sell products of the same category, or an average of treatments given to other advertisers weighted by some product similarity metric.

Formally, for each experiment $k$ and each unit $i$, we use $H_{i,k} = h_i(W_{-i,k}) \in \mathbb{R}^m$ to denote the candidate exposure. Here $W_{-i,k}$ is the vector of treatments given to all units except $i$ in the $k$th experiment. We use the form $h_i(W_{-i,k})$ to emphasize that the candidate exposure depends on other units' treatments. We also write $H_{1:n,k} = (H_{1,k}, H_{2,k}, \dots, H_{n,k}) \in \mathbb{R}^{n \times m}$ to reference the candidate exposures of all units. We want to emphasize that for all the tests introduced below, we do not require the candidate exposure to be correctly specified in order for the tests to be valid. However, the form of the candidate exposure matters for the power of the tests. We will then move on to test the hypothesis that no interference exists, making use of the candidate exposure $H_{i,k}$. In the following sections, we discuss different strategies to test for interference under different assumptions.

Testing under general assumptions

We start with a setting where we have access to a dataset from only one experiment. Suppose that we collect data on units indexed by $i = 1, \dots, n$, where each unit is randomly assigned a binary treatment $W_i \in \{0, 1\}$ with $W_i \sim \mathrm{Bernoulli}(\pi)$ independently, for some $0 \le \pi \le 1$. For each unit, we observe an outcome of interest $Y_i \in \mathbb{R}$ and some covariates $X_i \in \mathbb{R}^p$. Athey et al. [2018] proposed a method to test for Hypothesis 1 in this setting. We sketch the procedure in Algorithm 1.

Algorithm 1 Testing for interference (one experiment).
1. Randomly split the data into two folds. Let $I_{\text{foc}}$ and $I_{\text{aux}}$ be the index sets for the first fold (focal units) and the second fold (auxiliary units). Write the first fold of data as $D_{\text{foc}} = (W_{\text{foc}}, X_{\text{foc}}, Y_{\text{foc}}, H_{\text{foc}})$ and the second as $D_{\text{aux}} = (W_{\text{aux}}, X_{\text{aux}}, Y_{\text{aux}}, H_{\text{aux}})$.
2. Compute a test statistic $T^{(0)} = T(W_{\text{foc}}, X_{\text{foc}}, Y_{\text{foc}}, H_{\text{foc}})$.
3. For $b = 1, \dots, B$:
   Regenerate treatments for the auxiliary units: $W^{(b)}_{\text{aux}}$ with i.i.d. $\mathrm{Bernoulli}(\pi)$ entries.
   Recompute the candidate exposure for focal units: $H^{(b)}_i = h_i(W^{(b)}_{-i})$ for $i \in I_{\text{foc}}$.
   Recompute the test statistic: $T^{(b)} = T(W_{\text{foc}}, X_{\text{foc}}, Y_{\text{foc}}, H^{(b)}_{\text{foc}})$.
   End For
Output: the p-value $\hat{p} = \frac{1}{B+1}\big(1 + \sum_{b=1}^{B} \mathbb{1}\{T^{(b)} \ge T^{(0)}\}\big)$.
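A minimal sketch of Algorithm 1 follows, using the regression-coefficient statistic discussed next. Here `h` is a user-supplied function mapping a full treatment vector to the n candidate exposures; the names and statistic choice are illustrative, not the authors' implementation.

```python
import numpy as np

def alg1_pvalue(W, X, Y, h, pi, B=200, rng=None):
    """Conditional randomization test for interference with one experiment."""
    rng = rng or np.random.default_rng()
    n = len(W)
    foc = rng.permutation(n)[: n // 2]            # focal units
    aux = np.setdiff1d(np.arange(n), foc)         # auxiliary units

    def stat(Wv):
        H = h(Wv)                                  # exposures under treatments Wv
        Z = np.column_stack([np.ones(n), W, X, H])[foc]
        beta, *_ = np.linalg.lstsq(Z, Y[foc], rcond=None)
        return abs(beta[-1])                       # |coefficient of H_foc|

    T0 = stat(W)
    Tb = np.empty(B)
    for b in range(B):
        Wb = W.copy()
        Wb[aux] = rng.binomial(1, pi, size=len(aux))   # regenerate W_aux only
        Tb[b] = stat(Wb)
    return (1 + np.sum(Tb >= T0)) / (B + 1)
```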
Algorithm 1 requires as input a test statistic $T$ that captures the importance of the candidate exposure $H$ in predicting the outcome $Y$. As an illustration, assume for now that $H_i \in \mathbb{R}$. One plausible choice of the test statistic $T$ (when $H_i \in \mathbb{R}$) is the following: we run a linear regression of $Y_{\text{foc}} \sim W_{\text{foc}} + X_{\text{foc}} + H_{\text{foc}}$, extract the coefficient of $H_{\text{foc}}$, and take the test statistic $T$ to be the absolute value of the coefficient. We use this regression coefficient statistic as an example to explain the intuition of the algorithm. Under the null hypothesis, the candidate exposure $H$ has no power to predict the outcome $Y$ before or after regenerating treatments, and thus the distribution of the test statistic $T$ will not change after regenerating treatments. Hence, the p-value will be stochastically larger than $\mathrm{Unif}[0,1]$. Under the alternative hypothesis, the behavior of the p-value can be very different. Consider a simple example where $H_i$ is the treatment assigned to the closest friend of unit $i$ and $Y_i = \alpha^\top X_i + \beta W_i + \theta H_i + \epsilon_i$ for some i.i.d. zero-mean errors $\epsilon_i$. In this example, the original test statistic $T(W_{\text{foc}}, X_{\text{foc}}, Y_{\text{foc}}, H_{\text{foc}}) \approx |\theta|$ when the sample size is large. However, after regenerating treatments, for each focal unit $i$ whose closest friend is among the auxiliary units, $H^{(b)}_i$ is marginally a $\mathrm{Bernoulli}(\pi)$ random variable, independent of $Y_i$; hence the distribution of $T(W_{\text{foc}}, X_{\text{foc}}, Y_{\text{foc}}, H^{(b)}_{\text{foc}})$ will not concentrate around $|\theta|$. In this case, the p-value is far from the $\mathrm{Unif}[0,1]$ distribution.

In practice, experimenters can use any test statistic $T$ that is suitable for the specific application. For example, if the covariate $X$ is of high dimension, a lasso-type algorithm can be used. One can also run more complicated machine learning algorithms, e.g., random forest and gradient boosting, with $Y$ as a response and $X, W, H$ as predictors, and set the statistic $T$ to be any feature importance statistic of $H$. Just like the choice of candidate exposure $h$, the choice of test statistic $T$ will not hurt the validity of the test, but will largely influence its power.

A natural question to ask is then whether we can make use of information from multiple experiments to further increase the power of the test. Suppose that we collect data from two experiments on the same $n$ units indexed by $i = 1, \dots, n$. In order to increase the power of the previous testing procedure, a natural idea is to reduce the variance in the test statistic computed in Algorithm 1. To do so, instead of focusing on $Y_{i,2}$ itself, we focus on $Y_{i,2} - Y_{i,1}$. This difference is helpful in removing variance of the $Y_i$'s that is shared by $Y_{i,1}$ and $Y_{i,2}$ but cannot be explained by the treatment and covariates. If a unit has some hidden individual characteristics, those characteristics could influence both $Y_{i,1}$ and $Y_{i,2}$ in a similar fashion but may not be well captured by the observed covariates. To make this intuition precise, we present Algorithm 2, which makes use of information from two experiments and tests for the existence of an interference effect. We have also included an illustration of the algorithm in Figure 2.

Algorithm 2 Testing for interference (two experiments).
1. Let $I_{\text{nc}} = \{i : W_{i,1} = W_{i,2}\}$ be the set of units whose treatment did not change over the experiments. Randomly sample a subset of $I_{\text{nc}}$ of size $n/2$; call the subset $I_{\text{foc}}$. Let $I_{\text{aux}} = [n] \setminus I_{\text{foc}}$.
2. Let $Y^{\text{diff}}_i = Y_{i,2} - Y_{i,1}$. Compute a test statistic $T^{(0)} = T(W_{\text{foc},1:2}, X_{\text{foc}}, Y^{\text{diff}}_{\text{foc}}, H_{\text{foc},1:2})$.
3. For $b = 1, \dots, B$:
   Randomly permute treatments for the auxiliary units of the data: $W^{(b)}_{\text{aux},1:2}$.
   Recompute the candidate exposure for the focal units: $H^{(b)}_{i,k} = h_i(W^{(b)}_{-i,k})$ for $i \in I_{\text{foc}}$ and $k \in \{1, 2\}$.
   Recompute the test statistic: $T^{(b)} = T(W_{\text{foc},1:2}, X_{\text{foc}}, Y^{\text{diff}}_{\text{foc}}, H^{(b)}_{\text{foc},1:2})$.
   End For
Output: the p-value.
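A matching sketch of Algorithm 2, again with illustrative choices: the statistic correlates outcome differences with exposure changes, and the auxiliary units' whole treatment paths are permuted jointly so that the monotone staircase structure is preserved.

```python
import numpy as np

def alg2_pvalue(W1, W2, X, Y1, Y2, h, B=200, rng=None):
    """Sketch of Algorithm 2 (two experiments); h maps treatments to exposures."""
    rng = rng or np.random.default_rng()
    n = len(W1)
    Ydiff = Y2 - Y1
    I_nc = np.flatnonzero(W1 == W2)               # treatment did not change
    foc = rng.permutation(I_nc)[: n // 2]         # focal units drawn from I_nc
    aux = np.setdiff1d(np.arange(n), foc)

    def stat(W1v, W2v):
        Hd = h(W2v) - h(W1v)                      # change in candidate exposure
        return abs(np.corrcoef(Ydiff[foc], Hd[foc])[0, 1])

    T0 = stat(W1, W2)
    Tb = np.empty(B)
    for b in range(B):
        perm = rng.permutation(len(aux))
        W1b, W2b = W1.copy(), W2.copy()
        # Permute (W_{i,1}, W_{i,2}) pairs among auxiliary units jointly.
        W1b[aux], W2b[aux] = W1[aux][perm], W2[aux][perm]
        Tb[b] = stat(W1b, W2b)
    return (1 + np.sum(Tb >= T0)) / (B + 1)
```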
Algorithm 2 has a few key differences from Algorithm 1. First, the choices of focal units are different. In Algorithm 1, the choice of the focal units cannot depend on the treatment assignments $W_{1:n}$, whereas in Algorithm 2, the focal units are randomly chosen from those whose treatment did not change. This specific choice guarantees that the treatment of the $i$th unit will not influence the difference of $Y_{i,2}$ and $Y_{i,1}$ much. Second, as mentioned above, $Y^{\text{diff}}$ is used instead of $Y$ itself when computing the test statistics. As explained above, this helps reduce variance. Third, instead of regenerating treatments, Algorithm 2 permutes the treatments of the auxiliary units. This change is necessary to guarantee the procedure's validity: the choice of focal units depends on the treatment vector, and thus naively regenerating treatments would no longer give a valid procedure. This will be demonstrated in Section 4.

Testing with a time fixed effect model

In the previous section, we allow the existence of an "arbitrary time effect". In particular, Hypothesis 1 allows the outcome $Y_{i,k}$ to depend on the treatments in other experiments, and does not restrict the relationship among outcomes in different experiments. This brings flexibility and generality, but it could reduce the power of the testing procedures. In this section, we make additional assumptions on the structure of the time effect and propose a different testing procedure.

Assumption 1 (No carryover effects). The potential outcome $Y_{i,k}(w_{1:n,1:K})$ depends on $w_{1:n,1:K}$ only through $w_{1:n,k}$.

Assumption 1 states that the outcomes in experiment $k$ depend only on treatments assigned in experiment $k$. In other words, the effect of treatment in one experiment will not carry over to the other experiments. Under Assumption 1, we can simplify the notation of potential outcomes: for any $w_{1:n} \in \{0,1\}^n$, we write $Y_{i,k}(w_{1:n})$ as the potential outcome and assume that the observed outcomes satisfy $Y_{i,k} = Y_{i,k}(W_{1:n,k})$. Note the difference from the previous notation. Previously, we wrote the potential outcomes $Y_{i,k}(w_{1:n,1:K})$ for any $w_{1:n,1:K} \in \{0,1\}^{n \times K}$. Here we focus on the potential outcomes $Y_{i,k}(w_{1:n})$ for any $w_{1:n} \in \{0,1\}^n$. Following this new notation, we make an additional assumption.

Assumption 2 (Time fixed effect model). $Y_{i,k}(w_{1:n}) = \alpha_i(w_{1:n}) + u_k + \epsilon_{i,k}(w_{1:n})$, where the errors $\epsilon_{i,k}(w_{1:n})$ are zero mean and i.i.d. across experiments.

Assumption 2 assumes a time fixed effect model. The term $u_k$ captures the time effect: some special events may happen when the $k$th experiment is implemented, and Assumption 2 assumes that the effect of such events is shared by all units in the experiments. The term $\alpha_i(w_{1:n})$ captures the individual effect, which could depend on the treatment of unit $i$ as well as treatments of other units. Finally, the terms $\epsilon_{i,k}(w_{1:n})$ are errors that are i.i.d. across experiments. We also note that the commonly used no temporal effect assumption is a special case (stronger version) of Assumption 2. The no temporal effect assumption assumes that $Y_{i,k}(w_{1:n}) = \alpha_i(w_{1:n}) + \epsilon_{i,k}(w_{1:n})$, where the errors $\epsilon_{i,k}(w_{1:n})$ are zero mean and i.i.d. across experiments. This corresponds to Assumption 2 with all time fixed effects $u_k = 0$. The no temporal effect assumption is particularly plausible when all the experiments are implemented within a short period of time, where the distribution of $Y_{i,k}(w_{1:n})$ is not expected to change by much.

Assumption 1 and Hypothesis 1 together state that the outcome $Y_{i,k}$ depends only on the treatment of unit $i$ in experiment $k$. Therefore, under Assumption 1 and Hypothesis 1, we can further simplify the notation of potential outcomes: for any $w \in \{0,1\}$ we write $Y_{i,k}(w)$ as the potential outcome and assume that the observed outcomes satisfy $Y_{i,k} = Y_{i,k}(W_{i,k})$.
With this new notation, Assumption 2, together with Assumption 1 and Hypothesis 1, becomes a new hypothesis:

Hypothesis 1' (No cross-unit interference, time fixed effect). $Y_{i,k}(w) = \alpha_i(w) + u_k + \epsilon_{i,k}(w)$ for $w \in \{0,1\}$, such that the vectors $\epsilon_{1:n,1}(w), \dots, \epsilon_{1:n,K}(w)$ are independently and identically distributed, independently of the functions $\alpha_{1:n}$, the vector $u_{1:K}$, the treatments $W_{1:n,1:K}$, the covariates $X_{1:n}$, and the other errors $\epsilon_{j,l}(w)$ for $l \ne k$.

This corresponds to the two-way ANOVA in the statistics literature [Yates, 1934, Fujikoshi, 1993] and the two-way fixed effect model in the economics literature [Bertrand et al., 2004, Angrist and Pischke, 2009]. In the previous section, we conduct permutation tests that permute the data "vertically", i.e., permute different units. Here, with the additional assumptions, we can conduct permutation tests that permute the data "horizontally", i.e., permute different time points or experiments. To motivate the permutation test, consider two units $i$ and $j$. Assume that $i$ has been in the treatment group the whole time, while $j$ has been in the control group the whole time. Under Hypothesis 1', we have for the first experiment
$$Y_{i,1} - Y_{j,1} = \alpha_i(1) - \alpha_j(0) + \epsilon_{i,1}(1) - \epsilon_{j,1}(0),$$
and similarly for the second experiment
$$Y_{i,2} - Y_{j,2} = \alpha_i(1) - \alpha_j(0) + \epsilon_{i,2}(1) - \epsilon_{j,2}(0);$$
the time effects $u_k$ cancel in the differences, and the error vectors are i.i.d. across experiments. To put it simply, under Hypothesis 1', $Y_{i,1} - Y_{j,1}$ has the same distribution as $Y_{i,2} - Y_{j,2}$. However, when there is cross-unit interference, the two distributions could be different. Consider a simple model:
$$Y_{i,k} = \alpha_i + \beta W_{i,k} + \theta W_{i,k} H_{i,k} + \epsilon_{i,k}, \tag{8}$$
where $H_{i,k}$ is the fraction of neighbors of unit $i$ treated in experiment $k$, and the $\epsilon_{i,k}$'s are some i.i.d. zero-mean errors. Under this model, for $i$ treated and $j$ in control,
$$Y_{i,k} - Y_{j,k} = \alpha_i - \alpha_j + \beta + \theta H_{i,k} + \epsilon_{i,k} - \epsilon_{j,k}.$$
When the number of neighbors of unit $i$ is large, by the law of large numbers, we have $H_{i,1} \approx \pi_1$ and $H_{i,2} \approx \pi_2$. We can then observe that $Y_{i,1} - Y_{j,1}$ and $Y_{i,2} - Y_{j,2}$ have different distributions; in particular, they have different means. Given the above observation, we can conduct a permutation test permuting pairs of $(i, j)$ across experiments. We outline the algorithm in Algorithm 3. We also provide an illustration of Algorithm 3 in Figure 3.

Algorithm 3 Testing for interference (two experiments, time fixed effect model).
1. Let $I_1$ be the set of units treated in both experiments and $I_0$ the set of units in control in both experiments.
2. Using a matching algorithm $m$ that looks only at the covariates $X$, match each $i \in I_1$ to a distinct unit $m(i) \in I_0$; let $I_m = \{m(i) : i \in I_1\}$.
3. Let $Y^{\text{diff}}_{i,k} = Y_{i,k} - Y_{m(i),k}$, which is the vector of differences between the outcomes of the treated units and those of the matched units. Compute a test statistic $T^{(0)} = T(Y^{\text{diff}}_{I_1,1:2}, X_{I_m}, H_{I_m,1:2}, X_{I_1}, H_{I_1,1:2})$.
4. For $b = 1, \dots, B$: For each $i \in I_1$: randomly permute outcomes across experiments, i.e., swap $Y^{\text{diff}}_{i,1}$ and $Y^{\text{diff}}_{i,2}$ with probability $1/2$. Recompute $T^{(b)} = T(Y^{\text{diff},(b)}_{I_1,1:2}, X_{I_m}, H_{I_m,1:2}, X_{I_1}, H_{I_1,1:2})$. End For
Output: the p-value.

In Algorithm 3, we compare the value of a test statistic to the value of the statistic after permutation. One simple choice of test statistic is the difference-in-differences statistic:
$$T(Y^{\text{diff}}_{I_1,1:2}, X_{I_m}, H_{I_m,1:2}, X_{I_1}, H_{I_1,1:2}) = \big|\mathrm{mean}(Y^{\text{diff}}_{I_1,2}) - \mathrm{mean}(Y^{\text{diff}}_{I_1,1})\big|,$$
where $I_1$ and $I_m$ are defined in the first steps of Algorithm 3. We use the simple model (8) discussed above to explain why this choice of statistic is reasonable. Under model (8), the difference-in-differences statistic (without absolute value) will be
$$\mathrm{mean}(Y^{\text{diff}}_{I_1,2}) - \mathrm{mean}(Y^{\text{diff}}_{I_1,1}) \approx \theta\big(\mathrm{mean}(H_{I_1,2}) - \mathrm{mean}(H_{I_1,1})\big) \approx \theta(\pi_2 - \pi_1).$$
However, after permutation, the difference-in-differences statistic (without absolute value) will be mean zero. Therefore, $T^{(0)}$ and $T^{(b)}$ will have different distributions, and thus the p-value will be far from the $\mathrm{Unif}[0,1]$ distribution. One advantage of this difference-in-differences test statistic is its simplicity. To compute this statistic, there is no need to construct a candidate exposure or any interference graph, and thus the computational cost of the test statistic is very low. This test statistic is also very intuitive to understand.
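A minimal sketch of Algorithm 3 with the difference-in-differences statistic. The greedy nearest-neighbor matching here is a stand-in for whatever matching algorithm the experimenter prefers, and the sketch assumes at least as many never-treated as always-treated units.

```python
import numpy as np

def alg3_pvalue(W1, W2, X, Y1, Y2, B=200, rng=None):
    """Sketch of Algorithm 3 under the time fixed effect model (two experiments)."""
    rng = rng or np.random.default_rng()
    treated = np.flatnonzero((W1 == 1) & (W2 == 1))      # treated throughout
    control = list(np.flatnonzero((W1 == 0) & (W2 == 0)))
    pairs = []
    for i in treated:                                     # greedy match on covariates only
        j = min(control, key=lambda c: np.linalg.norm(X[i] - X[c]))
        control.remove(j)
        pairs.append((i, j))
    # Y^diff per pair and per experiment.
    D = np.array([[Y1[i] - Y1[j], Y2[i] - Y2[j]] for i, j in pairs])

    def did(Dm):                                          # difference-in-differences
        return abs(Dm[:, 1].mean() - Dm[:, 0].mean())

    T0 = did(D)
    Tb = np.empty(B)
    for b in range(B):
        flip = rng.integers(0, 2, size=len(D)).astype(bool)
        Db = D.copy()
        Db[flip] = Db[flip][:, ::-1]                      # swap the two experiments per pair
        Tb[b] = did(Db)
    return (1 + np.sum(Tb >= T0)) / (B + 1)
```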
Recall the motivating example in Section 1.1: when the difference-in-means estimators are different, the difference-in-differences test statistic is large. With this test statistic, our algorithm formalizes the intuition of the motivating example in Section 1.1. The difference-in-differences statistic is not the only one we can choose. Indeed, just as for Algorithms 1 and 2, we have full flexibility in choosing the test statistic. For example, we can add covariate adjustment into the test statistic: instead of taking the difference of $\mathrm{mean}(Y^{\text{diff}}_{I_1,2})$ and $\mathrm{mean}(Y^{\text{diff}}_{I_1,1})$, we can take the difference of the fitted intercepts after regressing $Y^{\text{diff}}_1$ (and $Y^{\text{diff}}_2$) on $X_{I_m}$ and $X_{I_1}$. We can also bring the candidate exposure $H$ into the picture. For example, we can similarly define $H^{\text{diff}}_{i,k} = H_{i,k} - H_{m(i),k}$. Then one plausible test statistic (when $H_{i,k} \in \mathbb{R}$) is the following:
$$T(Y^{\text{diff}}_{I_1,1:2}, X_{I_m}, H_{I_m,1:2}, X_{I_1}, H_{I_1,1:2}) = \big|\mathrm{Corr}\big(Y^{\text{diff}}_{I_1,2} - Y^{\text{diff}}_{I_1,1},\; H^{\text{diff}}_{I_1,2} - H^{\text{diff}}_{I_1,1}\big)\big|.$$

Finally, we want to comment on the matching algorithm $m$ used in Algorithm 3. We would first like to stress that as long as the matching algorithm only looks at the covariates $X$, the test will be valid regardless of the quality of matching. In the most extreme case, we can simply conduct a random matching, and the test will remain valid. More ideally, we would hope that each $i$ is matched to an $m(i)$ such that $X_i$ is close to $X_{m(i)}$. This matching step helps reduce variance due to the covariates and thus increases the power of the test. In the causal inference literature, matching algorithms have been widely studied [Rubin, 1973, Stuart, 2010], and we recommend that experimenters choose from existing algorithms based on their needs and the computational resources available.

Usage of graphs of experimental units

In implementing the previously proposed algorithms, we often find it helpful to construct a graph of the $n$ experimental units. Formally, let $G = (V, E)$, with vertex set $V = \{1, 2, \dots, n\}$ and edge set $E = \{E_{ij}\}_{i,j=1}^n$. We will discuss a few different ways of using graphs to test for and learn the interference structure.

Interference graph. A graph can be constructed to model interference and to help compute candidate exposures. We call such a graph an interference graph. When experimental units are users, it is plausible to assume that a user's behavior is mostly influenced by friends in a social network. In this case, we can simply take the interference graph to be the social network, i.e., we set $E_{ij} = 1$ if users $i$ and $j$ are friends on the social network. With this graph, many candidate exposures can be computed easily, e.g., the number of treated friends $H^{\text{numFrds}}_{i,k} = \sum_{j: E_{ij}=1} W_{j,k}$, or the fraction of treated friends.

The interference graph can be constructed differently in other settings. When experimental units are advertisers, there is no natural social network. However, we can construct a "competition network" based on the similarity of the covariates. For a similarity measure $s$ and a threshold $\varepsilon$, we can define $E_{ij} = \mathbb{1}\{s(X_i, X_j) \ge \varepsilon\}$. Such a graph reflects that an advertiser is mostly influenced by its competitors, especially those that are similar to it. Candidate exposures can then be computed based on this interference graph: the number of treated competitors $H^{\text{numCpt}}_{i,k} = \sum_{j: E_{ij}=1} W_{j,k}$, or a weighted average of competitors' treatments, e.g., $\sum_{j \ne i} s(X_i, X_j) W_{j,k} / \sum_{j \ne i} s(X_i, X_j)$.
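Assuming the graph is stored as a dense 0/1 adjacency matrix, these exposures reduce to a matrix-vector product; a small sketch:

```python
import numpy as np

def graph_exposures(A, Wk):
    """Candidate exposures from an interference graph.

    A: (n, n) 0/1 adjacency matrix; Wk: (n,) treatment vector of experiment k.
    Returns the number and the fraction of treated neighbors for every unit.
    """
    num = A @ Wk                                  # number of treated neighbors
    deg = A.sum(axis=1)
    frac = np.divide(num, deg, out=np.zeros(len(deg)), where=deg > 0)
    return num, frac
```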
The interference graph also helps experimenters to understand the nature of interference. Imagine we have two different interference graphs $G_1$ and $G_2$, and we apply the testing procedure separately using $G_1$ and $G_2$. If we observe a much smaller p-value for the procedure using $G_1$ than the one we obtain using $G_2$, then we have some evidence suggesting that interference in the form of $G_1$ is much stronger than in the form of $G_2$. In particular, the units that are connected to unit $i$ in $G_1$ might be the most influential in impacting the outcome of unit $i$. This kind of analysis, though not fully rigorous, can help experimenters build better intuitions for modelling in subsequent analysis. For example, once the interference effect is statistically significant, experimenters may consider re-running experiments with a cluster randomized controlled trial. Understanding the structure of interference can be helpful in constructing better clusters.

Graph in matching. A graph can also be helpful in the matching step of Algorithm 3. In the causal inference literature, matched pairs are often constructed using a minimum cost flow algorithm on a bipartite graph with treated units on one side and control units on the other side [Rosenbaum, 1989, Hansen and Klopfer, 2006]. Here, the cost of flow from unit $i$ to $j$ can be defined as some dissimilarity metric between $X_i$ and $X_j$. For example, the Mahalanobis distance is a common choice of such a dissimilarity metric [Rubin, 1980]. The bipartite graph may not always be a complete bipartite graph: sometimes a caliper can be applied to the graph, resulting in the removal of edges. A caliper based on covariates limits which units can be paired with one another [Mahmood, 2018]. For example, researchers may only want advertisers to be matched/paired with advertisers who sell products of the same category; in such cases, there is an edge between $i$ and $j$ only if they sell products of the same category. Interestingly, calipered graphs may correspond to the interference graph introduced in the above section, and thus we only need to construct the graph once and use it in both the step of computing candidate exposures and the step of matching. This is especially relevant in a market competition application: a company is expected to be mostly influenced by companies selling similar products, and thus we put edges in the interference graph; in the meantime, we would like to match companies selling similar products, and thus we put edges in the bipartite graph used in matching.

Aggregating p-values

One issue with the algorithms proposed above is that the random data split (Algorithms 1 and 2) or the random matching step (Algorithm 3) can inject randomness into the p-value. In order to derandomize the procedure, we can run the algorithms many times and aggregate the p-values. Since the p-values can be arbitrarily dependent on each other, we cannot use Fisher's method to aggregate them, as it requires independence [Fisher, 1925]. Some possible ways include, e.g., setting $p = \frac{2}{n}\sum_i p_i$, i.e., twice the arithmetic mean of the p-values (see Vovk and Wang [2020] for more details).

In the previous section, we discussed the usage of an interference graph in constructing candidate exposures. In practice, experimenters may construct several interference graphs with different sparsity or structure. We can make use of information from different graphs and construct an "aggregated p-value". We can run the algorithms separately for each graph and compute an "aggregated test statistic". For example, we can choose $T_{\text{aggre}} = \sum_m T(G_m)$, where $G_m$ is the $m$th interference graph considered. Then we can compute an aggregated p-value in the following way: in each permutation round $b$, compute $T^{(b)}_{\text{aggre}} = \sum_m T^{(b)}(G_m)$, and set $\hat{p} = \frac{1}{B+1}\big(1 + \sum_{b=1}^{B} \mathbb{1}\{T^{(b)}_{\text{aggre}} \ge T^{(0)}_{\text{aggre}}\}\big)$.

Extension to three or more experiments

More generally, experiments may be conducted more than two times. Formally, suppose that we run $K$ experiments where treatments are randomly assigned according to (1) and (2). To test for interference, we can adopt a similar strategy as in Section 3.1. We outline the general algorithm in Algorithm 4, and we note that Algorithm 2 is a special case of Algorithm 4.

Algorithm 4 Testing for interference ($K$ experiments).
1. Let $I_{\text{nc}} = \{i : W_{i,1} = \cdots = W_{i,K}\}$ be the set of units whose treatment did not change over the experiments. Randomly sample a subset of $I_{\text{nc}}$ of size $n/2$; call the subset $I_{\text{foc}}$. Let $I_{\text{aux}} = [n] \setminus I_{\text{foc}}$.
2. Compute a test statistic $T^{(0)} = T(W_{\text{foc},1:K}, X_{\text{foc}}, Y_{\text{foc},1:K}, H_{\text{foc},1:K})$ that captures the importance of $H$ in predicting $Y$.
3. For $b = 1, \dots, B$:
   Randomly permute treatments for the auxiliary units of the data: $W^{(b)}_{\text{aux},1:K}$.
   Recompute the candidate exposure for the focal units: $H^{(b)}_{i,k} = h_i(W^{(b)}_{-i,k})$, for $i \in I_{\text{foc}}$ and $k \in \{1, 2, \dots, K\}$.
   Recompute the test statistic: $T^{(b)} = T(W_{\text{foc},1:K}, X_{\text{foc}}, Y_{\text{foc},1:K}, H^{(b)}_{\text{foc},1:K})$.
   End For
Output: the p-value.

In practice, we recommend computing the test statistic using the differences of outcomes between experiments (as emphasized in Algorithm 2), since this helps remove common variance shared by outcomes across the experiments. One example of such a statistic is $T(W_{\text{foc},1:K}, X_{\text{foc}}, Y_{\text{foc},1:K}, H_{\text{foc},1:K}) = \sum_{k < l} T_{k,l}$, where $T_{k,l}$ is a two-experiment statistic, as in Algorithm 2, computed from experiments $k$ and $l$.

If we assume a time fixed effect model as in Section 3.2, we can then extend Algorithm 3 to settings with more experiments. We outline the algorithm in Algorithm 5. Again, we note that Algorithm 3 is a special case of Algorithm 5.

Algorithm 5 Testing for interference ($K$ experiments, time fixed effect model).
1. Let $I_1$ be the set of units that are treated in at least one experiment and $I_0 = [n] \setminus I_1$ the set of units that remain in control throughout. (Here we assume that $|I_0| \ge n/2$.)
2. Using a matching algorithm $m$ based only on the covariates $X$, match each $i \in I_1$ to a distinct unit $m(i) \in I_0$; let $I_m = \{m(i) : i \in I_1\}$.
3. Let $Y^{\text{diff}}_{i,k} = Y_{i,k} - Y_{m(i),k}$, which is the vector of differences between the outcomes of the units in $I_1$ and those of the matched units. Compute a test statistic $T^{(0)} = T(Y^{\text{diff}}_{I_1,1:K}, X_{I_m}, H_{I_m,1:K}, X_{I_1}, H_{I_1,1:K})$.
4. For $b = 1, \dots, B$: For each $i \in I_1$: let $S_i = \{k : W_{i,k} = 1\}$ be the set of experiments in which unit $i$ is treated; randomly permute the outcomes $Y^{\text{diff}}_{i,k}$ across $k \in S_i$. Recompute $T^{(b)} = T(Y^{\text{diff},(b)}_{I_1,1:K}, X_{I_m}, H_{I_m,1:K}, X_{I_1}, H_{I_1,1:K})$. End For
Output: the p-value.

Algorithm 5 allows permutation over more experiments than Algorithm 3 does. In particular, if unit $i$ is treated in experiments $K_1, K_1 + 1, \dots, K$, then the algorithm permutes the outcomes for unit $i$ and its matched unit over experiments $K_1, K_1 + 1, \dots, K$. Permuting over more experiments helps the test to leverage information from more experiments and thus increases the power of the test. We have included an illustration of this algorithm in Figure 4.

Validity of the testing procedures

In this section, we establish the validity of the above proposed algorithms. We make use of the following theorem in Hemerik and Goeman [2018a,b, Theorem 2].

Theorem 1 (Random permutations). Let $A_1, A_2, \dots, A_n \in \mathcal{A}$ be $n$ random variables. Let $S_n$ denote the set of all permutations on $[n]$. Assume that $G \subseteq S_n$ is a group such that $(A_{\sigma(1)}, \dots, A_{\sigma(n)}) \overset{d}{=} (A_1, \dots, A_n)$ for all $\sigma \in G$. If $\sigma_1, \dots, \sigma_B$ are drawn independently and uniformly from $G$, then for any test statistic $T$, the p-value
$$\hat{p} = \frac{1}{B+1}\Big(1 + \sum_{b=1}^{B} \mathbb{1}\big\{T(A_{\sigma_b(1)}, \dots, A_{\sigma_b(n)}) \ge T(A_1, \dots, A_n)\big\}\Big)$$
satisfies $\mathbb{P}(\hat{p} \le \alpha) \le \alpha$ for any $\alpha \in (0,1)$.
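A sketch of both aggregation schemes discussed above; the `2 * mean` rule is the Vovk-Wang bound for arbitrarily dependent p-values, and the statistic-level aggregation assumes the same permutation draws are reused across graphs.

```python
import numpy as np

def aggregate_pvalues(pvals):
    """Valid combination of arbitrarily dependent p-values (Vovk & Wang, 2020)."""
    return min(1.0, 2.0 * float(np.mean(pvals)))

def aggregated_graph_pvalue(T0_by_graph, Tb_by_graph):
    """T0_by_graph: length-M list of observed statistics, one per graph;
    Tb_by_graph: (M, B) array of permuted statistics sharing the same draws."""
    T0 = float(np.sum(T0_by_graph))
    Tb = np.asarray(Tb_by_graph).sum(axis=0)      # T_aggre per permutation round
    return (1 + np.sum(Tb >= T0)) / (len(Tb) + 1)
```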
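Theorem 1 translates directly into the Monte Carlo p-value used by all of the algorithms above. A generic sketch, where `group_draws` yields random elements of the invariance group G as callables acting on the data:

```python
import numpy as np

def group_perm_pvalue(T, A, group_draws):
    """Monte Carlo p-value of Theorem 1. T: test statistic; A: the data;
    group_draws: iterable of B random group elements, each a callable sigma."""
    T0 = T(A)
    Tb = np.array([T(sigma(A)) for sigma in group_draws])
    return (1 + np.sum(Tb >= T0)) / (len(Tb) + 1)
```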
We start by establishing the validity of Algorithms 1, 2, and 4 under general assumptions.

Theorem 2. Assume that the treatments are assigned according to the rules defined in (1) and (2). Under Hypothesis 1, the p-values produced by Algorithms 1, 2, and 4 are valid in the following sense: for any $\alpha \in (0,1)$, $\mathbb{P}(\hat{p} \le \alpha) \le \alpha$.

Proof. Algorithm 1 has been shown to provide valid p-values in Athey et al. [2018]. Since Algorithm 2 is a special case of Algorithm 4, it suffices to prove that the p-values produced by Algorithm 4 are valid. We will be making use of Theorem 1 to show the result.

Theorem 3 (Time fixed effect model). Assume that the treatments are assigned according to the rules defined in (1) and (2). Under Assumptions 1-2 and Hypothesis 1, the p-values produced by Algorithms 3 and 5 are valid in the following sense: for any $\alpha \in (0,1)$, $\mathbb{P}(\hat{p} \le \alpha) \le \alpha$.

Proof. Algorithm 3 is a special case of Algorithm 5, and thus we will only work with Algorithm 5 here. We will again make use of Theorem 1 to show the result. By construction, the elements in $Y^{\text{diff},(b)}_{I_1,1:K}$ are a random permutation of the elements in $Y^{\text{diff}}_{I_1,1:K}$. The allowed permutations in Algorithm 5 clearly form a group. Specifically, the allowed permutations are defined by $\sigma = (\sigma_i)_{i \in I_1}$, where each $\sigma_i$ is a permutation of $S_i = \{k : W_{i,k} = 1\}$, acting by $\sigma(Y^{\text{diff}})_{i,k} = Y^{\text{diff}}_{i,\sigma_i(k)}$. Following this notation, by Theorem 1, it suffices to show that for any allowed permutation $\sigma$,
$$\sigma(Y^{\text{diff}}_{I_1,1:K}) \mid W_{1:n,1:K}, X_{1:n}, I_m \overset{d}{=} Y^{\text{diff}}_{I_1,1:K} \mid W_{1:n,1:K}, X_{1:n}, I_m.$$

Simulations

In this section, we focus on a form of network interference. Specifically, we use a real-life social network to describe social interactions among units. We generate outcomes with some magnitude of network interference and evaluate our methods based on these generated outcomes. Our simulations can be viewed as semi-synthetic experiments: we use a real-life network, but we generate outcomes according to some model. We consider the Swarthmore network in the Facebook 100 dataset [Traud et al., 2012]. All networks in this dataset are complete online friendship networks for one hundred colleges and universities collected from a single-day snapshot of Facebook in September 2005. Here we focus on the Swarthmore college network in our simulation. To make the social network connected, we extract the largest connected component of the Swarthmore network. To summarize, the network we use is of size 1657 with 61049 edges. The diameter of the network is 6 and the average pairwise distance is 2.32. Throughout this section, we assume that we have access to the data of three randomized experiments. We take treatment probabilities $\pi_1 = 10\%$, $\pi_2 = 25\%$, and $\pi_3 = 50\%$. In the following simulation studies, we consider a level of significance $\alpha = 0.05$. Every dot on each plot is an average over 500 replications. We take $B = 200$.

Under general assumptions

We compare the power of the tests given in Algorithms 1, 2, and 4. We run Algorithm 4 using all three experiments, run Algorithm 2 using the second and the third experiments, and run Algorithm 1 using the third experiment, i.e., we always use the experiments with the largest treatment probabilities. We discuss the choice of test statistics in Appendix A. In Figure 5a, we assume a linear model of the outcome $Y$; in Figure 5b, we assume a nonlinear model. The details of the generating model can also be found in Appendix A. In Figures 5a and 5b, we plot the power of testing Algorithms 1, 2, and 4 at different levels of interference effects (signal strengths). In the figures, the fraction of common variance controls the correlation of the individual outcomes across experiments. We observe from Figures 5a and 5b that utilizing more experiments helps our algorithms become more powerful, especially when the fraction of common variance is high. As discussed in Section 1.2, our work is the first to consider testing interference with multiple randomized experiments.
Therefore, we can treat the algorithm utilizing one experiment as the baseline method representing the state of the art. Our algorithms appear to have a clear advantage over the baseline in terms of power. We also find that the regression statistic performs better than the correlation statistic, because the regression step helps reduce variance caused by the observed covariates.

Time fixed effect model

We compare the power of the tests given in Algorithms 4 and 5. We run both algorithms using all three experiments. We use a regression test statistic in both algorithms. We discuss the choice of test statistics and matching algorithms in Appendix A. In Figure 6a, we assume a linear model of the outcome $Y$, whereas in Figure 6b, we assume a nonlinear model. The details of the generating model can also be found in Appendix A. In Figures 6a and 6b, we plot the power of testing Algorithms 4 and 5 at different levels of interference effects (signal strengths). Algorithm 5 (testing with a time fixed effect model) appears more powerful than Algorithm 4 (testing under general assumptions). To understand this phenomenon, we recall that Algorithm 4 permutes data across units, whereas Algorithm 5 permutes data across experiments. Due to the nature of A/B tests, there is more variability in treatment allocation across experiments than across units. For example, assume that all units have around $n_{\text{ngb}}$ neighbors in the social network. Looking at the fraction of neighbors in the treatment group, we find that the variation of this quantity across units is of scale $1/\sqrt{n_{\text{ngb}}}$, whereas the variation of this quantity across experiments is of constant scale. By permuting over data points that are more different, Algorithm 5 gains extra power.

Recall that there is a matching step in Algorithm 5. We find from Figures 6a and 6b that covariate-based matching outperforms random matching, especially under a nonlinear outcome model. In a linear model, the regression step has already removed almost all of the variance caused by observed covariates. In a nonlinear model, however, the regression step cannot fully remove all variance, and the matching step can help further reduce it.

Applications

In this section, we illustrate how the proposed procedure has been successfully implemented at LinkedIn as an add-on to their experimentation toolkit. Like other firms in the technology sector such as Google and Meta, LinkedIn makes business decisions in a data-driven manner and has a culture of "testing everything". To support the need to run concurrent A/B tests at scale, LinkedIn built an in-house experimentation platform, called T-REX (Targeting, Ramping, and Experimentation), which provides end-to-end experimentation support [Xu et al., 2015, Ivaniuk, 2020]. Regardless of the application, T-REX implements simple Bernoulli randomization and relies on the t-test for readout, without taking into account potential interactions among experimental units. This becomes a major limitation for experimentation in a marketplace environment, including the ads marketplace, where units on either side of the marketplace (advertisers and ad viewers) can interfere with each other [Basse et al., 2016, Pouget-Abadie et al., 2019a, Liu et al., 2021, Johari et al., 2022]. For example, ad campaigns that share targeting audiences interfere with each other by competing in auctions for ad slots; different ad viewers with similar attributes are connected through the finite budget of certain ad campaigns.
To remove bias in experiments caused by interference, LinkedIn has implemented the Budget-split platform on top of T-REX for experimentation in their ads marketplace [Liu et al., 2021]. However, since Budget-split uses two halves of the marketplace to simulate the counterfactuals under different treatment variants, it does not support the classic factorial design. Under the current implementation, the platform only runs one experiment at a time, which is much smaller than the total number of experiments they need to run. This limitation in Budget-split capacity severely delays innovation: teams need to wait for weeks for a Budget-split slot in order to get an accurate measurement of their feature ramp before product launch. Nevertheless, not all ramps suffer from unit interaction, even in the ads marketplace setting. Running Budget-split experiments with negligible interference incurs a huge opportunity cost. Ideally, the Budget-split platform wants to prioritize tests that are impacted the most by interference effects.

At LinkedIn, all feature launches start with small-percentage ramps for risk mitigation and gradually increase the treatment percentage (i.e., 1%, 5%, 10%, 25%) before reaching the iteration for treatment effect measurement (50%) [Xu et al., 2018, Mao and Bojinov, 2021]. Specifically, Budget-split amounts to a 50% ramp on the viewers' side. This increasing allocation scheme provides us with information to detect potential interference. With the algorithms proposed in this paper, we implemented a screening step for each feature after the 25% iteration. The experiments are then ranked by the p-value in the interference test to determine their priority on the Budget-split platform. It is important to note that the screening module was designed as an add-on to the system without touching LinkedIn's existing experimentation solutions such as T-REX. By default, the interference detector only requires experimentation data from the two previous iterations and runs Algorithm 3. Users have the option to provide additional network information that characterizes the potential interference mechanism among units and run the other algorithms in this paper. Because of this standalone nature, a similar interference detector can readily be added to any existing experimentation platform to trigger alerts when interference might cause a problem.

As an illustration, we consider an online controlled experiment implemented by LinkedIn. The treatment in this experiment corresponds to a new feature that improves the quality of LinkedIn members' attributes for ads targeting. We run a series of experiments with increasing allocation with the members as the randomization units. An interference effect is expected in these experiments: when the allocation percentage is small, only a small set of members have the updated attributes, making them easier for ad campaigns to target. Thus, when comparing metrics such as total ad impressions, these members tend to have larger average outcomes than members in the control group. When the treatment allocation increases, more members get the improved attributes. Since the total ad budget does not increase much, the average difference between treatment and control units becomes smaller. Figure 1 shows the average differences between treatment and control units in the experiment series. Figure 7 shows the output from the interference detector after running Algorithm 3 based on the 10% and 25% iterations with respect to two different metrics.
The p-values of the permutation test confirm the strong interference effects in these experiments.

Discussion

Missingness. In this paper, we make the assumption that the dataset is complete. A natural future direction of work is to extend the current methods to scenarios with missing data. It is not hard to show that if the data is missing completely at random (MCAR), then the proposed testing procedures are still valid. When MCAR is unrealistic, it will be interesting to study whether our methods can still be applied under certain conditions. In practice, experimenters need to carefully examine the possible causes and consequences of missingness and make decisions correspondingly.

Selective inference. We propose to use our testing procedure as a screening step for A/B testing: if the test suggests that no interference exists, then the experimenter can proceed with classical causal inference analysis. Strictly speaking, the data is used twice here: in the screening step and in the follow-up analysis. It would be of interest to understand the impact of the screening step on the follow-up analysis, and to develop valid statistical inference methods conditioning on the result of the screening step.

Sequential testing. Another question left open by this paper is whether the proposed methods can be extended to the sequential testing setting. Our current procedure fixes the number of experiments a priori and constructs a single p-value from the permutation test. In real life, the treatment probability increases gradually, and it would be of practical interest to end the experiment early as soon as we detect any interference. In that scenario, we need to take into account the randomness in the stopping time and construct always valid p-values [Johari et al., 2017].

A.1 Under general assumptions

In Section 5.1, we compare the power of the tests given in Algorithms 1, 2, and 4.

A.1.1 Test statistics

Here, we discuss the test statistics used by the algorithms. Let $H_{i,k}$ be the fraction of treated neighbors of unit $i$ in experiment $k$. Let $N_i$ be the number of neighbors of unit $i$ in the social network.

One experiment. For Algorithm 1, we use the following test statistic: run a linear regression of $Y_{\text{foc}} \sim W_{\text{foc}} + X_{\text{foc}} + N_{\text{foc}} + H_{\text{foc}}$, extract the regression coefficient of $H$, and take the absolute value of the coefficient.

Two experiments. For Algorithm 2, we consider two different test statistics, a correlation statistic and a regression statistic. For the correlation statistic, we take $T(W_{\text{foc},1:2}, X_{\text{foc}}, Y^{\text{diff}}_{\text{foc}}, H_{\text{foc},1:2}) = \mathrm{Corr}\big(Y^{\text{diff}}_{\text{foc}}, H_{\text{foc},2} - H_{\text{foc},1}\big)$. For the regression statistic, we run a regression of $Y^{\text{diff}}_{\text{foc}} \sim X_{\text{foc}} + N_{\text{foc}} + H_{\text{foc},1} + (H_{\text{foc},2} - H_{\text{foc},1})$, extract the regression coefficient of $(H_{\text{foc},2} - H_{\text{foc},1})$, and take the absolute value of the coefficient.

Three experiments. Let $T_{k,l}$ be the test statistic (regression or correlation) defined above when only two experiments are utilized (the $k$th and $l$th experiments). We then simply use $T_{1,2} + T_{2,3} + T_{1,3}$ as the test statistic for Algorithm 4 with $K = 3$.

A.1.2 Outcome models

We consider two different outcome models. For the linear model, let $H_{i,k}$ be the fraction of treated neighbors of unit $i$ in experiment $k$. We assume a linear outcome model in the covariates, the unit's own treatment, and the exposure $H_{i,k}$, whose coefficient sets the interference signal strength, where $k \in \{1, 2, 3\}$ and $X_{i,1} \sim N(0.5, 1)$, $X_{i,2} \sim \mathrm{Poisson}(3)$ independently. The errors $\varepsilon_{i,k}$ are such that $(\varepsilon_{i,1}, \dots, \varepsilon_{i,K})$ is distributed as a multivariate Gaussian with $\mathbb{E}[\varepsilon_{i,k}] = 0$, $\mathrm{Var}[\varepsilon_{i,k}] = 1$, and $\mathrm{Cov}[\varepsilon_{i,k}, \varepsilon_{i,l}] = (\text{fraction of common variance})$ for $k \ne l$.
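One convenient way to generate errors with exactly this covariance structure is through a shared component; a small sketch, where `rho` plays the role of the fraction of common variance:

```python
import numpy as np

def correlated_errors(n, K, rho, rng):
    """Unit-variance errors with pairwise correlation rho across K experiments."""
    shared = rng.normal(size=(n, 1))    # component common to all experiments
    idio = rng.normal(size=(n, K))      # experiment-specific component
    return np.sqrt(rho) * shared + np.sqrt(1 - rho) * idio
```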
A.2 Time fixed effect model

In Section 5.2, we compare the power of the tests given in Algorithms 4 and 5.

A.2.1 Test statistics

Here, we discuss the test statistics used by the algorithms. Let $H_{i,k}$ be the fraction of treated neighbors of unit $i$ in experiment $k$. Let $N_i$ be the number of neighbors of unit $i$ in the social network.

Algorithm 4. We use the regression statistic defined in Section 5.1.

Algorithm 5. For Algorithm 5, we use an "anova" statistic. Let $I'_1 = \{i \in I_1 : W_{i,1} = 1\}$ and let $I'_m = \{m(i) : i \in I'_1\}$. We start by concatenating $Y^{\text{diff}}_{\text{concat}} = \big(Y^{\text{diff}}_{I'_1,1}, Y^{\text{diff}}_{I'_1,2}, Y^{\text{diff}}_{I'_1,3}\big)$. Similarly, let $N_{\text{concat}} = (N_{\text{concat},1}, N_{\text{concat},m})$, where $N_{\text{concat},1} = (N_{I'_1}, N_{I'_1}, N_{I'_1})$ and $N_{\text{concat},m} = (N_{I'_m}, N_{I'_m}, N_{I'_m})$. We do the same concatenation for $X$ and $H$. The reason we take the subset $I'_1$ of $I_1$ in the first experiment is that we want $Y^{\text{diff}}_{\text{concat}}$ to be a pure contrast of the treatment group and the control group. Without the subsetting step, $Y^{\text{diff}}$ contains both treatment-control differences and control-control differences. Let $\mathrm{Ind}_2$ be the indicator of the second experiment and $\mathrm{Ind}_3$ be the indicator of the third experiment. We then run two regressions:
Model 1: $Y^{\text{diff}}_{\text{concat}} \sim X_{\text{concat}} + H_{\text{concat}} + N_{\text{concat}} + \mathrm{Ind}_2 + \mathrm{Ind}_3$,
Model 2: $Y^{\text{diff}}_{\text{concat}} \sim X_{\text{concat}} + N_{\text{concat}}$.
Finally, we let the test statistic be the $F$-statistic from the ANOVA test contrasting Model 1 with Model 2.

A.2.2 Matching algorithms

Random matching. We sample $m(i)$ uniformly at random without replacement.

Covariate-based matching. We use optimal matching based on the Mahalanobis distance of the observed covariates and $N_i$ [Sekhon, 2008].

A.2.3 Outcome models

We consider two different outcome models. For the linear model, let $H_{i,k}$ be the fraction of treated neighbors of unit $i$ in experiment $k$. We assume the same linear outcome model as in A.1.2, where $k \in \{1, 2, 3\}$ and $X_{i,1} \sim N(0.5, 1)$, $X_{i,2} \sim \mathrm{Poisson}(3)$ independently. The errors $\varepsilon_{i,k}$ are such that $(\varepsilon_{i,1}, \dots, \varepsilon_{i,K})$ is distributed as a multivariate Gaussian with $\mathbb{E}[\varepsilon_{i,k}] = 0$, $\mathrm{Var}[\varepsilon_{i,k}] = 1$, and $\mathrm{Cov}[\varepsilon_{i,k}, \varepsilon_{i,l}] = (\text{fraction of common variance})$ for $k \ne l$. For the non-linear model, let $M_{i,k}$ be the number of treated neighbors of unit $i$ in experiment $k$.
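The "anova" statistic amounts to comparing the residual sums of squares of the two nested linear models. A sketch under that assumption, where the design matrices `Z_full` and `Z_reduced` are built from the concatenated vectors described above:

```python
import numpy as np

def anova_f_stat(y, Z_full, Z_reduced):
    """F-statistic contrasting Model 1 (full) with Model 2 (reduced)."""
    def rss(Z):
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        return float(np.sum((y - Z @ beta) ** 2))
    rss_f, rss_r = rss(Z_full), rss(Z_reduced)
    df1 = Z_full.shape[1] - Z_reduced.shape[1]   # extra parameters in Model 1
    df2 = len(y) - Z_full.shape[1]               # residual degrees of freedom
    return ((rss_r - rss_f) / df1) / (rss_f / df2)
```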
S6 kinase signaling: tamoxifen response and prognostic indication in two breast cancer cohorts

Detection of signals in the mammalian target of rapamycin (mTOR) and the estrogen receptor (ER) pathways may be a future clinical tool for the prediction of adjuvant treatment response in primary breast cancer. Using immunohistological staining, we investigated the value of the mTOR targets p70-S6 kinase (S6K) 1 and 2 as biomarkers for tamoxifen benefit in two independent clinical trials comparing adjuvant tamoxifen with no tamoxifen or 5 years versus 2 years of tamoxifen treatment. In addition, the prognostic value of the S6Ks was evaluated. For the majority of the respondents... We found that S6K1 correlated with proliferation, HER2 status, and cytoplasmic AKT activity, whereas high protein expression levels of S6K2 and phosphorylated (p) S6K were more common in ER-positive and low-proliferative tumors with pAKT-s473 localized to the nucleus. Nuclear accumulation of S6K1 was indicative of a reduced tamoxifen effect (hazard ratio (HR): 1.07, 95% CI: 0.53-2.81, P = 0.84), compared with a significant benefit from tamoxifen treatment in patients without tumor S6K1 nuclear accumulation (HR: 0.42, 95% CI: 0.29-0.62, P < 0.00001). Also S6K1 and S6K2 activation, indicated by pS6K-t389 expression, was associated with low benefit from tamoxifen (HR: 0.97, 95% CI: 0.50-1.87, P = 0.92). In addition, high protein expression of S6K1, independent of localization, predicted worse prognosis in a multivariate analysis, P = 0.00041 (cytoplasm) and P = 0.016 (nucleus). In conclusion, the mTOR-activated kinases S6K1 and S6K2 interfere with proliferation and response to tamoxifen. Monitoring their activity and intracellular localization may provide biomarkers for breast cancer treatment, allowing the identification of a group of patients less likely to benefit from tamoxifen and thus in need of an alternative or additional targeted treatment.

Introduction

Breast cancer is the most common malignancy among women, with more than 75% of tumors positive for hormone receptors and thereby possibly responsive to adjuvant endocrine treatment. Active signaling in the phosphatidylinositol 3-kinase/mammalian target of rapamycin (PI3K/mTOR) cascade is a characterized mechanism of endocrine resistance, with aberrant stimulation promoting estrogen receptor (ER) α (ESR1) activation and tumor growth despite estrogen-antagonizing or estrogen-restriction treatments (Miller et al. 2011). Inhibition with MABs binding tyrosine kinase receptors, or mTOR inhibition with rapamycin analogues, has been shown clinically to sensitize tumors to endocrine therapy, prolonging time to progression (Baselga et al. 2009, 2012, Johnston et al. 2009, Bachelot et al. 2012). However, the pathway involves negative feedback regulatory mechanisms, which may provide unwanted activation of signals, such as AKT and ERK1/2, upon inhibition (Soares et al. 2013). Inhibitors targeting signals at more than one level have been promising in vitro, and such combinations are probably necessary in many tumors to prevent cells from finding alternative routes to proliferation. The best treatment combination for each tumor or subset of tumors has yet to be discovered, although finding large groups of patients responsive to similar therapy will encourage the development of new biomarkers and treatments. Everolimus (Afinitor) has recently been registered as a treatment for women recurring on endocrine therapy. It is a first-generation mTOR inhibitor, targeting the mTOR complex 1 (mTORC1).
Two major downstream targets of mTORC1 are the S6 kinases (S6Ks) 1 and 2. Phosphorylation of these proteins reflects the activity of mTORC1 (Hara et al. 1997). S6K1 and S6K2 are highly homologous proteins encoded by the RPS6KB1 gene in chromosomal region 17q23 and by the RPS6KB2 gene at 11q13, respectively. The S6Ks share several features, such as their ability to phosphorylate the 40S ribosomal protein S6 and their dependence on an activated mTORC1 for complete activation (Lee-Fruman et al. 1999, Park et al. 2002). The two genes are located in regions of the genome that are commonly amplified and associated with poor prognosis in breast cancer (Barlund et al. 2000, Bostner et al. 2007, Perez-Tenorio et al. 2011). Gene expression of RPS6KB1 is upregulated in various cancers (Ip et al. 2011, Sridharan & Basu 2011, Li et al. 2012), and we have observed the prognostic value of RPS6KB2 gene expression in breast cancer. Results of studies have indicated increased protein expression of S6K1 and S6K2 in tumors when compared with normal or benign tissue (Filonenko et al. 2004, Ismail 2012, Li et al. 2012). S6K1 protein overexpression predicted poor prognosis in multiple studies (van der Hage et al. 2004, Perez-Tenorio et al. 2011, Ismail 2012). Measuring phosphorylated (p) threonine (t) at amino acid residue 389 of S6K has become a method to evaluate mTORC1 kinase activity in vitro. Castellvi et al. (2006) detected increased pS6K expression in malignant ovarian tumors compared with benign ovarian tumors, and high pS6K expression was associated with worse prognosis in two breast cancer cohorts (Noh et al. 2008, Kim et al. 2011). S6K1 has been shown to phosphorylate the ER on serine (s) 167, inducing conformational changes in the receptor and making it less responsive to the inhibitory effects of tamoxifen (Yamnik et al. 2009). In addition, RPS6KB1 is a target gene for ER transcription (Maruani et al. 2012). S6K2 has recently been brought to light as a potentially important oncoprotein, with fewer essential roles in normal tissue than S6K1, making it an interesting treatment target (Pardo & Seckl 2013). To further delineate the treatment-predictive role and prognostic value of the S6Ks, we studied protein expression of S6K1, S6K2, and pS6K-t389 in tumors of patients participating in trials comparing adjuvant tamoxifen with no tamoxifen or 2-year with 5-year tamoxifen treatment.

Patients and tumor samples

This study was designed and presented with regard to the reporting recommendations for tumor marker prognostic studies (REMARK) guidelines (McShane et al. 2006). Ethical approval for cohort 1 was obtained from the Karolinska Institute Ethics Board. Ethical approval for cohort 2 was obtained from the Linköping University Ethics Board.

Cohort 1 (tamoxifen versus no tamoxifen)

The Stockholm trial included a cohort of breast carcinoma patients with node-negative disease and a tumor size not exceeding 30 mm, randomized to tamoxifen or no adjuvant treatment. Radiotherapy was used for patients receiving breast-conserving therapy. No adjuvant chemotherapy was given to this group of patients. Demographic data and detailed information on the cohort have been previously described (Rutqvist & Johansson 2007). Tissue microarrays (TMAs) with three individual cores were constructed from formalin-fixed paraffin-embedded (FFPE) tumors from 912 patients. ER, progesterone receptor (PgR), HER2, pAkt-s473, pER-s167, and p-mTOR status, as well as mitosis, were previously determined (Wrange et al. 1978, Rutqvist & Johansson 2007).
Cohort 2 (tamoxifen 5 years versus 2 years)

Tumor tissues of invasive primary breast carcinomas were obtained from a randomized trial of 5 years versus 2 years of adjuvant tamoxifen for postmenopausal early-stage breast cancer, conducted by the Swedish Breast Cancer Group during 1983-1991 in five regional cancer centers in Sweden (Swedish Breast Cancer Cooperative Group 1996). The 130 tumors available for this study were represented in two TMA blocks, and the patients had been diagnosed in the South-east Sweden region. The ER and PgR status had been determined previously through isoelectric focusing or an enzyme immunoassay (Wrange et al. 1978, Fernö et al. 1986). In addition, S-phase fraction and HER2 status were later determined (Fernö et al. 2000, Stål et al. 2000). TMAs with three individual cores were constructed from FFPE tumors, and 4 μm sections were used for immunohistochemistry.

Immunostaining

The two patient series were analyzed with the MAB against pS6K-t389 at a dilution of 1:100 (Cell Signaling Technology, Danvers, MA, USA; #9206). The tumors were additionally analyzed with the S6K1 antibody at a dilution of 1:100 (Cell Signaling Technology; #2708). The tumors in cohort 2 were stained with a newly synthesized S6K2 antibody at a dilution of 1:100 (kindly provided by Professor Filonenko) (Savinska et al. 2012). The TMAs of cohort 1 had been previously stained with an alternative S6K2 antibody (Perez-Tenorio et al. 2011). The PT-link system (Dako, Glostrup, Denmark; PT10126) with the Envision FLEX Target Retrieval Solution, Low pH, was used for deparaffinization, rehydration, and epitope retrieval of the TMAs. The slides were washed in PBS/0.05% Tween-20, subjected to endogenous peroxidase inactivation in 3% hydrogen peroxide, washed in PBS/0.5% BSA, blocked for 10 min in serum-free protein block (Spring Bioscience, Freemont, CA, USA), and incubated with primary antibody in a moisturized chamber at 4 °C overnight. On day 2, the slides were washed in PBS/0.5% BSA, incubated with secondary antibody (Dako Cytomation Envision+ HRP system; Dako) at room temperature for 30 min, washed, developed in PBS/3,3′-diaminobenzidine hydrochloride (DAB) for 8 min, counterstained with hematoxylin for 1 min, washed, dehydrated in an ethanol series, and mounted with Pertex (HistoLab, Västra Frölunda, Sweden). Images at 20× and 40× magnification were produced with an Olympus BX21 microscope and an Olympus DP70 camera, and whole-slide images of the first cohort pS6K staining were generated using the ScanScope AT at 200× magnification (Aperio, Vista, CA, USA).

Scoring

Before scoring, the tumors were evaluated for scoring cutoffs appropriate for each stain, with regard to intensity levels and the percentage of tumor cells with positive staining, when applicable. Scoring was thereafter conducted by two individual observers, blinded to clinical data. A consensus on the score for each tumor was reached following the individual scoring.

Statistical analysis

To compare high and low expression in two groups, the Pearson χ²-test was performed. For rank order of expression levels in four groups, Spearman's test was applied. Hazard ratios (HRs) with 95% CIs were estimated using the Cox proportional hazards model, and the time from diagnosis to any breast cancer recurrence was used as an endpoint.
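The analyses described in this section were run in Statistica (see below). Purely as an illustration of the same workflow, an equivalent Cox regression and log-rank comparison in Python's lifelines package might look like the following; the data file and column names ('time', 'event', 's6k1_high') are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort.csv")            # hypothetical per-patient data

# Hazard ratio with 95% CI for high vs. low S6K1 expression.
cph = CoxPHFitter()
cph.fit(df[["time", "event", "s6k1_high"]], duration_col="time", event_col="event")
print(cph.summary)                         # exp(coef) column gives the HR

# Log-rank test comparing recurrence-free survival between the two groups.
high, low = df[df.s6k1_high == 1], df[df.s6k1_high == 0]
res = logrank_test(high.time, low.time,
                   event_observed_A=high.event, event_observed_B=low.event)
print(res.p_value)
```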
Recurrence-free survival (RFS), defined as the time to the first of the following events, local or distant recurrence or breast cancer-related death, was compared with the log-rank test, and Kaplan-Meier plots were drawn for visualization. Endocrine-treated patients were excluded from the prognostic analyses of the first cohort. All tamoxifen treatment prediction analyses were restricted to patients with ER-positive tumors, and in cohort 2 the analyses were restricted to starting 2 years after diagnosis to avoid arbitrary differences in the two, up until 2 years, identically treated groups. A P value of <0.05 or <0.01 was considered significant. All statistical analyses of the patient cohorts were performed using Statistica 10 software.

Cell culture

The cell line ZR751 (Engel et al. 1978) (ATCC, Manassas, VA, USA; LGC Standards, Teddington, Middlesex, UK) was cultured in Optimem without phenol red (Gibco, Life Technologies), supplemented with 4% heat-inactivated fetal bovine serum (Gibco), at 37 °C and 5% CO2. Cell authentication was done at ATCC using short tandem repeat profiling analysis. All experiments were conducted with cells in the exponential growth phase, and the cell passage number was kept low.

siRNA knockdown

The cells were transfected with siRNA using the Nucleofector Kit V with the Amaxa nucleofection system (Lonza, Basel, Switzerland). Briefly, cells were detached, resuspended in 100 μl nucleofector solution with 300 nM siRNA (Silencer Negative Control No. 1; AM4611, S6K1; 110802 and s12284, and S6K2; 471, Ambion by Life Technologies), transferred to cuvettes, and transfected in an Amaxa Biosystems Nucleofector II, program P20. The cells were transferred to cell culture media, counted using a Countess Automated Cell Counter (Life Technologies), and seeded at 100 000 cells/ml. After 24 h, the cell culture media was changed.

Western blotting analysis

For protein preparation, cells were rinsed in ice-cold PBS and lysed in RIPA buffer containing 150 mM NaCl, 2% Triton, 0.1% SDS, 50 nM Tris pH 8.0, Complete Mini Protease Inhibitor Cocktail (1836153; Roche), PhosSTOP phosphatase inhibitor cocktail (Roche Pharma), and phosphatase inhibitor cocktails 2 and 3 (P5726 and P0044, Sigma-Aldrich). The cell lysates were incubated on ice for 30 min and centrifuged at 20 800 g. The protein concentration of the supernatants was determined by the colorimetric BCA Protein Assay (Thermo Scientific Pierce, Rockford, IL, USA), and the lysates were stored at −70 °C. Samples containing 15 μg protein were denatured with Laemmli sample buffer (Bio-Rad), β-mercaptoethanol (Bio-Rad), and heating at 95 °C for 5 min, then separated on mini-PROTEAN TGX precast gels, 4-15% (Bio-Rad), at 90 V for 15 min and 150 V for 45 min. Separated proteins were transferred onto a PVDF membrane in a Trans-Blot Turbo system (Bio-Rad), program Mixed-MW, for 7 min. The membranes were blocked according to the specification of the primary antibody before incubation with primary antibodies in blocking buffer overnight at 4 °C. The membranes were washed in Tris (2.5 mM)-glycine (19.2 mM)-SDS (0.01%) (Bio-Rad) with 0.1% Tween-20 and incubated for 1 h at room temperature with secondary antibodies. The protein bands were visualized using an Amersham ECL Prime western blot detection system (GE Healthcare Life Sciences, Little Chalfont, Buckinghamshire, UK) and a charge-coupled device camera with the software Image Reader LAS-1000 Pro v.2.6 for detection of chemiluminescent signals (Fujifilm LAS-1000, Stockholm, Sweden).
The antibodies used for western blotting were S6K1, pS6K-t389, pAkt-s473 (Cell Signaling Technology), S6K2 (kindly provided by Professor Filonenko) (Savinska et al. 2012), and a GAPDH antibody (rabbit monoclonal; Epitomics, Cambridge, UK; #5632-1), which was used as an internal control. The MagicMark XP Western Protein Standard ladder was used for molecular weight estimations (Life Technologies).
Fixation and paraffin embedding of cells
The cells at 80% confluence were washed in PBS, dissociated from the surface with TrypLE Express (Life Technologies) for 5 min at 37 °C, and centrifuged at 200 g for 5 min. The cells were washed in PBS and centrifuged again to form a pellet, fixed with 4% formaldehyde (Sigma-Aldrich) at room temperature for 4 h, visualized with hematoxylin, centrifuged, and dehydrated with an ethanol-to-xylene gradient overnight. The cell pellet was paraffin embedded at 56 °C for 4 h by the addition of new paraffin every hour. The embedded cells were cut in 3 μm sections using a rotation microtome (Microm International GmbH, Walldorf, Germany).
λ-phosphatase assay
Phosphorylation specificity of the pS6K-t389 antibody was determined by dephosphorylation of proteins in heregulin β1 (HRG; ImmunoTools, Friesoythe, Germany; 0.1 μM)-treated FFPE ZR751 breast cancer cells by λ-phosphatase (New England Biolabs, Ipswich, MA, USA). The cells on slides were treated with 1000 U λ-phosphatase or water (control) for 2 h at 37 °C, followed by immunohistochemical staining according to the protocol used for the pS6K-t389 antibody.
Results
Immunohistochemical staining of tumors
S6K proteins and phosphorylated kinase expression as potential biomarkers of tamoxifen benefit were analyzed in the two breast cancer cohorts. Distribution of protein expression and cutoffs in the two cohorts are presented in Supplementary Table 1, and a mechanistic scheme of the signaling pathways with treatment-predictive and prognostic results is presented in Supplementary Figure 1 (see section on supplementary data given at the end of this article). Successful staining for S6K1 was found in 849 tumors (cohort 1) and in 130 tumors (cohort 2), and detection of pS6K was possible in 807 tumors (cohort 1) and in 130 tumors (cohort 2).
S6K1 and S6K2 correlate with separate tumor characteristics
In cohort 1, pS6K was highly correlated with both high S6K1 and high S6K2 expression, supporting the antibody selectivity found in vitro toward the two homologs also in FFPE breast tissue (Table 1). In contrast, S6K1 and S6K2 were not significantly co-expressed. Nuclear pS6K correlated well with ER positivity (P = 0.001), as did nuclear S6K2 in the two sets, whereas high S6K1 expression was connected with HER2 positivity (Tables 1 and 2). The AKT protein is central in the PI3K/mTOR pathway, with its activity being regulated upstream and downstream of the S6Ks. AKT stimulates S6K through mTORC1, and mTORC1 or S6K1 repression commonly results in AKT activation through the release of S6K1 inhibition of the IGF1 receptor regulator IRS1 (Tabernero et al. 2008). A connection between active expression of AKT (pAKT-s473) and all analyzed S6K variants was evident in the first set, with the strongest correlations detected when proteins were expressed in the same location: cytoplasm and nucleus, respectively (Table 1). We detected strong coexpression of pAKT and pS6K in the nucleus and of pAKT and S6K2 in the nucleus. A pAKT correlation with S6K1 was observed in the cytoplasm. S6K1 and AKT are both known to phosphorylate the ER on serine 167.
pS6K correlated well with pER-s167 in the cytoplasm and in the nucleus. pAKT and pER-s167 were strongly associated (P < 0.00001), as previously shown in cohort 1. The Spearman rank order test was applied for p-mTOR versus nuclear pS6K (P = 0.0002) and for p-mTOR versus nuclear S6K2 (P = 0.00002). We found high nuclear pS6K and high nuclear S6K2 to correlate significantly with small tumor size, but no obvious relationship between S6K1 and tumor size in the first set (Table 1). In the second set, the pS6K cytoplasmic expression was correlated with large tumor size (Table 2). Results of a previous study indicated that overexpression of S6K2 but not S6K1 was important for cell proliferation in HEK293 cells (Goh et al. 2010). Proliferation in our two sets was represented by mitosis in the first set and S-phase fraction in the second set (Tables 1 and 2). High levels of S6K1 expression indicated increased proliferation, while high levels of nuclear pS6K and S6K2 indicated reduced proliferation in the first set. In the second set, no significant correlations with proliferation were observed; however, a tendency toward increased proliferation was detected with high cytoplasmic S6K1, consistent with results from the first set.
High nuclear S6K1 and high pS6K predict groups that do not benefit from tamoxifen
Up to one-third of the tumors showed a distinct pattern of S6K1 nuclear staining with weaker cytoplasmic staining. A significant tamoxifen benefit was observed in cohort 1 for patients whose tumors did not show the S6K1 nuclear pattern (P < 0.00001) (Fig. 1A). The benefit was not evident in patients with tumors showing nuclear dominance of S6K1 (Fig. 1B), and the difference in treatment efficacy between the groups was significant (Table 3). Similarly, patients in cohort 2 with tumors not showing the S6K1 nuclear pattern tended to benefit from prolonged tamoxifen therapy (Fig. 1C and D). No significant treatment-predictive role was identified for S6K2 alone in the two sets (Perez-Tenorio et al. 2011). The role of nuclear S6K2 seemed dependent on the PgR status, showing a loss of treatment response when tumors expressed high levels of nuclear S6K2 and no PgR.
S6K1 expression indicates worse prognosis
High S6K1 protein expression, independent of localization, predicted a poor prognosis in the tamoxifen-untreated subgroup (Fig. 3A and B). This was also true in multivariate analysis, adjusting for tumor size, grade, and HER2, ER, and PgR status (P = 0.00041 for S6K1 in the cytoplasm and P = 0.016 for S6K1 in the nucleus). The tamoxifen-treated group did not differ in recurrence rate in relation to cytoplasmic S6K1 expression (data not shown). The second cohort did not include tamoxifen-untreated patients, and no significant relationship between recurrence rate and S6K1 expression was observed in the analysis of all patients (Fig. 3C). S6K2 amplification has previously been shown to be a marker of worse prognosis in stage II breast cancer (Perez-Tenorio et al. 2011). Strong S6K2 expression did not associate with prognosis in either of the two cohorts. Thus, pS6K did not qualify as a prognostic marker irrespective of intracellular localization, cutoff, HER2 status, or cohort. A tendency towards improved breast cancer survival was found for high levels of pS6K in the nucleus (HR: 0.72, 95% CI: 0.44-1.17, P = 0.18).
In vitro validation shows antibody specificity
The ER-positive breast cancer cell line ZR751 was transfected with siRNA to knock down the S6K1 and S6K2 mRNAs, separately and in combination.
Results of western blotting indicated almost complete knockdown of proteins at 72 h, indicating an efficient transfection and specific target detection with the antibodies (Fig. 4). The S6K2 antibody showed a weak non-specific band slightly larger than the specific band, which was not S6K1, as the band remained after S6K1 knockdown. The pS6K-t389 (pS6K) antibody detected both pS6K1 and pS6K2. The specific bands of the two phosphorylated homologs were distinguishable by western blotting. pS6K1 appeared at 70 and 85 kDa, whereas pS6K2 (p54 and p56) appeared as one non-separated band at 60 kDa, which was consistent with the sizes stated in the literature. AKT was additionally phosphorylated after S6K1 knockdown. This was not evident after S6K2 knockdown.
Discussion
Signaling of the S6Ks downstream of mTORC1 was retrospectively evaluated in two randomized sets of breast cancer patients. In this study, we have demonstrated S6K protein expression, activity, and location to be useful markers for prediction of response to endocrine treatment, and for prognosis. Resistance to endocrine treatments remains the reality for a large group of patients with recurring breast cancer, and validated biomarkers for treatment prediction in primary tumors are scarce besides ER and PgR status. Aberrant PI3K/mTOR signaling is a highlighted cascade in endocrine therapy resistance, leading to maintained proliferation despite inhibition of hormonal-stimulatory effects. Therefore, we examined S6K1 and S6K2 as markers for activity in the pathway. The protein pER-s167, a downstream target for phosphorylation by AKT, S6K1, and RSK, has been shown previously to increase ER transcriptional activity, and consequently affect the proliferation rate (Yamnik & Holz 2010). The mTORC1 inhibitor rapamycin as well as S6K1 knockdown retained osteosarcoma cells in G1 phase (Fingar et al. 2004), and S6K1 indirectly induced proliferation through ER stimulation in epithelial cells (Yamnik et al. 2009). A mutant active S6K2 in immune cells increased proliferation (Cruz et al. 2005), and in embryonic cells from S6K1/2-knockout mice, mild proliferation defects were observed (Kawasome et al. 1998, Pende et al. 2004). Results of several studies have suggested diverse roles of the S6K isoforms in cell-cycle progression (Lane et al. 1993, Reinhard et al. 1994, Fingar et al. 2004, Boyer et al. 2008). Consequently, the involvement of the S6Ks in proliferation is established in a variety of cells and systems. The common and separate roles of the two homologues for breast cancer patients are yet to be elucidated. This is the first study to our knowledge to show that a dominant pattern of nuclear S6K1 is associated with reduced benefit from tamoxifen in breast cancer patients, indicating that nuclear accumulation of S6K1 in primary biopsies may serve as a potential target to prevent treatment resistance.
Figure 1 Benefit of tamoxifen in ER-positive patients grouped according to location of S6K1 expression. (A versus B) S6K1 accumulation in the nucleus predicted loss of tamoxifen benefit when compared with endocrine-untreated patients in cohort 1 (n = 641), P value for interaction 0.025, and (C versus D) a reduction in the benefit from 5 years of tamoxifen treatment compared with 2 years of treatment in cohort 2 (n = 91), P value for interaction 0.25.
Table 3 Cox proportional hazard analysis of the benefit from tamoxifen in patients with ER-positive tumors in relation to S6K1 nuclear (n) and cytoplasmic (c) location, and phosphorylated S6K-t389 (pS6K) expression.
In addition, cytoplasmic and nuclear overexpression of S6K1 significantly indicated worse prognosis independent of HER2 gene amplification, consistent with results of previous studies on S6K1 protein expression and RPS6KB1 gene amplification and expression (van der Hage et al. 2004, Perez-Tenorio et al. 2011). The S6K1 antibody demonstrated high specificity and could serve as a clinical biomarker. Overactivated S6K has been suggested as an emerging prognostic and treatment-predictive marker in ER-positive breast cancer (Noh et al. 2008, Kim et al. 2011, Beelen et al. 2014). Here, we did not detect worse prognosis with high activity of the S6Ks; instead, a somewhat improved prognosis was observed in the tamoxifen-untreated subset of patients when pS6K was highly expressed in the nucleus. Recently, Beelen et al. (2014) have shown an improved prognosis with high levels of pS6K expression in the cytoplasm. It has been suggested that the p31 short isoform of S6K1 is the primary oncogenic protein in the S6K family (Rosner & Hengstschlager 2011). The pS6K-t389 antibody did not detect this short isoform, which could be an explanation as to why its prognostic value was not seen in our study. Regarding the treatment-predictive value of pS6K-t389, we found high simultaneous expression in cytoplasmic and nuclear compartments to indicate loss of benefit from tamoxifen in the two cohorts. These results highlight pS6K as one of the potential markers in the PI3K/ER crosstalk interfering with response to tamoxifen treatment. The antibody showed high specificity on western blots as well as on FFPE cells. Staining with the phosphorylation-specific antibody was evaluated in the two cohorts separately. Slightly different staining patterns were observed between the cohorts, with nuclear staining only in the first set and variations in staining intensity, although the antibody concentration and method were identical. Therefore, we recommend that the phosphorylation-specific S6K antibody be further investigated before it is taken into consideration for clinical use. Instead, we suggest that this antibody may serve as a validated marker in cohort studies for further evaluation of mTOR inhibition and endocrine treatment response. S6K2 was found in protein complexes by the centrosome in the nuclear membrane (Rossi et al. 2007), and the results of an in silico study indicated domains of S6K2 to connect to chromatin (Ismail et al. 2014). In vitro data indicated that S6K2, not S6K1 or AKT, binds histone 3 and that this was dependent on the C-terminal nuclear localization signal in the S6K2 protein. On the other hand, growth factor stimulation of S6K2 induced phosphorylation of the C-terminal nuclear localizing signal, retaining active S6K2 in the cytoplasm (Valovka et al. 2003). This indicates that S6K2 has active roles in the nuclear and in the cytoplasmic compartment, with possible involvement in proliferation. In contrast, mTOR-dependent growth factor stimulation led to nuclear localization of S6K1 in G1 phase, indicating that a high level of expression of S6K1 in the nucleus is a marker of extracellular growth stimulation (Rosner & Hengstschlager 2011).
The S6Ks' proliferative roles may also act through S6 in concert with 4EBP1, and consequently through translational upregulation of the key G1-to-S phase transition regulator, cyclin D1 (Averous et al. 2008). We observed a strong correlation of S6K1 with markers of high proliferation, and S6K2 correlation with markers of low proliferation. In contrast, Lyzogubov et al. (2005) reported that S6K2, not S6K1, correlated with the proliferation markers Ki67 and proliferating cell nuclear antigen (PCNA). We found that S6K1 correlated with HER2-positive tumors and S6K2 correlated with ER-positive status. This is probably in part a consequence of co-amplification, as RPS6KB1 is located in the 17q23 chromosomal region close to the HER2 gene, and RPS6KB2 is located at 11q13 close to the CCND1 gene coding for the cyclin D1 protein, an amplicon strongly connected with ER expression, reported in this study and others (Bostner et al. 2007, Aaltonen et al. 2009). The opposing correlations of the two S6Ks with proliferation markers may come from their strong connections with HER2 and ER, respectively, with HER2 positivity being associated with high proliferation and ER with low proliferation. However, the prognostic value of S6K1 overexpression found in this study was independent of HER2 status. In glioblastoma, pS6K and pS6 correlated with AKT activation, supporting our findings (Riemenschneider et al. 2006). High levels of expression of pAKT in this study correlated with pS6K in the cytoplasm, mainly S6K1, and in the nucleus, mainly S6K2. However, knockdown of S6K1, but not S6K2, increased the active form of AKT. Inhibition of S6K with rapamycin in pancreatic cancer cells upregulated pAKT, and mTORC1/2 inhibition abrogated the AKT phosphorylation; instead, the MAPK pathway was activated (Soares et al. 2013). Metformin, on the other hand, inhibited mTOR without the regulatory feedback effects on AKT and MAPK. An effect of mTORC1/2 inhibition was also detected in multiple myeloma cells, with a RAF-dependent activation of ERK (Hoang et al. 2012). This indicates that the regulatory networks connecting AKT with the S6Ks can be modulated, and with fine-tuned inhibitors the growth stimulation could be restrained. Upregulation of the HER family RTKs is a common resistance response to endocrine treatment in breast cancer. To circumvent this mechanism, the RTK/PI3K/mTOR pathways have been inhibited in various combinations. Axelrod et al. (2014) have recently described S6K as a critical node and a potential single target by describing similar responses to S6K1 inhibition alone as to double-targeting RTKs and PI3K/mTOR.
Figure 2 Benefit of tamoxifen in ER-positive patients grouped according to pS6K-t389 expression. High levels of expression of pS6K-t389 in the cytoplasm and in the nucleus predicted reduced response to tamoxifen in the two cohorts.
Figure 4 Antibodies showed specific epitope recognition in western blotting. S6K1 and S6K2 protein was downregulated upon siRNA transfection of ZR75-1 breast cancer cells. The pS6K-t389 antibody detected phosphorylated residues of S6K1 (70 and 85 kDa) and of S6K2 (60 kDa). AKT phosphorylation was induced by S6K1 knockdown. Removal of S6K2 did not induce AKT phosphorylation. GAPDH was used as a control for equal protein loading. The figure is representative of three independent experiments.
Figure 3 (A) S6K1 cytoplasmic expression (n = 418) and (B) nuclear expression indicated a significantly worse prognosis in cohort 1 (n = 418). (C) S6K1 cytoplasmic and nuclear expression in all tumors from cohort 2 had no significant prognostic value, although the trend mirrored the results from cohort 1 (n = 130).
We suggest that detection of S6K1 and pS6K1/2 could be used to identify patients with de novo resistance to tamoxifen, thus in need of additional targeted inhibition. Phosphorylated epitopes are known to be unstable and highly dependent on fixation methods, which could be one reason why none has yet reached clinical use for prognostic or treatment-predictive purposes. Therefore, we sought to thoroughly validate a well-known antibody targeting a potentially important site within the PI3K/mTOR cascade: the pS6K-t389 antibody from Cell Signaling Technology. Results from western blot analyses indicate that the antibody detected phosphorylated residues seen as three separated bands, interpreted as the two isoforms of S6K1, p70 and p85, and a single band from the two S6K2 isoforms, p54 and p56. The corresponding bands disappeared in an expected pattern upon knockdown of the two proteins, separately. For clinical use, an antibody should be validated on fixed tissue. Therefore, we used formalin-fixed and paraffin-embedded cells after stimulation with HRG. We observed increased staining after HRG stimulation, and the results of a phosphatase assay indicated phosphorylation specificity of the antibody. To test the S6K1 antibody, we induced downregulation of the protein using siRNA transfection and showed high specificity of the S6K1 antibody on formalin-fixed and paraffin-embedded cells. In conclusion, it is a matter of urgency to identify subgroups of breast cancer patients who fail to respond to endocrine treatment. Biomarkers predicting the response to endocrine treatment and identification of patients likely to benefit from mTOR inhibition will improve survival rates and prolong time to recurrence. In this study, we provide data on the involvement of S6K in the effects of tamoxifen treatment and resistance, with S6K1 protein localization as a potential clinical biomarker. Patients with tumors showing high expression of the biomarker may respond well to an additional targeted treatment along with endocrine treatment in the neoadjuvant or adjuvant setting, as this may reduce recurrence of endocrine-treated breast cancer.
Supplementary data
This is linked to the online version of the paper at http://dx.doi.org/10.1530/ERC-14-0513.
Declaration of interest
The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
Funding
Financial support was obtained from the Swedish Cancer Society, the Swedish Research Council, the Östergötland County Council, and the Lions Research Fund.
Author contribution statement
J Bostner, E Karlsson, and O Stål were involved in establishment of the project; J Bostner, E Karlsson, C B Eding, G Perez-Tenorio, and O Stål were involved in scientific directions of the project; T Fornander and B Nordenskjöld contributed to the materials and clinical data; J Bostner, E Karlsson, H Franzén, and A Konstantinell performed immunohistochemical analysis and scoring; J Bostner, E Karlsson, C B Eding, G Perez-Tenorio, and O Stål performed statistical analysis and data evaluation; J Bostner, C B Eding, and H Franzén performed in vitro antibody validation procedures; and all authors have read and approved the final manuscript.
2016-12-22T08:44:57.161Z
2015-06-01T00:00:00.000
{ "year": 2015, "sha1": "2a1de04a9e5b56ee7e51d33810b408f7eb44add4", "oa_license": "CCBY", "oa_url": "https://erc.bioscientifica.com/downloadpdf/journals/erc/22/3/331.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "85c02a4e4d333e0fc82416f865f9bba00858f4b9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119547506
pes2o/s2orc
v3-fos-license
Association of Staphylococcus nasal colonization and HIV in end-stage renal failure patients undergoing peritoneal dialysis
Abstract
Introduction: Staphylococcal infections can cause significant morbidity in patients undergoing dialysis. This study evaluated the effects of HIV infection on nasal carriage of Staphylococcus aureus, staphylococcal peritonitis, and catheter infection rates in patients with end-stage renal failure managed with continuous ambulatory peritoneal dialysis (CAPD).
Methods: Sixty HIV-positive and 59 HIV-negative CAPD patients were enrolled and followed up for up to 18 months. S. aureus nasal carriage (detected by nasal swab culture), staphylococcal peritonitis (diagnosed by clinical presentation, and CAPD effluent staphylococcal culture and white blood cell count ≥100 cells/µL), and catheter infections (including exit site and tunnel infections) were assessed monthly.
Results: At 18 months, S. aureus nasal carriage rates were 43.3% and 30.5% (p = 0.147) and the methicillin-resistant S. aureus (MRSA) nasal carriage rates were 31.7% and 13.6% (p = 0.018) for the HIV-positive and HIV-negative cohorts, respectively. The HIV-positive cohort was associated with increased hazards for staphylococcal peritonitis (adjusted hazard ratio [AHR] 2.85, 95% confidence interval [CI] 1.19–6.84, p = 0.019) due to an increased coagulase-negative staphylococcal (CNS) peritonitis rate in the HIV-positive cohort compared with the HIV-negative cohort (0.435 vs. 0.089 episodes/person-years; AHR 7.64, CI 2.18–26.82, p = 0.001). On multivariable analysis, CD4+ cell count <200 cells/µL, diabetes, and S. aureus nasal carriage were found to be independent predictors of S. aureus peritonitis.
Conclusions: These findings suggest that HIV infection may be a risk factor for MRSA nasal colonization and may increase the risks of CNS peritonitis, while a CD4+ cell count <200 cells/µL and S. aureus nasal carriage may be important predictors of S. aureus peritonitis.
Introduction
Infection is a major challenge in patients with end-stage renal failure who are managed with continuous ambulatory peritoneal dialysis (CAPD), and it is an important source of morbidity and technique failure. Gram-positive bacteria, most notably coagulase-negative staphylococci (CNS) and Staphylococcus aureus, frequently cause CAPD-associated peritonitis [1][2][3]. Peritonitis caused by CNS infection tends to follow a benign clinical course that is easily treatable, while S. aureus peritonitis can be complicated with relapses and the need for catheter removal, particularly if it is associated with exit site or tunnel infections. An important risk factor for CAPD-associated exit site infections and peritonitis is S. aureus nasal carriage [4][5][6]. An association between HIV infection and increased rates of S. aureus nasal colonization in the general population has been reported, and an increased likelihood of colonization has been suggested in the advanced stages of HIV infection [7,8]. Infective complications in HIV-positive patients on dialysis can cause significant morbidity and mortality, and they can result in the need to transfer to hemodialysis, which gives rise to greater cost burdens to health budgets. In poorly resourced regions, such as sub-Saharan Africa, where HIV is prevalent but access to renal replacement therapy (RRT) is limited, CAPD can represent a cost-effective option. Africa has been estimated to have the lowest RRT access, at between 9% and 16%, reflecting a significant unmet need [9].
Indeed, CAPD can be implemented with relative ease and without the need for complex equipment, and it is well suited for areas that are remote or have limited dialysis facilities [10][11][12]. This study aimed to evaluate the effects of HIV infection on S. aureus nasal carriage, staphylococcal peritonitis, and catheter infection rates in patients with end-stage renal disease (ESRD) who were managed with CAPD.
Study population
This prospective sub-cohort of 119 patients was drawn from a 140-patient cohort recruited from King Edward VIII Hospital and Inkosi Albert Luthuli Central Hospital (IALCH), Durban, South Africa, which has been described previously [13,14]. Consecutive patients aged 18 to 60 years who required dialysis and had newly inserted double-cuffed coiled Tenckhoff catheters were recruited between September 2012 and February 2015, stopping enrollment when each cohort had 70 participants. Sixty HIV-positive and 59 HIV-negative patients who had at least one nasal swab sample taken during follow-up were included in this sub-study. The study protocol was approved by the University of KwaZulu-Natal Biomedical Research Ethics Committee (BE 187/11), and informed consent was obtained from the patients before enrollment. The status of HIV infection was determined by two 4th generation HIV enzyme-linked immunosorbent assays performed by the South African National Health Laboratory Service (NHLS) before enrollment; screening for HIV was performed using a HIV Ag/Ab Combo (CHIV) assay (ADVIA Centaur® XP, Siemens Healthcare Diagnostics, Tarrytown, NY), and confirmation was done using HIV Combi and HIV Combi PT assays (Cobas e601, Roche Diagnostics, Mannheim, Germany). Antiretroviral therapy (ART) was left to the discretion of the local clinic. Y-sets, twin-bag systems, and conventional peritoneal dialysis (PD) solutions (Dianeal® 1.5%, 2.5%, or 4.25% dextrose, icodextrin, or amino acid-based solutions; Baxter Healthcare, Deerfield, IL) were used in all CAPD patients. They generally performed four exchanges per day. All patients received approximately 40 h of practical and theoretical CAPD training in group and individualized sessions conducted by the same nursing team of two senior nurses working together. A prophylactic intravenous antibiotic was administered to all patients prior to PD catheter insertion. Patients were prescribed 4% chlorhexidine Surgiscrub soap for hand washing and chlorhexidine 0.5% in ethanol 70% solution for hand rubs between hand washings. They were directed to use water and medicated soaps of their choice for exit site care.
Enrollment and follow-up
The patients' demographic, clinical, and biochemical data were documented on enrollment. The patients were followed up monthly at a central renal clinic in IALCH for 18 months or until the endpoints of catheter removal and subsequent transfer to hemodialysis or death. At each follow-up assessment, nasal swabs were taken, phlebotomy was performed for biochemical tests, and the details of infective complications and hospital admissions in the intervening periods were recorded on predefined questionnaires. Full blood counts were performed, the serum concentrations of C-reactive protein (CRP), urea, creatinine, electrolytes, and albumin were measured at NHLS, and the results were periodically retrieved from the IALCH's electronic results database.
Microbiology
Swabbing of the anterior nasal vestibules with sterile swabs (Amies Agar Gel-No Charcoal Transport System; Copan Italia SpA, Brescia, Italy) was performed monthly by a research nurse, and the swabs were transported to the laboratory for processing. Colistin-nalidixic agar and mannitol salt agar media were used for the cultures. The CAPD nurse took PD effluent specimens for white blood cell (WBC) counts and culture when the patients' clinical presentations suggested peritonitis, and they were transported to the NHLS microbiology department in sterile specimen bottles for processing. The PD effluent WBC counts were determined using a 40× microscope objective lens. The culturing was done on chocolate agar, blood agar, and brain-heart infusion broth. A Vitek® 2 system (bioMérieux, France) was used for identification and antibiotic susceptibility testing of the nasal swab and PD effluent specimens.
Definitions
A peritonitis episode was defined as a clinical presentation with a cloudy effluent or abdominal pain associated with a PD effluent WBC count of >100 cells/µL or a positive culture. All patients were treated for at least 2 weeks, and they initially received intraperitoneal vancomycin and amikacin empirically, with further therapy modified according to the culture results. Episodes with culture-confirmed Staphylococcus growth and the date of presentation, information about whether the patient was treated as an inpatient or outpatient, and the presenting PD WBC counts were included in this analysis. The infection rates were calculated as the total number of infectious episodes with an organism during the follow-up period divided by the dialysis-years' time at risk, and they were expressed as the number of episodes per year [15]. Exit site infections were diagnosed clinically and defined based on the presence of purulent drainage, with or without skin erythema, at the catheter-epidermal interface. Tunnel infections were diagnosed clinically or using sonographic studies, and they were defined based on the presence of erythema, edema, or tenderness over the subcutaneous pathway [15]. Both infection types were referred to as catheter infections. The participants were classified as S. aureus nasal carriers if at least one culture from the monthly nasal swabs was positive for S. aureus, and they were classified as non-carriers if none of the cultures from the monthly nasal swabs was positive for S. aureus during follow-up. The S. aureus nasal carriers were further classified as intermittent S. aureus carriers if only one nasal culture was positive for S. aureus during follow-up or as persistent carriers if more than one monthly nasal culture was positive for S. aureus.
Mupirocin exposure
Exposure to mupirocin during the study was determined through evaluation of the electronic hospital database at the end of the study period for instances where mupirocin was dispensed during each patient's follow-up period. The date of the first documented prescription of mupirocin and the number of months prescribed were recorded for individual patients.
Statistical analysis
The continuous variables were expressed as the mean ± standard deviation (SD) or median and interquartile range (IQR), and they were compared using Student's t-test or the Wilcoxon-Mann-Whitney test, as appropriate. The proportions and categorical variables were compared using the χ² test or Fisher's exact test, as appropriate.
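To make the infection-rate definition above concrete, the following Python sketch computes episodes per person-year and a rate ratio. It is illustrative only: the study used Stata and the Mantel-Haenszel method, whereas this sketch uses a simpler Wald-type CI on the log rate ratio, and all numbers below are hypothetical rather than taken from the study data.

# Hedged sketch: infection rate as episodes per person-year at risk, plus a
# rate ratio with a Wald-type 95% CI on the log scale. Illustrative only; the
# study itself used Stata with the Mantel-Haenszel method, and the numbers
# below are hypothetical.
import math

def infection_rate(episodes: int, person_years: float) -> float:
    """Total episodes during follow-up divided by dialysis-years at risk."""
    return episodes / person_years

def rate_ratio_with_ci(e1: int, py1: float, e2: int, py2: float, z: float = 1.96):
    """Rate ratio of group 1 vs. group 2 with an approximate 95% CI."""
    rr = infection_rate(e1, py1) / infection_rate(e2, py2)
    se_log_rr = math.sqrt(1 / e1 + 1 / e2)  # Poisson approximation for log(RR)
    return rr, rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr)

# Hypothetical example: 6 episodes over 30 person-years vs. 3 over 40.
rr, lo, hi = rate_ratio_with_ci(6, 30.0, 3, 40.0)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")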
The Mantel-Haenszel method was used to calculate rate ratios and to compare incidence rates in the two study cohorts and the subgroups divided according to CD4+ cell count and S. aureus nasal carriage. Logistic regression was used to assess the relationship between HIV and the detectability of S. aureus in the nares. A multivariable logistic regression model that included age, race, gender, smoking, alcohol use, diabetes, body mass index (BMI), baseline serum albumin, baseline CRP level, baseline CD4+ cell count, Tenckhoff catheter insertion parameters (site and method, whether laparoscopic or percutaneous), mupirocin exposure to the nose, type of primary residence (urban vs. rural), highest education level, employment (employed vs. unemployed), number of total peritonitis episodes experienced, and total number of days spent as an inpatient in hospital during follow-up was used to determine whether HIV independently predicts the detection of S. aureus in the nares. Cox proportional hazard analysis was used to estimate the associations between HIV infection and the peritonitis outcome events due to Staphylococcus species, S. aureus, and CNS, respectively. Multivariable Cox proportional hazard analysis was used to identify independent predictors of each staphylococcal peritonitis event type. The covariates included in all Cox models for the peritonitis outcome variables were age, race, gender, smoking, diabetes, BMI, baseline serum albumin, baseline CD4+ cell count, Tenckhoff catheter insertion parameters (site and method), mupirocin exposure, type of primary residence, highest education level, and employment. In the Cox model for Staphylococcus species peritonitis, further covariates of Staphylococcus species nasal carriage and catheter infection were added. In the Cox model for S. aureus peritonitis, additional covariates of S. aureus nasal carriage and catheter infection were included. In the Cox model for CNS peritonitis, additional covariates of CNS nasal carriage and Staphylococcus species catheter infection were included. All the analyses were performed using Stata, version 15.0 (StataCorp LP, College Station, TX), and the significance level was set at p < 0.05.
Patients' characteristics
The study population of 119 CAPD patients included 59 HIV-negative and 60 HIV-positive patients with a median age of 39 years (IQR: 29-49 years) and 34 years (IQR: 30-41.5 years), respectively (p = 0.207). Fifty-two percent of the HIV-positive patients were either newly diagnosed with HIV or had recently been started on ART, less than six months before Tenckhoff catheter insertion. Sixty percent of the HIV-positive patients had a suppressed viral load of <150 copies/mL, which was the hospital laboratory assay's limit at the time of enrollment. While the median baseline viral load was 4,229.5 copies/mL (IQR: 817-88,294.5 copies/mL) for the patients with detectable viral loads, the median fell below the detectable limit (IQR: <150-2,284.5 copies/mL) when the patients with undetectable viral loads were included. The characteristics of the study population are outlined in Table 1.
Study dropout
After 18 months, 64.4% (38 of 59) of the HIV-negative patients and 33.3% (20 of 60) of the HIV-positive patients were alive with patent catheters (p = 0.001).
Twenty-two percent (13 of 59) of the HIV-negative … Two HIV-negative and three HIV-positive patients were lost to follow-up due to live related renal transplantation, improved renal function, or opting for private hemodialysis (Supporting Information Table 1). Coagulase-negative staphylococci peritonitis rates were 0.089 episodes/person-years in the HIV-negative cohort and 0.435 episodes/person-years in the HIV-positive cohort (AHR 7.64, CI 2.18-26.82, p = 0.001). S. aureus peritonitis rates were 0.129 episodes/person-years in the HIV-negative cohort and 0.136 episodes/person-years in the HIV-positive cohort (RR 1.05, CI 0.39-2.82, p = 0.920). HIV was not significantly associated with S. aureus peritonitis events on both univariate and multivariable analysis (Table 4).
Table 4 notes: each Cox model (for Staphylococcus species, coagulase-negative staphylococci, and S. aureus peritonitis outcomes, respectively) included HIV, age, race, gender, smoking, diabetes, body mass index, baseline serum albumin, primary residence, highest education level, employment, baseline CD4 count, Tenckhoff catheter insertion site, Tenckhoff catheter insertion method (laparoscopic vs. percutaneous), the corresponding nasal carriage and catheter infection variables, and exposure to topical mupirocin at the exit site; one patient record was excluded from analysis due to missing data on primary residence, education, and employment. BMI: body mass index; CI: confidence interval; CD: cluster of differentiation; HR: hazard ratio; HIV: human immunodeficiency virus.
The S. aureus catheter infection rates were 0.199 episodes/person-years for the S. aureus nasal carriers and 0.056 episodes/person-years for the non-carriers (RR 3.57, CI 1.10-11.59, p = 0.024).
Mupirocin exposure
Eighty-six percent (51 of 59) of the HIV-negative cohort had mupirocin ointment prescribed for exit site application compared to 60.0% (36 of 60) of the HIV-positive cohort (p = 0.001), due to the earlier recruitment of a greater proportion of HIV-negative patients when hospital policy favored routine mupirocin exit site prophylaxis. Furthermore, mupirocin ointment was prescribed for a median of 5 (2-9) months in the HIV-negative cohort compared to 3 (2-6.5) months in the HIV-positive cohort (p = 0.140). Fourteen percent (8 of 59) of the HIV-negative cohort had mupirocin nasal spray prescribed during follow-up compared to 15.0% (9 of 60) in the HIV-positive cohort (p = 0.822).
Discussion
This prospective cohort study evaluated the effects of HIV infection on S. aureus nasal carriage and CAPD-associated staphylococcal infective outcomes in patients with ESRD who required dialysis.
HIV infection was associated with significantly higher MRSA nasal carriage and staphylococcal peritonitis rates, and HIV-positive patients with a CD4 count ≥350 cells/µL had significantly higher S. aureus nasal colonization rates. However, our study failed to demonstrate any significant differences with respect to catheter infection rates in relation to HIV infection. CD4+ cell count <200 cells/µL in the HIV-positive cohort and S. aureus nasal carriage were found to independently predict S. aureus peritonitis. The difference between the HIV-positive cohort and HIV-negative cohort in relation to the S. aureus nasal carriage rate was not statistically significant, but HIV-positive patients with a baseline CD4+ count ≥350 cells/µL were associated with significantly higher S. aureus nasal colonization compared to the HIV-negative cohort. Furthermore, HIV and baseline CD4+ cell counts were found to independently predict the detection of nasal S. aureus in our multivariable logistic regression model, and S. aureus nasal carriage was detected significantly earlier during follow-up in the HIV-positive cohort compared to the HIV-negative cohort. S. aureus nasal colonization rates were lower in the HIV-positive patients with CD4+ counts <350 cells/µL compared to those with CD4+ counts ≥350 cells/µL. This observation may have been influenced by the increased dropout rate in the HIV-positive cohort attributed to death, which predominantly affected the HIV-positive subgroup with CD4+ counts <350 cells/µL. The decreased observation times may have undermined the detection of S. aureus in this subgroup compared to the longer observation times among those with higher CD4+ counts and those in the HIV-negative cohort. These results suggest an association of HIV with S. aureus nasal colonization that may be better defined in a larger, more adequately powered study. The MRSA nasal carriage rate in the HIV-positive cohort was significantly higher than that in the HIV-negative cohort, and it was much higher than the pooled MRSA nasal carriage rate estimate of 6.9% (95% CI 4.8-9.3) reported in a meta-analysis of HIV-positive non-CAPD populations [16]. Furthermore, HIV-positive status and baseline CD4+ cell count were significant factors associated with MRSA nasal colonization on both univariate and multivariable logistic regression analysis. These significant relations highlight the increased risk of MRSA colonization of the nares associated with HIV infection, which raises concerns about the subsequent development of more serious MRSA infections. Furthermore, the significantly higher proportion of episodes of methicillin-sensitive S. aureus peritonitis in the HIV-negative cohort than in the HIV-positive cohort highlights the relatively low burden of methicillin-resistant infections in the HIV-negative group. Infection with HIV has been positively linked to an increased risk of MRSA colonization and subsequent infection in the general population [17][18][19]. Colonization and infection by MRSA, which is commonly acquired through nosocomial contact, have been associated with exposure to antibiotics, prior hospitalization, illicit drug use, chronic skin disease, and risky lifestyle behaviors in the general population [18,20]. Healthcare- and antibiotic-associated exposures may have increased the risk of MRSA colonization in our HIV-positive CAPD cohort; however, these variables were not directly measured or controlled for in our study design.
Community-associated acquisition could also have contributed to the earlier detection of MRSA colonization in our HIV-positive cohort, as compared to the later detection of MRSA colonization in the HIV-negative cohort, suggesting a more traditional nosocomial-associated acquisition in the HIV-negative cohort [21]. The HIV-positive cohort had a higher staphylococcal peritonitis rate than the HIV-negative cohort because of the significantly increased CNS peritonitis incidence in the former compared with that in the latter, which highlights the increased vulnerability of HIV-positive patients to touch contamination. Coagulase-negative staphylococci are common skin commensals found in many parts of the body (nose, axilla, groin, etc.) to various degrees [22,23]. They are also the most commonly isolated pathogens causing peritonitis in patients on CAPD [24]. Factors associated with the development of CNS peritonitis are access to the peritoneum via the catheter, bacterial characteristics allowing evasion of host defenses, immune depression induced by conventional PD fluids, and inherent host immune system dysfunction [22,25,26]. In this study, HIV was found to increase the risk of developing CNS peritonitis, reflecting the immunosuppressive state of HIV and the resultant impaired ability of local peritoneal immune defense mechanisms to combat the contaminating CNS organisms. Furthermore, CNS nasal carriage was found to be significantly increased in the HIV-negative cohort compared to that in the HIV-positive cohort, suggesting HIV-associated changes to the typical body commensal patterns favoring organisms such as MRSA, which are associated with greater healthcare exposure. However, CNS nasal carriage was not significantly associated with CNS peritonitis. Previous reports have also shown a disconnect between CNS strains colonizing the body and those causing infection, as peritonitis-cultured strains tended to differ from those isolated from other body sites before infection [22,23]. On multivariable analysis, HIV and diabetes were prominent independent predictors for the development of staphylococcal peritonitis, reinforcing the suggested risks attributed to impaired immunity. Furthermore, HIV and BMI were prominent independent predictors for the development of CNS peritonitis. BMI was found to be a protective predictor for this type of peritonitis, a result also reported in another South African study by Isla et al. [27], in which BMI was protective for all-cause peritonitis. These results may suggest a nutritional protective effect, in contrast to some peritonitis publications that have reported an increased peritonitis risk associated with higher BMI [28][29][30]. This adverse risk associated with lower BMI probably reflects increased hazards due to undernutrition and the resultant immunocompromise hindering effective containment of advancing CNS organisms contaminating the CAPD system. Both these South African studies were likely influenced by a national rationing policy favoring ESRD patients with lower BMI levels (<35 kg/m²) for access to scarce renal replacement therapy, thus limiting the ability to detect possible hazards associated with obesity [31]. The S. aureus peritonitis rate was not affected by HIV infection status, as reflected by similar peritonitis rates between the two cohorts. However, HIV-positive patients with CD4+ counts <200 cells/µL were noted to have a six-fold higher S. aureus peritonitis rate compared to the HIV-negative cohort.
Moreover, this HIV-positive sub-group was associated with increased hazards for developing S. aureus peritonitis events in both univariate and multivariable Cox proportional hazard models, suggesting a peritonitis risk profile influenced by changes in immunological state and likely resulting from impaired local immunity [25]. Further, on multivariable analysis, diabetes and S. aureus nasal carriage were found to be other independent predictors of S. aureus peritonitis. These associations support a role for decreased immunity and S. aureus nasal carriage in the risk profile for S. aureus peritonitis. Compared with the HIV-negative state, HIV infection was associated with higher S. aureus and all-cause catheter infection rates. However, these differences were not statistically significant, which was probably a consequence of the lower numbers of these outcomes. The study's protocol did not restrict the use of mupirocin prophylaxis, either at the exit sites or in the nares, for ethical reasons, because these prophylactic measures significantly reduce S. aureus-associated catheter infections [32]. However, mupirocin was not uniformly used by treating physicians. Concerns about resistance led to mupirocin being withdrawn from use in general CAPD by the hospital's therapeutics committee midway through the study period, and it was thereafter reserved for nasal decolonization of S. aureus. Nevertheless, the sporadic use of mupirocin likely suppressed the incidence of exit site and tunnel infections. The main limitation of our study is that it was a single-center observational study, which limits the causation inferences that can be drawn. The sample size may have been too small for differences in the S. aureus nasal colonization and catheter infection outcomes to be fully appreciated and may have led to wide confidence intervals in some observed associations. The reported outcomes were secondary outcomes in the parent study and were not the primary determinants of the sample size calculations. Furthermore, the disproportionately higher mortality rate in the HIV-positive cohort contributed to a significantly higher dropout rate and a significantly shorter observation time compared to the HIV-negative cohort. This observation-time bias may have resulted in an underestimation of the S. aureus nasal colonization, peritonitis, and catheter-associated infection rates in the HIV-positive cohort. However, it is not expected to have meaningfully altered the observed associations, such as the CNS peritonitis risk associated with HIV, as this kind of potential bias is likely to have led to an underestimation of observed associations rather than to have enhanced them. The differences in the S. aureus nasal colonization and infection rates require further investigation, including additional research on prophylactic measures. This study's findings suggest that HIV infection adversely influences MRSA nasal colonization and that it may increase the risk of CNS peritonitis. Differences in the S. aureus peritonitis and catheter infection rates in relation to HIV infection were not significant. However, S. aureus nasal carriage and a CD4+ cell count <200 cells/µL associated with HIV were shown to adversely influence the risk for S. aureus peritonitis. These observations contribute to our understanding of the resistance profiles of S. aureus colonizers and the staphylococcal organism patterns that are likely to cause infection, which may assist in guiding appropriate antibiotic therapy and prophylaxis.
2019-04-18T13:03:31.011Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "80deca55f10e68c64b80d6f53ccc91cc02925fca", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/0886022X.2019.1598433?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "80deca55f10e68c64b80d6f53ccc91cc02925fca", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
9695463
pes2o/s2orc
v3-fos-license
Mother-Child Interactions and Externalizing Behavior Problems in Preschoolers over Time: Inhibitory Control as a Mediator
Previous research has shown links between parenting and externalizing behavior problems in young children over time. Associations between inhibitory control, one of the executive functions, and externalizing behavior problems are widely established as well. Yet, the role of inhibitory control in the maintenance and change of externalizing behavior problems over time remains unclear. We examined whether inhibitory control could explain the link between mother-child interactions measured on a moment-to-moment timescale and preschoolers' externalizing behavior problems as reported by teachers. With a sample of 173 predominantly clinically referred preschoolers (76.9% boys), we tested a longitudinal model proposing that affective dyadic flexibility and maternal negative affect predict as well as interact in predicting hyperactive/impulsive behavior and aggressive behavior, with preschoolers' inhibitory control as a mediator. Our results provide support for this model for preschoolers' hyperactive/impulsive behavior, but not for aggressive behavior. Hence, inhibitory control is identified as a mechanism linking the content and structure of mother-child interactions to preschoolers' hyperactivity and impulsivity over time.
Electronic supplementary material The online version of this article (doi:10.1007/s10802-016-0258-1) contains supplementary material, which is available to authorized users.
Impulsivity, aggressive behavior, and noncompliance are the most frequently reported behavioral problems during early childhood (Keenan and Wakschlag 2004). These types of problems, also referred to as externalizing behavior problems, are the main reason for the clinical referral of preschoolers (Wilens et al. 2002). The presence of externalizing problems at an early age is predictive of maladjustment later in life (Denham et al. 2000). Despite the reported stability of these problems from preschool into the school-aged period (Keenan et al. 2011), recent findings point to changes in externalizing behaviors and related diagnoses (i.e., Attention Deficit Hyperactivity Disorder [ADHD], Oppositional Defiant Disorder [ODD], and Conduct Disorder [CD]) during this period as well. By identifying the mechanisms through which externalizing behavior problems develop over time, more specific directions could be provided for intervention programs aimed at reducing these types of problems in preschoolers and preventing the development of more persistent problems over time. In the present study we examined longitudinal links between mother-child interactions, inhibitory control, and preschoolers' externalizing behavior problems.
Inhibitory Control and Externalizing Behavior Problems
Executive functions in young children have increasingly gained attention in research on externalizing behavior problems (Schoemaker et al. 2013). Executive functions refer to the cognitive self-regulation of thought, action, and emotion (Séguin and Zelazo 2005). Generally, three different executive functions are identified, namely working memory, shifting, and inhibition (Miyake et al. 2000). In particular, inhibition is an executive function that is considered a requirement for successful self-regulation (Hofmann et al. 2012).
Although the terms originally stem from different fields, executive functions and effortful control seem to show many commonalities, and inhibition or inhibitory control is considered an important component of both executive functioning and effortful control (Zhou et al. 2012). In this study we use the term inhibitory control, which refers to processes that enable children to actively inhibit or override a dominant response and initiate a subdominant response. The ability to inhibit a dominant response that is incompatible with a child's goal is essential for successful self-regulation that develops rapidly during the preschool years (Olson et al. 2009). The capacity to self-regulate is considered a cornerstone for positive development (Shonkoff and Phillips 2000). Consistent with this view, preschoolers with ADHD or Disruptive Behavior Disorder (DBD) symptoms are found to have weaker inhibitory capacities compared to typically developing preschoolers (Monette et al. 2015; Schoemaker et al. 2013; Schoemaker et al. 2012). Yet, improvements in inhibitory control over time are more distinct in clinically diagnosed preschoolers with ODD/CD or ADHD compared to typically developing children, as they seem to catch up a part of their delay. It is unclear, however, whether these improvements in inhibitory control are related to a decrease in externalizing behaviors. In their systematic review, Van Lieshout et al. (2013) state that inhibitory control is unrelated to the developmental course of ADHD in children and adolescents, but relatively few studies involved younger children. Therefore, more longitudinal research is needed on the role of inhibitory control in preschoolers' externalizing behavior problems.
Inhibitory Control as a Mediator
Rather than merely linking inhibitory control to externalizing behavior problems, it is suggested that children's inhibitory control may be an important mechanism underlying the often reported link between parenting and preschoolers' externalizing behavior problems. Examples of parenting dimensions associated with externalizing problems are responsiveness (Johnston et al. 2002), pro-active parenting and parental anger (Denham et al. 2000), psychological and behavioral control (Aunola and Nurmi 2005), and parental hostility (Harold et al. 2013). However, considerably less information is available on how parenting is related to preschoolers' externalizing behavior problems over time (Johnston and Mash 2001). Inhibitory control might be key to better understanding this relation. Indeed, some longitudinal studies offer support for the role of inhibitory control in explaining the link between parenting and externalizing behavior problems. However, these studies have been conducted in school-aged children (e.g., Valiente et al. 2006) and adolescents (e.g., Eisenberg et al. 2005). Examining the role of inhibitory control in young children seems additionally relevant since executive functions undergo the most rapid development during young childhood (Zelazo and Müller 2002). Unfortunately, longitudinal evidence for the mediating role of young children's inhibitory control seems inconsistent across different studies in preschool-aged children (Eisenberg et al. 2010; Spinrad et al. 2007). These previous studies are limited, though, by the use of questionnaire measures by the same informant (e.g., mother) to assess both inhibitory control and externalizing behavior problems. Subsequently, Sulik et al.
(2015) are among the first to use independent methods for the different constructs measured over time. Based on coded parent-child observations, executive functioning tasks, and questionnaires, they report that preschoolers' executive functioning mediates the relation between early parenting and externalizing behavior problems (i.e., operationalized as conduct problems) in a large community sample.
The Present Study
The aim of the current study was to further examine the role of inhibitory control in linking parenting and externalizing behavior problems, but in a sample of predominantly clinically referred preschoolers. Second, we extended the work of Sulik et al. (2015) by examining hyperactive/impulsive behavior in addition to aggressive behavior, both of which are considered externalizing behavior problems. Despite reported similarities in inhibitory control in children with different externalizing diagnoses (i.e., ADHD, ODD/CD, or a combination), there appear to be differences as well (Schoemaker et al. 2012). For example, associations between inhibitory control and ODD/CD are more pronounced when motivational demands, such as reward and punishment, are high. This is true for adolescents (Fairchild et al. 2009), school-aged children (Matthys et al. 2004), and even for preschoolers (Schoemaker et al. 2012). Additionally, fewer studies have been conducted on the role of parenting in children's hyperactive/impulsive behavior as compared to aggression (Johnston et al. 2002; Stormshak et al. 2000). Therefore, we considered hyperactive/impulsive behavior and aggressive behavior as two separate constructs rather than one general construct of externalizing problems. Third, we used a micro approach in examining dyadic aspects of mother-child interactions. It has been argued that a dyadic interaction is more than just the sum of its parts, and therefore specific dyadic behaviors should be examined (Lunkenheimer and Leerkes 2015). Macro ratings are well-suited for capturing overarching constructs and taking the broader context of behavior into account (Hawes et al. 2013; Heyman et al. 2014) and can even incorporate specific dyadic behaviors (e.g., Kochanska et al. 2008), yielding valuable information to the field. In contrast to macro ratings, however, micro ratings capture the specific sequential relations that characterize interaction patterns (Hawes et al. 2013; Heyman et al. 2014). Micro ratings that capture behaviors as they occur in real time could therefore give a more detailed understanding of dyadic parent-child dynamics (Dishion et al. 2016; Hawes et al. 2013). Consistent with this view, moment-to-moment interaction patterns are thought to reflect the proximal engines of child development (Snyder and Stoolmiller 2002). Hence, children are assumed to develop and maintain externalizing behavior problems through their day-to-day, moment-to-moment interactions with others. Likewise, real-time interchanges are used by clinicians to improve family dynamics (Lunkenheimer et al. 2011). By applying a Dynamic Systems (DS) approach (Granic and Patterson 2006), we were able to identify mother-child interactions based on their affective content, but also by their structural, dyadic pattern. Therefore, a more fine-grained understanding of mother-child interactions and preschoolers' externalizing behavior problems could be obtained.
Maternal Negative Affect
Since mothers continue to fulfill the role of primary caregiver in current Western societies (Yeung et al.
2001), it can be assumed that preschoolers often interact with their mothers. In 1983, Maccoby and Martin already pointed out the relevance of studying affective behavior during interactions. Although instances of negative affect during mother-child interactions are common in the preschool years (Keenan and Wakschlag 2000), high levels of maternal negativity towards the child are related to externalizing behavior problems in young children (Cole et al. 2003; Denham et al. 2000; Rubin et al. 2003). Rueger et al. (2011) further propose that parental affect states during interactions may underlie the large variety of parenting dimensions. Effective parent training programs, aimed at reducing externalizing problems in young children, already focus on promoting a positive parent-child relationship through altering parents' affective responses (e.g., Webster-Stratton 2011). While these previous findings are important, research still requires moment-to-moment assessments to specifically capture parental affect during parent-child interactions and to obtain a more detailed understanding of its role in child development (Teti and Cole 2011). Inhibitory control is suggested to play a key role in explaining the link between maternal displays of affect and preschoolers' externalizing behavior problems. As argued by Hoffman (2000), for example, maternal negative affect is likely to produce affective overarousal in young children, which poses difficulties for using and developing higher-order cognitive processes such as inhibitory control. In addition to a diminished ability to learn, children might be less motivated to learn from interactions with mothers showing high levels of negative affect (Eisenberg et al. 2005). Concurrent links between the display of maternal negative affect towards children and children's maladjustment have indeed been explained through poor inhibitory control in preschoolers (Eisenberg et al. 2001), but more longitudinal research is still needed. Affective Dyadic Flexibility In addition to the content of mother-child interactions, interaction patterns can be identified by their dyadic structure. According to DS theory, a mother and child can be seen as a dyadic system during interactions. The system is self-organizing in the sense that it is characterized by recurrent patterns of behavior to which the mother and child are "attracted" (Granic and Patterson 2006). Therefore, a mother-child dyad tends to stabilize in only a subset of all behavioral patterns it can attain. This refers to the structure of a mother-child interaction ("how") rather than its content ("what"; e.g., affect). The structure of an interaction is often specified in terms of flexibility (vs. rigidity). Affective dyadic flexibility refers to the repertoire of affect states available to the dyad, the dyad's capacity to switch among different states, and the degree to which affect states are evenly distributed across all possible patterns a dyad can attain. Thus, affectively flexible dyads show a larger range of affect states, switch more among different states, and display more evenly distributed patterns compared to dyads that are low in affective flexibility (i.e., rigid). Those advocating a DS approach argue that the expression of all affect states, including negative ones, is adaptive (Granic et al. 2007). It is the ability of a dyad to flexibly switch among a large range of different patterns that is crucial, as such a dyad would also accommodate to contextual demands more easily (Thelen and Smith 1998).
Previous studies on children aged 5 years and older indeed support this notion. Affective dyadic flexibility during mother-child interactions is linked to fewer adjustment problems and specifically to fewer externalizing behavior problems (e.g., Hollenstein et al. 2004), even in clinically referred children (De Rubeis and Granic 2012; Granic et al. 2007). However, much less is known about the link between affective dyadic flexibility and adjustment in children younger than 5 years of age. In the few studies that have been conducted, the findings seem inconclusive. On the one hand, Lunkenheimer et al. (2013) show that lower dyadic flexibility is related to higher levels of problem behavior in 3.5-year-old children. On the other hand, in contrast to DS theory, two studies report more externalizing problem behavior in preschoolers when mother-child dyads are highly flexible in affect (Lunkenheimer et al. 2011; Van den Akker et al. 2013). These latter findings are in line with suggestions from research with mothers and their infants, who tend to show more negativity during the still-face paradigm when preceded by interactions with high levels of dyadic flexibility (Sravish et al. 2013). The inconsistency between studies may exist because Lunkenheimer et al. (2011) and Van den Akker et al. (2013) used multiple indicators of affective dyadic flexibility and examined externalizing problems specifically, whereas Lunkenheimer et al. (2013) only used one indicator of affective dyadic flexibility in predicting more general behavior problems (i.e., a combined measure of internalizing, externalizing, and child's negativity). Hence, the limited evidence available actually seems to indicate that, in contrast to DS theory expectations, higher levels of affective dyadic flexibility during mother-child interactions could be detrimental for preschoolers in terms of the development of externalizing behavior problems. Identifying mechanisms through which affective dyadic flexibility is related to externalizing problem behaviors in preschoolers could help to understand this relation more thoroughly. Children's inhibitory control might be a mechanism that links higher levels of affective dyadic flexibility to higher levels of externalizing problem behavior. During the preschool years, parents act as external regulators of their children's affect (Bernier et al. 2010; Calkins et al. 1998), which enables children to gradually develop the ability to self-regulate. Because more affectively flexible mother-child interactions are also less predictable and less stable, this might hinder children from acquiring the adequate inhibitory control skills that are needed for the development of children's self-regulation (Hofmann et al. 2012; Sravish et al. 2013), eventually resulting in more externalizing behavior problems. Hypotheses The aim of our study was to examine whether preschoolers' inhibitory control mediates the relation between mother-child interactions (both their content and structure) and hyperactive/impulsive behavior and aggressive behavior. Our first two hypotheses were that (1) higher levels of maternal negative affect and (2) higher levels of dyadic flexibility both relate to lower levels of preschoolers' inhibitory control 9 months later, which in turn predict higher levels of hyperactive/impulsive behavior and aggressive behavior another 9 months later, when controlling for initial externalizing behavior problems.
In addition to the proposed main effects of maternal negative affect and affective dyadic flexibility, results by Lunkenheimer et al. (2011) suggest that there is an interplay between the content and the structure of mother-child interactions in explaining externalizing problems as well. Hence, although there are benefits to examining these characteristics of interaction patterns separately, it has also been proposed that maternal affect states should be interpreted within the structure in which they are embedded (Lunkenheimer et al. 2013). Therefore, we explored whether (3) maternal negative affect and affective dyadic flexibility interact in predicting preschoolers' inhibitory control 9 months later, affecting hyperactive/impulsive behavior and aggressive behavior in preschoolers another 9 months later. A conceptual representation of our proposed model is depicted in Fig. 1. Method Participants In the current study we used a sample of 173 mother-child dyads, including clinically referred children (78%) and typically developing children (22%). The sample is part of a larger, longitudinal project (Schoemaker et al. 2012; Schoemaker et al. 2014), including three assessments with a 9-month interval. Children were referred by general practitioners, well-baby clinics, and pediatricians for clinical and psychological assessment to the Outpatient Clinic for Preschool Children with Behavioral Problems, Department of Child and Adolescent Psychiatry, University Medical Center Utrecht (UMCU). For inclusion in the study, referred children had to score at or above the 90th percentile on the Attention Problems scale or Aggression scale of either the Child Behavior Checklist (CBCL/1.5-5) or Caregiver-Teacher Report Form (C-TRF/1.5-5; Achenbach and Rescorla 2000). Typically developing children, who were recruited at elementary schools and daycare centers, were excluded when they scored at or above the 90th percentile on either of these scales. From the original sample (N = 251), the following children were excluded to form the current sample: children for whom observational data were not available due to missing or damaged materials (11.9%); children who were observed in interaction with their father (6.0%) or grandmother (0.4%) instead of their mother; children diagnosed with a disorder other than ADHD, ODD, or CD either at the first or third assessment (2.0%); children with an IQ below 80 (1.2%), as assessed by the average score on Raven's Coloured Progressive Matrices (Raven et al. 1998) and the Peabody Picture Vocabulary Test-III-NL (Dunn and Dunn 2005; Schlichting 2005); children who dropped out of the study after the first or second assessment (6.0%); children with missing C-TRF/1.5-5 scores at the 18-month follow-up (3.2%); and children who had not participated in at least 2 out of 3 inhibitory control tasks at the 9-month follow-up (0.8%). There were no significant differences between the sample used in this study and children who either dropped out of the study or were excluded because of missing C-TRF/1.5-5 scores at T3 or inhibitory control scores at T2, in terms of age, sex, IQ, hyperactive/impulsive behavior as reported by teachers, and inhibitory control scores, all measured at T1. In the current study sample (N = 173; 76.9% boys), children's ages ranged from 42 to 66 months (M = 54.76, SD = 7.63) at T1, from 50 to 76 months (M = 63.72, SD = 7.68) at T2, and from 59 to 86 months (M = 72.87, SD = 7.62) at T3.
One hundred and nine of the children were diagnosed with ADHD (n = 44), ODD/CD (n = 27), or both (n = 38). Children were diagnosed on the basis of strict application of the DSM-IV-TR criteria (American Psychiatric Association 2000), as further described in Schoemaker et al. (2014). Child psychiatrists and clinical child psychologists reached consensus using the following diagnostic information: (1) scores on the Attention Problems scale and the Aggression scale of the CBCL/1.5-5 and the C-TRF/1.5-5 (Achenbach and Rescorla 2000); (2) symptoms reported on the Kiddie Disruptive Behavior Schedule (Keenan et al. 2007); (3) scores on the Children's Global Assessment Scale (C-GAS; Shaffer et al. 1983); and (4) the child's behavior as observed with the Disruptive Behavior Diagnostic Observation Schedule (Bunte et al. 2013a; Wakschlag et al. 2008a; Wakschlag et al. 2008b). Another 26 referred children, not initially diagnosed but scoring above the 90th percentile on either the Attention Problems scale or the Aggression scale (Achenbach and Rescorla 2000), as well as 38 typically developing children, were also part of the study in order to increase the variability in outcome measures. Fig. 1 Conceptual representation of the proposed model. T1 = first assessment; T2 = 9-month follow-up; T3 = 18-month follow-up; Flex*Neg = interaction of affective dyadic flexibility and maternal negative affect. H1, H2, and H3 correspond with our first, second, and third hypotheses in the text, respectively. With regard to the mothers' education levels, 1.7% had no completed education, 1.7% completed primary school, 33.0% completed high school, 28.9% completed vocational school, and 34% completed (applied) university. The fathers' education levels followed a similar distribution. Prior to the study none of the children received medication for their behavioral problems. After the first assessment, 58 children (33.5%) received psychopharmacotherapy: most were prescribed methylphenidate (n = 54), one was prescribed risperidone (n = 1), and three switched from methylphenidate to atomoxetine after the second assessment (n = 3). If children received methylphenidate, parents were asked to withhold their child's medication for 48 h prior to the follow-up assessment. Also, 97 families (56.1%) received a form of psychosocial treatment: individual parent counseling at home (n = 26) or at the outpatient clinic (n = 72) and/or participation in the Incredible Years Parent Program (n = 7; Webster-Stratton 2011). Procedure Each child's intellectual functioning and executive functions were assessed over the course of a single morning; a fixed order of tasks was maintained, and the assessment lasted about 2 h, including breaks (Schoemaker et al. 2012). Executive functioning tasks were administered on a computer. The assessment also included a mother-child observation (i.e., DB-DOS; Bunte et al. 2013a; Wakschlag et al. 2008a, 2008b) and a parent interview (i.e., K-DBDS; Bunte et al. 2013b; Keenan et al. 2007). Parents and teachers were asked to fill in questionnaires. The intellectual assessment was only administered during the first session. The DB-DOS took place at both the first and third assessments. All other measures were administered three times with an interval of 9 months. Written informed consent was obtained from parents before participation in the study. The study protocol was approved by the Medical Ethical Review Committee of the UMCU. Parents received a nominal financial compensation for their participation and children received two small gifts.
Measures Affective Dyadic Flexibility Observations of the mother-child interactions recorded at the first assessment were used to measure affective dyadic flexibility. Interactions were initially taped in order to administer the DB-DOS (Wakschlag et al. 2008a, 2008b). The DB-DOS is a 50-min structured laboratory observation, divided into three interactional settings: one parent context followed by two examiner contexts. Our focus was on the first part of the observation, in which the mother and child interacted during tasks that were designed for active parent engagement. During the interaction, attractive toys were available on the table behind the mother and child. Mothers were instructed that their children were not allowed to touch or play with the toys, creating a possible stressor. Mothers also had to instruct children what task to do and when to switch tasks (i.e., based on a bell rung by the examiner behind a one-way mirror). In total, 7 min were coded on a moment-to-moment timescale (i.e., every 5 s), including 3 min of coloring, 2 min of clean-up, and 2 min of puzzling. This way we could capture the characteristics of mother-child interaction over a range of different situations. Based on facial expressions and voice tone, interactions were coded using the following affect codes of the Relationship Affect Coding System (RACS; Peterson et al. 2009): (1) Anger/Disgust: open anger, irritation/constrained anger, or expressions of being repulsed and disgusted by something someone has said or done. (2) Distress: decrease in energy and a passive, resigned countenance; it may also resemble fear, sound like whining, or appear as sadness (e.g., crying). (3) Ignore: children turning away from their mother and disregarding her directions; mothers paying no attention to their children's pleas for attention, rewards, or social interaction. (4) Validation: actively communicating that he/she is listening, tracking, and engaged in what the other person is saying or doing; also, compliments in combination with a physical orientation towards the other person and a display of positive affect. (5) Positive affect: the display of happiness and surprise attributes (e.g., caring, laughter, enjoyment), characterized by a general appearance of positive emotion. (6) Neutral: non-emotional in both content and voice tone. Both the affect state of the mother and that of the child were coded by the first author and a trained graduate student. Both coders were unaware of children's symptomatology or diagnosis. They showed good inter-rater reliability, with an agreement rate of 85.0% and a weighted κ of 0.62 (Sim and Wright 2005), based on double coding of 13.9% of the total amount of coded data. All possible affect states a system can attain were represented by a 6-by-6 state space grid (SSG; Hollenstein 2007), using the software program GridWare 1.15a (Lamey et al. 2004). An SSG allows for the visualization and modelling of dyadic interaction patterns as they unfold on a moment-to-moment timescale. The child's affect states are plotted along the y-axis and the mother's along the x-axis. As a result, the trajectory made up of sequential dyadic states (i.e., the combination of the mother's and the child's affect states represents a unique dyadic state) can be mapped onto the grid.
Based on previous studies, affective dyadic flexibility encompassed three measures: (1) the range of affect states visited by dyads (range), (2) the average number of transitions between states per minute (transitions), and (3) the average of all individual cell mean durations (duration entropy; Hollenstein 2007). A high level of flexibility is characterized by a large range of affect states, a high number of transitions, and high levels of duration entropy (i.e., a more even distribution of time spent in different affect states). Two examples (i.e., a low- versus a high-flexibility dyad) for each measure are depicted in Fig. 2. Maternal Negative Affect The observations were also used to measure the total amount of maternal negative affect, for which Anger/Disgust, Distress, and Ignore were identified as negative codes. The number of events in which the mother displayed any type of negative affect was summed, divided by the total number of coded events, and then multiplied by 100. This resulted in a percentage of negative affect displayed by the mother in each mother-child dyad. Inhibitory Control Children's inhibitory control was measured at the second assessment through three executive function tasks: Shape School Inhibit, Modified Snack Delay, and Go-No-Go (Schoemaker et al. 2012). In the computerized Shape School Inhibit task, children were asked to name the color of cartoon figures with happy faces, but to suppress this color naming when a cartoon with a frustrated or sad face appeared. The number of correct answers was divided by the total number of 18 trials. The Modified Snack Delay is a relatively newly developed task that combines the motivational aspect of the original Snack Delay paradigm (Kochanska et al. 1996) with the motor-inhibitory control demands of the NEPSY Statue task (Korkman et al. 1998). While being videotaped, children were told to stand still like a snowman while placing both hands on a mat, without talking or moving. A bell and a glass with a treat underneath were placed in front of the child. The examiner told the child that they could move again and eat the treat when the examiner rang the bell. The task lasted for 4 min, during which the child was progressively distracted by various activities of the examiner, such as dropping a pencil and knocking under the table, culminating in the examiner leaving the room for 90 s. Trained raters rated the children's hand movements every 5 s using three categories (0 = no movement, ½ = some movement, 1 = lots of movement) for every event. In the computerized Go-No-Go task children had to press a button when a fish appeared on their screen (i.e., Go stimuli, 75%), but they needed to suppress the urge to press whenever a shark appeared (i.e., No-Go stimuli, 25%). Incorrect No-Go trials were subtracted from the number of correct Go trials; thus, a higher score indicates a better performance on the task. Previous research reports adequate test-retest reliability (0.71) for the Shape School Inhibit task; the Modified Snack Delay and the Go-No-Go both showed good test-retest reliability (>0.80; Schoemaker et al. 2012). For the purpose of the current study, inhibitory control measured at the second assessment was represented by a latent variable based on the three executive functioning tasks described above.
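For readers who want to see how these observational indices follow from the coded 5-s event stream, the sketch below computes the three flexibility measures and the maternal negative-affect percentage from a list of dyadic affect states. It is illustrative only: the variable names are ours, the third measure literally implements "average of all individual cell mean durations" as described above, and GridWare's exact algorithms may differ.

```python
# Illustrative computation of SSG-based flexibility measures and maternal
# negative affect from a coded dyadic event sequence (5-s epochs).
# Each event is a (mother_affect, child_affect) pair; names and the
# simplified duration measure are hypothetical, not GridWare's algorithms.
from collections import defaultdict

NEGATIVE = {"anger_disgust", "distress", "ignore"}
EPOCH_SEC = 5.0  # each coded event covers 5 seconds

def flexibility_measures(events):
    """events: ordered list of (mother_code, child_code) dyadic states."""
    # (1) Range: number of distinct cells of the 6x6 state space grid visited.
    state_range = len(set(events))

    # (2) Transitions per minute: count changes of dyadic state.
    transitions = sum(1 for a, b in zip(events, events[1:]) if a != b)
    total_minutes = len(events) * EPOCH_SEC / 60.0
    transitions_per_min = transitions / total_minutes

    # (3) Mean cell duration: duration of each visit, averaged per cell,
    # then averaged over visited cells; shorter mean durations go with
    # higher flexibility.
    visit_durations = defaultdict(list)
    run_state, run_len = events[0], 1
    for state in events[1:]:
        if state == run_state:
            run_len += 1
        else:
            visit_durations[run_state].append(run_len * EPOCH_SEC)
            run_state, run_len = state, 1
    visit_durations[run_state].append(run_len * EPOCH_SEC)
    cell_means = [sum(v) / len(v) for v in visit_durations.values()]
    mean_cell_duration = sum(cell_means) / len(cell_means)

    return state_range, transitions_per_min, mean_cell_duration

def maternal_negative_affect(events):
    """Percentage of events in which the mother shows any negative code."""
    neg = sum(1 for mother, _ in events if mother in NEGATIVE)
    return 100.0 * neg / len(events)
```

In this sketch a more flexible dyad shows a larger range, more transitions per minute, and shorter mean cell durations, matching the verbal definitions given above.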
Externalizing Behavior Problems Preschoolers' externalizing behavior problems were measured at the first and third assessments using the C-TRF/1.5-5 Attention Problems scale (9 items, Cronbach's α = 0.90), which we refer to as hyperactive/impulsive behavior since most items refer to hyperactivity and impulsivity, and the Aggression scale (25 items, Cronbach's α = 0.96). Kindergarten and daycare teachers reported on children's externalizing problems using a 3-point scale (0 = not true, 1 = somewhat/sometimes true, 2 = very/often true; Achenbach and Rescorla 2000). T-scores on the Attention Problems scale and Aggression scale represented the dependent variables. Data Analytic Plan The hypothesized model was tested using a path analysis in Mplus 7.4 (Muthén and Muthén 2015). A maximum likelihood estimator with robust standard errors (MLR) was used to account for the non-normally distributed data. Testing the hypothesized model included several steps. First, the measurement models for both inhibitory control and affective dyadic flexibility were tested. The factor scores of affective dyadic flexibility were saved in order to compute an interaction term with maternal negative affect for subsequent analyses. Centered scores were used to compute the interaction term. Second, the model fit of the hypothesized mediation model was examined. This model proposed that maternal negative affect, affective dyadic flexibility, and their interaction at T1 would predict child inhibitory control at T2, which would in turn affect child hyperactive/impulsive behavior and aggressive behavior at T3. In this model, we also included possible direct paths from the predictors at T1 to the dependent variables at T3, in order to circumvent possible bias in the estimation of conditional indirect effects (Hayes and Preacher 2013). We controlled for initial hyperactive/impulsive behavior and aggressive behavior at T1. Also, received medication (yes/no) and psychosocial treatment (yes/no) after the first or second assessment were entered as control variables. Because inclusion of these latter control variables did not alter the pattern of our findings, they were omitted from the analyses. We evaluated the model fit with the Comparative Fit Index (CFI), the Tucker-Lewis Index (TLI), and the Root Mean Square Error of Approximation (RMSEA). According to Byrne (2012), CFI and TLI > 0.90 represent an acceptable model fit, with > 0.95 indicating a good fit for both indices; an RMSEA value < 0.08 indicates an acceptable fit, and < 0.05 a good model fit. To determine the significance of the estimates, an α-level of 0.05 was used. Third, we ran our model again using Bayesian estimation as a robustness check of the indirect effects. Because computing indirect effects involves multiplying (assumedly) normally distributed estimates, whose product is itself not normally distributed, standard methods can yield inaccurate confidence limits and significance tests (MacKinnon et al. 2004). Bayesian estimation has advantages in validating indirect effects in studies with relatively small sample sizes in comparison to other methods (Yuan and MacKinnon 2009). Since the analysis using a Bayesian estimator yielded similar results regarding the direction of the indirect effects, the specifications and results of our Bayesian analysis are shown in the Supplementary material. Descriptive Statistics Descriptive statistics (means, standard deviations, and intercorrelations) for each of the study variables are depicted in Table 1.
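The concern about non-normal products of coefficients can be made concrete with a nonparametric bootstrap of the indirect effect a*b. The sketch below is a generic illustration on simulated data, not the Mplus MLR path analysis or the Bayesian estimation actually used in the study; all names and numbers are ours.

```python
# Percentile-bootstrap confidence interval for an indirect effect a*b,
# illustrated on simulated data (X -> M -> Y). Generic sketch only.
import numpy as np

rng = np.random.default_rng(0)
n = 173
x = rng.normal(size=n)                  # predictor (e.g., negative affect)
m = 0.4 * x + rng.normal(size=n)        # mediator (e.g., inhibitory control)
y = 0.3 * m + rng.normal(size=n)        # outcome (e.g., hyperactivity)

def indirect_effect(x, m, y):
    # a-path: regress M on X; b-path: regress Y on M controlling for X.
    a = np.polyfit(x, m, 1)[0]
    X2 = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b

boot = []
for _ in range(5000):
    idx = rng.integers(0, n, size=n)    # resample dyads with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the bootstrap distribution of a*b is typically skewed, percentile intervals like these (or Bayesian credible intervals) are preferred over normal-theory tests for indirect effects.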
Associations were in the expected direction. Small negative correlations emerged between maternal negative affect at T1 and the inhibitory control measures at T2. All measures of affective dyadic flexibility at T1 were also negatively related to all inhibitory control measures at T2, with correlations varying from small to moderate. In turn, the inhibitory control measures at T2 were negatively associated with both hyperactive/impulsive behavior and aggressive behavior at T3. There were also small, positive relations between maternal negative affect and the flexibility measures on the one hand and externalizing problems at T1 and T3 on the other, indicating that higher levels of maternal negative affect and higher levels of affective dyadic flexibility at T1 relate to higher levels of both types of externalizing problems measured at T1 and T3. Correlations between T1 and T3 externalizing behavior problems revealed stability levels of moderate and strong effect sizes for hyperactive/impulsive behavior and aggressive behavior, respectively. Also noteworthy were the strong, positive associations between maternal negative affect and affective dyadic flexibility measured at T1. Model Test Measurement Model Standardized factor loadings of the constructs affective dyadic flexibility and inhibitory control were examined in order to validate the hypothesized measurement model. Affective dyadic flexibility showed adequate factor loadings of 0.83, 0.85, and 0.97 for range, transitions, and duration entropy, respectively. With a factor score determinacy of 0.98, our estimated factor scores were validated and could be saved for further analyses (Schreiber et al. 2006). The saved scores were used to compute the interaction term (affective dyadic flexibility * maternal negative affect). Regarding inhibitory control, adequate factor loadings of 0.53, 0.56, and 0.73 were obtained for the Shape School Inhibit, the Modified Snack Delay, and the Go-No-Go, respectively. Structural Equation Model Based on the Mplus modification indices, and because both the Shape School Inhibit and the Go-No-Go were computerized tasks, the measurement errors of these constructs were allowed to correlate in the final model. Another justification for this correlation is that both tasks require cool cognitive skills, in contrast to the more hot cognitive skills (Hongwanishkul et al. 2005) that are needed in the Modified Snack Delay task. The hypothesized model was found to fit the data adequately: χ²(17) = 22.63, p = 0.162, RMSEA = 0.04, 95% CI [0.00, 0.09], CFI = 0.98, TLI = 0.95. Parameter estimates, their standard errors, and associated betas are depicted in Table 2. First, the results supported the hypothesis that higher levels of maternal negative affect (H1) relate to lower levels of preschoolers' inhibitory control 9 months later, which in turn predict higher levels of hyperactive/impulsive behavior another 9 months later. The indirect effect of maternal negative affect on hyperactive/impulsive behavior was statistically significant (B = 0.36, SE B = 0.17, β = 0.22, p = 0.036). However, no support was found for such an indirect effect on aggressive behavior, as inhibitory control was not related to elevated levels of aggressive behavior (B = 0.09, SE B = 0.09, β = 0.06, p = 0.359). Second, similar results appeared for affective dyadic flexibility (H2): higher levels of affective dyadic flexibility were associated with lower levels of inhibitory control 9 months later, which was predictive of more hyperactive/impulsive behavior another 9 months later.
This indirect effect was statistically significant (B = 0.82, SE B = 0.41, β = 0.18, p = 0.046). Again, this was not the case for aggressive behavior (B = 0.21, SE B = 0.20, β = 0.05, p = 0.315). Whereas significant correlations existed between maternal negative affect and affective dyadic flexibility at the first assessment on the one hand and hyperactive/impulsive behavior and aggressive behavior at the third assessment on the other (see Table 1), these direct relations were non-significant in the structural equation model that included inhibitory control and controlled for initial behavior problems. Third, when inspecting the estimated coefficient of the interaction term between affective dyadic flexibility and maternal negative affect (H3), the results showed that the structure and the content of mother-child interactions indeed interact in predicting inhibitory control in preschoolers 9 months later. As shown in Fig. 3, higher levels of maternal negative affect were associated with lower levels of inhibitory control, but the relation was stronger for mother-child dyads who showed low levels of affective dyadic flexibility, that is, more affectively rigid dyads. The indirect effect of this interaction was also significant (B = −0.11, SE B = 0.05, β = −0.17, p = 0.036). Note (Table 2): significant estimates are in boldface; T1 = first assessment; T2 = 9-month follow-up; T3 = 18-month follow-up; Flex*Neg = interaction term between maternal negative affect and affective dyadic flexibility; paths a, b, and c' refer to the commonly used denotations for the different paths between the predictor, mediator, and outcome in mediation models; R² = 0.33 for hyperactive/impulsive behavior, R² = 0.36 for aggressive behavior. Discussion In the current study, we examined whether preschoolers' inhibitory control operates as a mechanism underlying the association between mother-child interactions and hyperactive/impulsive behavior and aggressive behavior over time. By using a DS approach we were able to explore the role of both the content of mother-child interactions (i.e., maternal negative affect) and their dyadic structure (i.e., affective dyadic flexibility). Hyperactive/Impulsive Behavior Our results indicated that the relation between maternal negative affect and children's hyperactivity/impulsivity was indeed mediated by preschoolers' inhibitory control, even after taking into account their initial levels of hyperactivity/impulsivity. More specifically, mother-child interactions characterized by higher levels of maternal negative affect were associated with lower levels of inhibitory control in preschoolers 9 months later, which ultimately related to elevated levels of hyperactive/impulsive behavior another 9 months later. Our findings are in line with the cross-sectional study by Eisenberg et al. (2001), suggesting that children are less able to learn in a negative environment and have trouble internalizing cognitive processes such as inhibitory control, resulting in more hyperactive/impulsive behavior problems. With our study we provide longitudinal support for this theory. Similar results were found for the indirect effect of affective dyadic flexibility: mother-child interactions with higher levels of affective dyadic flexibility were associated with lower levels of inhibitory control in preschoolers 9 months later, which ultimately related to more hyperactive/impulsive behavior as reported by teachers another 9 months later. This was after controlling for the initial hyperactivity/impulsivity of the preschoolers.
These findings emphasize the role of mothers as external regulators of affect during the preschool years (Bernier et al. 2010; Calkins et al. 1998), through which children can acquire the cognitive skills (i.e., inhibitory control) needed to gradually develop the ability to self-regulate. The relation between affective dyadic flexibility and children's adjustment, however, depends on children's age and their cognitive development, since older children seem to benefit from more flexible mother-child interactions (De Rubeis and Granic 2012; Granic et al. 2007; Hollenstein et al. 2004), whereas our findings suggest that preschoolers show less hyperactivity/impulsivity when mother-child interactions are rigid. Based on this finding, we believe that a change in the conceptualization of affective dyadic flexibility in mother-child dyads during the preschool years is appropriate. Rather than referring to affectively flexible versus rigid mother-child interactions, we suggest using the terms affective dyadic instability versus affective dyadic stability. Future research should examine whether there is a specific age or developmental stage at which affectively stable mother-child interactions switch from predicting fewer to predicting more adjustment problems in children and, more importantly, why this might be the case. Furthermore, affective dyadic instability was found to interact with maternal negative affect in predicting inhibitory control, and indirectly also predicted preschoolers' hyperactive/impulsive behavior. The negative association between maternal negative affect and children's inhibitory control is stronger for dyads that are highly stable. Thus, on the one hand the results support the idea that preschoolers would benefit from more affectively stable interactions with their mother; on the other hand, the detrimental effect of maternal negativity might become more pronounced when it occurs in highly stable, predictable interaction patterns between mothers and their preschool children. Although more research is needed, these findings emphasize the interplay between the content of a mother-child interaction and the structure in which it is embedded. Fig. 3 Plot of the interaction effect of affective dyadic flexibility x maternal negative affect on inhibitory control in preschoolers 9 months later, with lines for low (−2 SD), medium, and high (+2 SD) affective dyadic flexibility. Note. Simple slopes for −1 SD and the mean were significantly different from zero, b = −0.08, p = 0.011, and b = −0.05, p = 0.016, respectively; the simple slope for +1 SD was not significantly different from zero, b = −0.02, p = 0.167. Aggressive Behavior In contrast to hyperactive/impulsive behavior, our hypothesized predictors were related neither directly nor indirectly to preschoolers' aggressive behavior after controlling for initial aggressive behavior. One explanation could be that aggressive behavior may be too stable over time to reveal statistically significant predictors, as initial aggressive behavior scores were strongly correlated with aggressive behavior 18 months later, whereas hyperactivity/impulsivity showed a moderate association between the first assessment and the 18-month follow-up (see Table 1; Cohen 1988). A second explanation for our inability to predict aggressive behavior could be that regulating and inhibiting this behavior requires a different type of inhibitory control than the one needed to inhibit hyperactive/impulsive behavior.
Suppressing aggressive behavior could demand cognitive control in a more emotionally laden situation, whereas the inhibition of hyperactive/impulsive behaviors would require emotionally neutral cognitive processes. Previous research supports the need to differentiate between hot and cool cognitive aspects of inhibitory control (Hongwanishkul et al. 2005). As we already noted, associations between inhibitory control and aggressive behavior are more pronounced when motivational demands, such as reward and punishment, are high (Fairchild et al. 2009; Matthys et al. 2004; Schoemaker et al. 2012). In the current study, the inhibitory control tasks predominantly required cool cognitive skills, which could explain the inability of our model to predict preschoolers' aggressive behavior. Third, the lack of significant findings regarding preschoolers' aggressive behavior might be explained by the way aggressive behavior was measured in our study. Tremblay (2000) has already pointed out that a number of items on the CBCL/TRF Aggression scale (Achenbach and Rescorla 2000) do not specifically refer to aggressive behavior (e.g., wants attention, selfish). This may have affected the results. Conclusions Taken together, our findings are in line with the recent work of Sulik et al. (2015) and demonstrate that inhibitory control acts as a mechanism linking mother-child interactions to preschoolers' hyperactivity and impulsivity over time. That is, longitudinal associations between both the content and structure of mother-child interactions and later hyperactive/impulsive behavior problems were mediated by preschoolers' inhibitory control. Our use of SSGs (i.e., a micro approach) in unraveling mother-child interactions adds to the strength of the study, as it seems to be an improvement over and above the use of global measures (i.e., a macro approach). By disentangling the affective content from the affective dyadic structure in mother-child interactions, this study adds to previous knowledge by demonstrating that both characteristics are important for child development. Moreover, the role of maternal negative affect seems to depend on the structure of the mother-child interaction it is embedded in. This underscores the unique contribution of micro ratings in unraveling dyadic parent-child dynamics in relation to child development. Hence, based on independent measures, our findings provide support for a process model in which affectively stable mother-child interactions that are low in maternal negative affect promote young children's inhibitory control, which in turn reduces children's hyperactive/impulsive behavior problems. These results were found even when accounting for initial hyperactive/impulsive behavior problems and after controlling for children's medication intake and psychosocial treatment. This conclusion is based on a sample of predominantly clinically referred preschoolers, thus children who experience severe levels of hyperactivity and impulsivity. Our results also emphasize the importance of differentiating between hyperactive/impulsive behavior and aggressive behavior when targeting externalizing problem behaviors in preschool children. Limitations Our findings provide relevant information for children who show hyperactive/impulsive behavior problems at the clinical level. However, the conclusions should also be considered in the light of the following limitations.
First, due to the small number of girls in our sample, we were unable to test whether the examined relations might vary across gender, which should be addressed by future research. Second, the operationalization of aggressive behavior in preschoolers was not optimal (e.g., Tremblay 2000). Third, future studies might consider specifically targeting inhibitory control tasks that require hot cognitive skills in order to examine preschoolers' aggressive behavior. Fourth, in the current design we were unable to test for bidirectional effects of mother-child interaction patterns, inhibitory control, and children's problem behavior. Moreover, it should be noted that we did not control for previous inhibitory control skills of the preschoolers. Future research should test such a "full" longitudinal model (i.e., with all constructs, predictor, mediator, and outcome, assessed at all measurement moments), with a more appropriate sample size for such a complex model. Clinical Implications Our findings emphasize the relevance of mother-child interactions in predicting preschoolers' hyperactivity/impulsivity. In this study we demonstrated how affectively stable mother-child interactions and low levels of maternal negative affect are important in the promotion of preschoolers' inhibitory control, and indirectly in reducing hyperactive/impulsive behavior problems in children who display these problems at a clinical level. Intervention programs aimed at reducing externalizing behavior problems in young children already target the affective responses of parents (i.e., PCIT, Zisser and Eyberg 2010; and Incredible Years, Webster-Stratton 2011). Our results further support the clinical relevance of this approach for hyperactive and impulsive behavior problems. This is especially noteworthy as the effect of parent training programs in the treatment of ADHD and ADHD symptoms has not been convincing in previous research (e.g., Daley et al. 2014). The current study thus supports the need for further examination of parent training programs for the treatment of ADHD symptoms in young children diagnosed with ADHD and/or ODD/CD, under the condition that the intervention also focuses on achieving affectively stable mother-child interactions that are low in maternal negativity. Lastly, our findings also indicate that interventions should give distinct attention to the development of inhibitory control in preschoolers, as it operates as a mechanism that links the interactive behavior between mothers and their preschool children to positive child development.
2018-04-03T01:05:59.950Z
2017-01-31T00:00:00.000
{ "year": 2017, "sha1": "b9ff7787459819b9d4970156e9e219fc2fcb5cb1", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s10802-016-0258-1.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "c480fce04a54f52769850fc81f6f49bdc281ac63", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
247335511
pes2o/s2orc
v3-fos-license
ASSESSMENT OF THE IMPACTS OF OIL PRICES ON NIGERIA ECONOMY USING COBB-DOUGLAS PRODUCTION FUNCTION Fluctuations in oil prices have been a global issue over the years. Although many studies have been carried out, the majority of those relating to oil prices have focused more on the effects on oil-consuming nations than on oil-producing ones. This study, however, examines the vulnerability of the economy of an oil-producing country to oil price changes, using Nigeria, an OPEC member, as a case study. The Cobb-Douglas production function was used to formulate an appropriate model relating oil prices to the economy of Nigeria. The close-to-close (standard deviation) volatility method was used to measure the amount of variability in oil prices. The perpetual inventory method was used to estimate the accumulated physical capital of Nigeria, as capital cannot be left out when dealing with production, and the problems of multicollinearity inherent in the data were attenuated using ridge regression techniques. Introduction Crude oil has been the major source of Nigeria's income since its discovery in 1956, which led to less concentration on agricultural products, which used to contribute about 70% to the growth of Nigeria's gross domestic product (GDP) in the early 1950s (Alley et al., 2014 and NBS, 2018). Nigeria achieved tremendous growth due to high oil production, which led to higher revenue for the country when the oil price was moving towards its peak and contributed about 8.4% to the growth of GDP in 2009 (World Bank, 2018). Nevertheless, it is quite unfortunate to see Nigeria, one of the richest countries in Africa, still among the countries battling extreme poverty and economic disorder. Without a doubt, Nigeria is a country currently in dire need of high and sustainable economic growth capable of engendering rapid economic development and reducing poverty, but its mono-product (crude oil) economic practice has left its growth highly vulnerable to movements in oil prices over time (Nwoba et al., 2017). Hence, there is a need for the Nigerian government to diversify the economy. In 2015, the Federal Government of Nigeria moved to diversify the economy of the country towards the agricultural sector. After about four years of flagging off the diversification policy, it is pertinent to examine the responsiveness of the economy to international oil price changes. This study, therefore, attempts to verify the vulnerability of Nigeria's economy to oil prices through a production function, taking into consideration economic indicators such as real GDP, Gross National Income (GNI), physical capital (capital stock), and working-age population (labor). Traditionally, three approaches are used in analyzing the effect of oil price changes: non-structural models, which depend solely on the theory of exhaustible resources as the basis for understanding the oil market; the structural supply/demand framework, which uses behavioral equations and factors that link oil demand and supply to their various determinants; and the informal approach, which studies oil price movements within specific contexts and episodes of oil market history (Bassam, 2007).
Most studies in the oil price literature have used structural methods, employing the Augmented Dickey-Fuller test, cointegration, and Granger causality, and a large number of them have used gross fixed capital formation as a proxy for capital accumulation (stock); for example, Taofik (2018), Birouke et al. (2012), Aljebrin (2013), Kathleen et al. (2012), Maku et al. (2018), and Alley et al. (2014). Likewise, the informal approach, which only describes some economic indicators during the volatility period, has been used by a set of economic organizations including the International Monetary Fund (IMF, 2000) and the US Energy Information Administration (EIA, 2018). Few studies have used the non-structural approach; for example, Bassam (2007), in the article "The drivers of oil prices: the usefulness and limitations of non-structural models", describes oil prices based on historical and projected prices and future speculation. This paper therefore adopts structural methods and makes two distinct contributions. First, it gives Nigeria's government, stakeholders, and citizens in general a view of the recent performance of the economy; this will allow the government to know whether it is getting it right or whether there is a need to re-strategize. Second, the paper contributes to research methods on oil prices and the production function through the use of an estimated capital stock and by addressing the collinearity issues that arise when dealing with time series. Methodology Given the Cobb-Douglas production function

Y_t = A K_t^α L_t^β (1)

where Y_t = total production (GDP) of Nigeria in year t, K_t = physical capital of Nigeria in year t, L_t = total labor employed in year t, t = time (year), A = efficiency of the function, α = elasticity of output relative to physical capital, and β = elasticity of output relative to labor. Hence, α + β = 1 would suggest that production increases at the same rate as inputs; α + β > 1 would suggest that the increase in production supersedes a proportional increase in inputs; and α + β < 1 would mean that the increase in production falls short of the rate at which inputs increase. Assume Oil Price = P, where P is not directly related to GDP, because neoclassical production theory only describes the general production function as Y = F(K, L), meaning that output is a function of capital and labor. Due to the scarcity of data on physical capital K_t, it is estimated from the investment capital I_t of the country by the perpetual inventory method used by Berlemann (2014). From (1), if A_t = P_t, then linearizing (1) gives

ln Y_t = a_1 + a_2 ln P_t + α ln K_t + β ln L_t (2)

where P_t is the oil price in year t, and a_1 and a_2 are the constant and the coefficient of parameter P in equation (2), respectively. From the literature, at the beginning of the desired period t, the net capital stock K_t can be written as a function of the net capital stock at the beginning of the previous period, K_{t-1}, gross investment capital in the current period, I_t, and consumption of fixed capital, D_t:

K_t = K_{t-1} + I_t - D_t (3)

If the geometric depreciation of fixed capital D occurs at a constant rate denoted by δ, so that D_t = δK_{t-1}, then equation (3) becomes

K_t = (1 - δ)K_{t-1} + I_t (4)

where K_t = physical capital in year t, K_{t-1} = physical capital in the previous year t-1, δ = capital depreciation rate, and I_t is the investment (gross fixed capital formation) of year t. The initial capital stock required by (4) can be estimated as

K_0 = I_0 / (g_GDP + δ) (5)

where I_t is the investment (gross fixed capital formation) of year t and g_GDP is the growth rate of GDP.
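A minimal implementation of the perpetual inventory recursion in equations (4)-(5) might look as follows. The function name, the toy numbers, and the choice of growth rate g are ours; as discussed below, g can be the GDP growth rate or the investment growth rate depending on the steady-state assumption adopted.

```python
# Sketch of the perpetual inventory method: K_t = (1 - delta) * K_{t-1} + I_t,
# with the initial stock approximated as K_0 = I_0 / (g + delta).
def capital_stock(investment, delta, g):
    """investment: list of gross fixed capital formation by year."""
    k = [investment[0] / (g + delta)]          # initial capital stock K_0
    for i_t in investment[1:]:
        k.append((1.0 - delta) * k[-1] + i_t)  # perpetual inventory recursion
    return k

# Toy example: constant investment of 10 (trillion naira), 4% depreciation,
# 5% average growth rate.
print(capital_stock([10.0] * 5, delta=0.04, g=0.05))
```

Usage is direct: feed the annual gross fixed capital formation series and the assumed δ and g, and the function returns the implied capital stock path, which can then be compared across depreciation rates as done in Table 1.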
Neoclassical growth theory assumes that, if the economy under consideration is in its steady state, gross domestic product rises at the same rate as the capital stock, and the growth rate of the capital stock can be approximated by the growth rate of investment. Therefore, given that the trend of Nigeria's growth rate has moved somewhat like a wave around equilibrium over the decades (World Bank, 2019), the initial capital stock was estimated using the growth rate of gross fixed capital formation (the investment growth rate), under the assumption that the economy is not in its steady state, such that

K_0 = I_0 / (g_I + δ) (6)

where g_I is the growth rate of investment (gross fixed capital formation). Volatility Volatility is the word used to describe fluctuations in the price of commodities over time, and it usually takes the form of a comparison of historical changes in the price of the commodity with the current price. It should be noted that high variability does not connote a high price but rather gives the yearly amount of variability in the oil price. The close-to-close volatility method is considered the simplest and the most widely used; it is estimated by multiplying the standard deviation of returns by the square root of the number of working days in the sample. That is:

σ_annual = σ_r √n (7)

where n is the number of trading days in a year and r_t is the log return of the OPEC daily oil price P_t, which is calculated by

r_t = ln(P_t / P_{t-1})

where P_{t-1} is the oil price of the previous day. Cursed Economy An economy is said to be cursed if the growth of per capita income is negatively correlated with the share of labor in the primary sector (Thirlwall, 2011). A scatter plot is used to examine this situation and to determine the contribution of the share of labor in agriculture to the growth of Gross National Income (GNI), tested in SPSS at a 5% significance level. Effects of Oil Prices on the Nigerian Economy Time-series data often have problems such as multicollinearity, non-stationarity, and autocorrelation, among others, which are contrary to the assumptions of ordinary least squares (OLS) regression. Because of this, performing OLS regression may lead to a spurious model and render some important variables insignificant. Therefore, ridge regression is used as a substitute for OLS in the analysis. Note that an overall regression being significant while almost all of the individual tests are insignificant signals the existence of multicollinearity; likewise, high correlations between the independent variables may also be a signal of multicollinearity. For more on the conditions for multicollinearity, see Gujarati (2004). Ridge Regression Ridge regression is like the OLS method in that it retains the BLUE assumptions of OLS except unbiasedness, as it introduces a small amount of bias into the model. Theoretically, Hoerl et al. (1970) define ridge regression as a regression technique used to solve the problems of multicollinearity in the data set under study. The advantages of this technique are that it is capable of producing results with minimal standard errors by standardizing all of the variables included in the model, that it normalizes the data towards zero, which helps to take care of non-stationarity problems, and that it accounts for the effect of outliers in the data set (NCSS, 2018). In general, we can say that ridge regression is the best substitute for OLS when the variables suffer from multicollinearity (Hoerl et al., 1970).
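The close-to-close volatility measure in equation (7) reduces to a few lines of code. The sketch below is illustrative only: the price series is simulated, and the 252 trading days per year is a common convention rather than a figure from the paper.

```python
# Close-to-close (annualized) volatility from daily prices:
# r_t = ln(P_t / P_{t-1});  sigma_annual = std(r) * sqrt(n trading days).
import numpy as np

rng = np.random.default_rng(1)
prices = 60.0 * np.exp(np.cumsum(rng.normal(0, 0.02, 252)))  # simulated daily prices

log_returns = np.diff(np.log(prices))
annualized_vol = log_returns.std(ddof=1) * np.sqrt(252)  # 252 trading days/year
print(f"annualized volatility: {annualized_vol:.1%}")
```

Applied year by year to the OPEC daily price series, this yields the per-year variability figures (e.g., 41% in 2008, 12% in 2013) reported in the results.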
Mathematically, the OLS regression coefficients are usually estimated using

β̂ = (X'X)^{-1} X'Y (8)

Here, β̂ is the p×1 vector of least-squares coefficients, X'X is the p×p cross-product matrix of the independent variables, and X'Y is the p×1 vector resulting from the multiplication of the independent variables by the dependent variable. The estimation of the ridge regression coefficients is a little different: the variables are standardized before performing the regression, and a very small value, say λ, is added to the correlation matrix of the independent variables in equation (8). If X'X = R, where R denotes the correlation matrix of the standardized independent variables, then the coefficient estimate for ridge regression is given by

β̂_ridge = (R + λI)^{-1} X'Y (9)

where β̂_ridge contains the coefficients of the ridge regression, R is the correlation matrix of the independent variables, I is the p×p identity matrix, X'Y is the p×1 cross-product vector of the independent variables with the dependent variable, and λ is a positive quantity usually ranging between 0 and 1; the lower the value of λ, the smaller the bias introduced into the model. There are several ways of determining the value of λ to be used in the ridge regression in order to obtain better results. One popular way is to plot the ridge trace and choose the value of λ at which the coefficient lines settle; this is done by plotting the coefficients of the ridge regression against each assumed value of λ. In this study, since we have prior knowledge of the coefficients of the model, we select the smallest value of λ that produces accurate results based on theory. Given the technological improvements in the fields of machine learning and statistical computing, Stata is used to run the regression for this study with the (ridgereg) function, and the results are interpreted as in the OLS method. Nigerian Economic Data Between 1988 and 1999, as shown in Figure 1, the oil price floated between $14.24 per barrel and $22.26 per barrel, which seemed unfavorable to the economy of Nigeria. From 2003, however, Nigeria's economy grew steadily until 2013, when the oil price fell again until it dropped to $40.68 per barrel in 2016. Since 2017, Nigeria has again been experiencing economic growth as the oil price rose to USD 52.51 per barrel. In summary, Nigeria's economy (GDP) virtually moves together with the oil price. Hence, it is imperative to check whether Nigeria's economy is among the cursed economies of the world (see Figures 1 and 2). In terms of employment, Nigeria's employment has mostly come from the agricultural and services sectors over time, and its unemployment rate has reached 23.1% (National Bureau of Statistics, NBS, 2018). Meanwhile, industry, the sector into which oil falls, has not been creating employment opportunities for the labor force, even though much government attention has been on this sector since the late 1980s; based on a survey on job creation conducted by the National Bureau of Statistics in 2013, it was reported that the oil sector accounts for only about 0.01% of total employment in Nigeria. As depicted in Figure 3, since the late 1980s the agricultural sector has taken close to 60%, the service sector over 30%, and the industrial sector less than 10% of total employment in Nigeria.
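The closed-form ridge estimator in equation (9) is easy to verify numerically. The sketch below standardizes the regressors and applies (R + λI)^{-1}X'y directly; it is a generic illustration with simulated collinear data, not Stata's ridgereg output.

```python
# Closed-form ridge regression on standardized variables:
# beta_ridge = (R + lambda * I)^(-1) X'y, with R = X'X the correlation matrix.
import numpy as np

def ridge_coefficients(X, y, lam):
    # Standardize so that X'X/(n-1) is the correlation matrix of the regressors.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    ys = (y - y.mean()) / y.std(ddof=1)
    n, p = Xs.shape
    R = Xs.T @ Xs / (n - 1)                      # correlation matrix
    rhs = Xs.T @ ys / (n - 1)                    # X'y on the same scale
    return np.linalg.solve(R + lam * np.eye(p), rhs)

# Toy example with two nearly collinear regressors
# (e.g., capital and oil price moving together).
rng = np.random.default_rng(2)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(0, 0.1, 100)                # nearly collinear with x1
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=100)
X = np.column_stack([x1, x2])
for lam in (0.0, 0.1, 0.5):
    print(lam, ridge_coefficients(X, y, lam).round(3))
```

With λ = 0 this reproduces OLS on the standardized variables; as λ grows, the coefficients shrink and become far less sensitive to the collinearity between the regressors, which is the property exploited in this study.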
Due to the oil price boom that started in 2001 (as shown in Figure 1), many Nigerians in the agricultural sector began to lose their jobs because of government neglect of the agricultural sector, which reduced the percentage of employment in agriculture; the sector currently accounts for less than 40% of total employment in Nigeria. Nigeria's Physical Capital Nigeria's capital stock has been increasing over time. It was estimated to be 66.92 trillion naira in 1988 and 170.70 trillion in 2017, assuming that Nigeria's capital depreciation rate is 4%. For robustness of the results, the analysis was also run with different depreciation rates of 7%, 8%, 10%, 20%, 25%, and 30%, respectively. The higher the depreciation rate of the country, the lower the amount of physical capital, and vice versa. These results are presented in Table 1. Results and Discussion Based on the analysis, it was discovered that Nigeria's economy is not cursed and that the share of labor in agriculture does not significantly account for GNI growth in Nigeria. This is due to the insignificant negative correlation that exists between GNI growth and the share of labor in agriculture (evident from Figure 4). On the other hand, Nigeria's economy is highly vulnerable to oil price changes, especially in the years 2008, 2009, 2015, and 2016, when the oil price varied greatly, by 41%, 34%, 35%, and 42%, respectively; in 2013, by contrast, the variation in oil prices was only 12%. The variance inflation factors from the collinearity diagnostics are all less than 5, and the respective R² values for the different models conducted have been minimized. Thus, the models are no longer distorted, the collinearity has been eliminated, and we can say that the independent variables (factors of production) account on average for about 61% of the changes in Nigeria's economy. Also, following the assumption made by Berlemann et al. (2014) of a hundred-year span for the depreciation of physical capital, if we assume 4% as the depreciation rate of Nigeria's capital, then the resulting equation of the Nigerian production function is as shown in Table 2, meaning that the elasticity of output relative to capital (α) is 0.5879 and the elasticity of output relative to labor employed (β) is 0.5238. Therefore, since α + β = 0.5879 + 0.5238 = 1.1117 > 1, we conclude that the rate of increase in production (the Nigerian economy) will be more than the rate of increase in the inputs of both capital and labor. That is, a 10% increase in both capital and labor will increase Nigeria's real GDP by about 11.2%, which is more than proportional. Similarly, both capital and oil prices are highly significant to the economy of Nigeria at a 1% significance level, but labor is significant at 5%. This means that Nigeria is a capital-intensive country and is largely driven by the oil price. However, Nigeria has not been able to utilize its labor to the fullest, unlike most developed countries. Conclusion Based on the results obtained from the analysis, it can be deduced that the contribution of oil to the growth of Nigeria's economy has in no way been limited, and the economy is still largely driven by oil prices and capital. Therefore, it is highly recommended that the government put more effort towards economic diversification and make sure that oil revenue is reinvested into the economy through the building of refineries, sustainable infrastructure, and the development of other sectors of the economy.
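The returns-to-scale implication of the estimated elasticities can be checked directly: scaling both inputs by a common factor c multiplies Cobb-Douglas output by c raised to the power α + β, so the arithmetic below confirms the roughly 11 percent figure quoted above.

```latex
% Returns-to-scale check for the estimated Cobb-Douglas elasticities.
\[
\frac{Y'}{Y}
  = \frac{A\,(cK)^{\alpha}(cL)^{\beta}}{A\,K^{\alpha}L^{\beta}}
  = c^{\alpha+\beta}
  = 1.10^{1.1117} \approx 1.112 ,
\]
% i.e., a simultaneous 10% increase in capital and labor raises real GDP
% by about 11.2%, which is more than proportional because alpha + beta > 1.
```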
In the long run, if the developed oil-buying countries eventually adopt solar power and electric vehicles, the economy of Nigeria will be in trouble and even worse off than in its current situation. Also, given the weakness of the manufacturing industry in Nigeria, which has not been providing employment opportunities to many citizens of working age, the government of Nigeria should use the opportunity of the Belt and Road Initiative of the People's Republic of China to improve the employability of Nigerians, as China has emerged as Nigeria's most frequent trading partner. Although there has been much controversy over whether China's Belt and Road Initiative is a symptom of colonialism, it is nevertheless economically advisable for the government of Nigeria not only to look at the financial aspect of the initiative but also to find ways of attracting productive industries, so as to make maximal use of its working-age population and improve the productive capacity of the country. That would not only increase the employability of Nigerians but also increase revenue generation from other sectors of the economy. Furthermore, once diversification has taken effect, Nigeria needs to focus on genuinely productive industries that can make use of the raw materials produced by the agricultural sector. This is the way to achieve balance between the industrial and agricultural sectors, as there will be an increase in returns not only to labor but also to industry. Lastly, based on economic theory, many scholars agree that for an economy to move away from stagnation, the right institutions are needed to set political and economic policy, as well as to place restrictions on political power and to regulate the behavior of citizens. It is therefore strongly recommended to emulate fast-growing economies such as China in restructuring policies and constitutions, as much of Nigeria's capital has sunk into the pockets of political leaders over time.
2022-03-10T16:08:26.063Z
2022-02-07T00:00:00.000
{ "year": 2022, "sha1": "bdd1d45759a3ee08c8ea8cfa5946f6c965ff4231", "oa_license": "CCBYSA", "oa_url": "https://myjms.mohe.gov.my/index.php/mjoc/article/download/11839/8928", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "81235cab5505d855e07e98ca011baf9f536596e0", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [] }
119311781
pes2o/s2orc
v3-fos-license
Dynamic Rupture, Fault Opening, and Near Fault Particle Motions along an Interface between Dissimilar Materials

Dynamic rupture propagation along an interface between two different elastic solids under shear-dominated loading is studied numerically by a 2-D lattice particle model (LPM). The configuration of the lattice particle model consists of two solid blocks of different elastic properties connected along a planar interface. Each block is characterized as an isotropic elastic material, and the interface strength is described by a composite elastic modulus given by a mismatch function of the elastic properties of the two dissimilar materials. The particle interaction between the two blocks, with a pair inter-particle potential, also takes account of normal stress variations. Numerical simulations illustrate that, when the initiated rupture direction is the same as the slip direction of the compliant (softer) material, the dynamic rupture propagates with a self-sustaining slip pulse along the fault at a speed close to the slower Rayleigh wave speed, accompanied by a temporary and localized interface separation (or fault opening). The interface separation at a point on the fault is indicated by the fault-normal displacement discontinuity between the two blocks, while the local dynamic shear stress drops to zero instantaneously. The normal particle motions on the two sides of the fault have the same direction, towards the softer material, and the particle velocity in the fault-normal direction is much larger than that in the fault-parallel direction. The observed particle motions are consistent with the foam rubber experiments and are very similar to the results predicted by Weertman's dislocation theory, and to Schallamach waves when the material contrast exceeds 40%. Moreover, across the fault trace, the near-fault particle motions are strongly asymmetric between the soft and hard blocks, and the particle velocity in the softer material is larger than that in the harder material. In addition, the synthetic seismograms reveal that the large particle motions in both the fault-normal and fault-parallel directions are contributed by surface wave energy excited by the source rupture process. The radiated seismic energy comes from the particle slip/opening motions and the healing process (stopping phase). Therefore, the shear stress variation on the fault behaves as a partial stress drop during the rupture process. The major frictional energy is associated with the work done in pulling the contact points apart as the rupture wave propagates.

Introduction: In spite of more than three decades of study and speculation, the heat flow paradox, which is the lack of any indication of frictional heat generation along the San Andreas fault (Brune et al., 1969; Lachenbruch and Sass, 1980), remains unsolved.
At present the most frequently discussed mechanism for explaining the heat flow paradox is the reduction of effective normal stress by near-lithostatic pore pressures (Rice, 1992), or by pods of high pore pressure generated and locked in by permeability valves (Byerlee, 1990, 1992). A major problem with the high pore pressure explanation is the question of how the pore pressure can be maintained at such high levels for hundreds, up to tens of thousands, of years. New concepts for the process of stick-slip have the potential not only of explaining the heat flow paradox, but also of explaining other puzzling features of earthquake slip, some of which are potentially very important to earthquake hazard research. One explanation of the heat flow paradox that has recently been suggested is that the actual mechanism of stick-slip observed on small rock samples in the laboratory (which originally led to the heat flow paradox, as a consequence of the high stresses determined to be necessary for stick-slip) may not correspond to the mechanism of stick-slip in the earth, in part because of scaling problems, sample-machine interaction, and the existence of large variations in normal stress, and possibly even fault opening, during the stick-slip process (Brune et al., 1990, 1992; Anooshehpoor and Brune, 1994; Anooshehpoor and Brune, 1999). The normal vibrations mechanism was suggested by modeling of stick-slip between large blocks of foam rubber. In these models, opening of the fault during stick-slip is clearly observed, resulting in a consequent reduction of frictional heat generation (Anooshehpoor and Brune, 1994). Reduction in friction as a result of normal vibration has been investigated by many researchers over the past three decades (Tolstoi, 1967; Oden and Martins, 1985; Soom, 1991a, 1991b). Melosh (1996), based on the acoustic fluidization model (Melosh, 1979), has also indicated that normal vibration may be a mechanism to reduce normal compressive stress and cause dynamic weakening of the fault. If there is normal vibration on the fault, this may lead to anomalous P-wave radiation. Haskell (1964) suggested that tensile-like normal interface vibrations on the fault were required because the radiation of P-wave energy from large earthquakes was too high for pure shear faulting. Evidence from the ANZA array (Vernon et al., 1989) and the Guerrero strong motion array (Castro et al., 1991) gives some indication of anomalous P-wave excitation. Normal compressive stress reduction, with related normal motion along an interface between two different elastic media, was first investigated by Weertman (1963, 1980). Based on dynamic dislocation theory, a gliding edge dislocation along the interface can produce a change of tensile normal stress due to the material difference below and above the interface, while the change of shear stress is zero at a certain subsonic dislocation velocity. Usually, this subsonic dislocation velocity lies between the Rayleigh-wave and shear-wave velocities of the softer material. The induced normal tensile stress, added to the normal compressive stress applied on the fault, causes a reduction of normal compressive stress at the fault and sustains the propagation of the slip dislocation accompanied by a normal displacement motion. However, if the materials on the two sides of the interface are identical, the normal stress induced by the slip dislocation disappears immediately (Weertman, 1963; Aki and Richards, 1980).
Physically, the coupling between the slip dislocation and the change of normal stress is due to the asymmetry of material on the two sides of the fault (Andrews and Ben-Zion, 1997). Weertman (1963) pointed out that this type of moving dislocation only exists in a narrow range of material difference (a 19% difference in the wave speeds of the two media). Andrews and Ben-Zion's (1997) calculations of a self-sustaining slip pulse on a fault between elastic media with wave speeds differing by 20% confirm the prediction of Weertman (1980). Based on Weertman's dislocation approach, in this paper we use a 2-D lattice particle model to simulate the self-sustaining propagation of a slip pulse involving interface separation on a planar fault between two elastic materials with different shear wave speeds. First, by including climb dislocation (allowing opening of the fault) in Weertman's formulation, we find that self-sustaining dislocation pulses can propagate along the interface even if the difference in shear wave speeds in the two adjacent half-spaces exceeds 19%. This result is important because in laboratory experiments we observe dynamic slip pulses that propagate along a planar fault between two large blocks of foam rubber with a shear wave speed difference of about 40% (Anooshehpoor and Brune, 1999). We also find that, by incorporating a fault-normal displacement discontinuity, the Weertman dislocation model degenerates into a dislocation model (Haskell, 1964) for identical half-spaces on both sides of the fault; and for a large difference in wave speeds in the two materials the slip pulse is somewhat similar to Schallamach detachment waves (Schallamach, 1971).

Dislocation Theory: Dislocation theory from Weertman (1980) predicted that a steady-state slip pulse can propagate along a dissimilar material interface governed by Coulomb friction. In his analysis, the fault-normal motion was continuous across the material interface; that is, no interface separation was permitted. In this study, however, we demonstrate that, in the presence of interface separation, a self-sustaining slip pulse can propagate along an interface between two materials with arbitrarily different shear speeds. Here μ₁*, μ₂* and μ₃* denote the composite elastic moduli, which are mismatch functions of the shear modulus μᵢ, density ρᵢ, Poisson ratio νᵢ, and the dislocation velocity c (i = 1, 2) (Weertman, 1980). The boundary conditions imply that no extra shear stress is produced when the fault interface separates. The composite moduli μ₁* and μ₂* decrease as the dislocation velocity increases and reach zero between V_R1, the slower Rayleigh-wave velocity, and V_s1, the slower shear-wave velocity; μ₂* decreases faster for the normal (climb) dislocation than for the slip dislocation. The changes of the long-range shear and normal stresses arising from the corresponding dislocations are equal to zero at the zero points of μ₁* and μ₂*. The value of μ₃* increases with c, growing rapidly as c approaches V_s1; c is limited between the slower Rayleigh-wave and the slower shear-wave velocities. From equation (2), the condition required to sustain the interface separation for a given slip dislocation follows (Adams, 1998).

2-D Lattice Particle Model: In accordance with the objective of modeling rupture propagation along an interface between two elastic isotropic materials, we consider a two-dimensional triangular lattice particle model characterized by a pair potential.
Particles interact with each other according to a modified Lennard-Jones potential, where r₀ is the rest length of the equivalent Hooke spring, r_b is the cut-off distance, equal to 1.112 r₀, and kᵢ (i = 1, 2) are the linear spring constants related to the Lamé constants, with a Poisson's ratio of 0.25 (Hoover et al., 1974). Throughout the paper all results are expressed in reduced units: the lattice length r₀ is taken to be 1, the spring constant k₂ equals 1 for the harder material, and 0 < k₁ ≤ 1 for the softer material. With the particle mass taken as the unit of mass, the triangular lattice has the corresponding reduced density. Assuming the material densities of the two elastic isotropic materials to be the same, the interface elastic modulus (spring constant) k̄, a mismatch function of k₁ and k₂, is taken following Comninou (1977b) and Weertman (1980), where νᵢ is the Poisson ratio. Figure 2 shows how k̄ varies as a function of the ratio k₁/k₂; the fault interface strength in this range is relatively weaker than that of the materials on the two sides of the fault. As we know, natural fault systems have interfaces that separate different materials. These are generated by damaged fault zone material or sometimes by the existence of different rock bodies across the fault. Material interfaces are especially prominent in plate-bounding continental and subduction zone faults, along which the largest earthquakes occur. The typical material contrasts across faults in the real earth, expressed as ratios of shear speeds, range from 0.7 to 1.0. In our model, because the shear speeds Vᵢ are proportional to the square roots of the spring constants kᵢ, the ratios 0.5 ≤ k₁/k₂ ≤ 1 correspond approximately to shear speed ratios of 0.7 to 1.0. The fault interface properties, such as the interface strength k̄ and the material contrasts described here by the lattice particle approach, are thus appropriate for describing real fault systems. Related to the interface elastic modulus k̄, the cohesive strength of the interface under shear-dominated deformation can be calculated following Gao et al. (2001), where τ_c and σ_c are the shear and normal stresses along the interface at the cohesive limit, denoting the cohesive strength of a single bond. It is obvious that there is a strong coupling between shear and normal stresses during cohesive failure.

Modeling: The fault model is composed of two blocks with different elastic properties. The interface separation is indicated by the difference of the normal displacements on the two sides of the fault. The peak value of the normal displacement in the harder material is only 50% of the peak value in the softer material. If the material difference increases to 40% or beyond, the normal particle motion in the harder material becomes so small that the particle motion is essentially the same as a Schallamach wave. The general features of these particle motions are also observed in the foam rubber experiments (Anooshehpoor and Brune, 1999). Figure 4 also shows that the particle velocities and accelerations in the upper (soft) block are much larger than those in the lower (hard) block, and that the fault-normal components of the particle velocities and accelerations are much larger than the fault-parallel components. An additional calculation was carried out to explore the evolution of rupture propagation along the fault.
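As a concrete illustration of the lattice interaction described above, the following sketch linearizes the pair force about the rest length and combines the two blocks' spring constants into an interface stiffness. The harmonic-mean mismatch used here is an assumption for illustration, standing in for the composite modulus of Comninou (1977b) and Weertman (1980), whose exact form is given in the paper's equations.

```python
import numpy as np

R0 = 1.0           # lattice rest length (reduced units)
RB = 1.112 * R0    # cut-off distance of the pair interaction

def pair_force(r, k):
    """Linearized (Hooke-spring) pair force of the lattice particle model:
    restoring for r < RB, zero beyond the cut-off. This is a sketch; the
    modified Lennard-Jones potential reduces to an equivalent spring of
    stiffness k near the rest length r0."""
    return -k * (r - R0) if r < RB else 0.0

k2 = 1.0     # harder block (reduced units)
k1 = 0.5     # softer block, 0 < k1 <= 1
# Interface stiffness as a mismatch function of k1 and k2; the harmonic
# mean below is an illustrative assumption, not the paper's exact formula.
k_interface = 2.0 * k1 * k2 / (k1 + k2)
print(k_interface, pair_force(1.05 * R0, k_interface))
```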
The pulse-like particle motions grow sharper, with a small increase in amplitude, as the rupture propagates away from the left edge of the rupture source, as seen in Figure 5. This result is compatible with the results of Andrews and Ben-Zion (1997). A plausible explanation of such rupture evolution along the fault was given by Andrews and Ben-Zion (1997): through analysis of traveling waves along the fault, they found that the rupture behavior is influenced by traveling waves radiated by the rupture process. Close to the rupture initiation, the slower P-wave and the faster shear head wave affect the rupture motion, so that the P-wave inhibits the tensile separation of the fault. As the rupture propagates away from the initial source area, the different types of waves separate from each other, and the rupture motion is controlled by the slower S-wave, which promotes the tensile separation of the fault. In addition, our numerical study shows that pre-stored shear energy is released gradually, encouraging particle motion in both the tangential and normal directions as the rupture propagates away from the initiation point. The dynamic frictional stress usually drops to zero during the rupture, but the net shear stress required to initiate the rupture is much larger and remains on the fault plane. The net effect is to further accelerate the particle motion along the fault. During the rupture process, the particle motion configuration along the fault shows that particle displacements in the softer block are larger than those in the harder block. This instantaneous particle configuration also shows a wrinkle-like moving pattern near the rupture front. This wrinkle-like surface pattern does not extend across the entire length of the fault but is localized. The particle arrays along the fault indicate a tensile motion involved in the rupture propagation. The calculated local rupture length of the particle pulse is about 20-25 lattice lengths.

Rupture Mechanism: Corresponding to the particle motion around the fault, the particle velocities are approximately pulse-like and sinusoidal functions in the slip and normal directions, respectively. This is consistent with the result of Andrews and Ben-Zion (1997) and with the prediction of dislocation theory (Weertman, 1980). Figure 7 displays a particular instantaneous particle velocity field around the fault. The larger solid arrows indicate the relative motion direction of the fault, the lighter arrows indicate the particle velocity field of the harder material, and the darker arrows give the particle velocity field of the softer material. Each circle indicates a particle position, and the magnitude of each particle velocity is indicated by its vector length. Detailed analysis of the particle velocity field in Figure 7 shows that, at the rupture front, the particle velocity is almost perpendicular to the fault, indicating that the particles move upwards toward the softer medium; it is just at these points that the interface separation occurs. Behind the rupture front, the opening process remains steady, the particle velocity is almost parallel to the fault, and the motion direction is asymmetric between the two blocks. In addition, the absolute values of the velocities in the two blocks differ, and the motion direction in the softer medium is the same as the rupture direction. The vectorized velocity field also indicates that the maximum velocity occurs at the rupture front.
When the interface re-contacts, which corresponds to the healing phase, the interface particles move towards the fault plane and, later on, towards the hard medium. The absolute value of the velocity in the softer medium is larger. The slip motions start when interface separation occurs and stop when the interface re-contacts. From the particle velocity picture, it is clear that the tensile motion at the rupture front and the compressive motion after the rupture correspond to the interface separation and re-contact, respectively. The stress distribution gives the same picture. Figure 9 also indicates that, as the distance from the fault increases, the particle velocities in both the fault-normal and fault-parallel directions decrease rapidly. The physical mechanism was discussed by Dunham and Archuleta (2004) based on the Fourier decomposition principle, in which increasing the distance between the observer and the fault filters out the high-frequency components of the wavefield excited by the source process for sub-Rayleigh (sub-shear) ruptures. Moreover, as pointed out by Dunham and Archuleta (2004), a fundamental difficulty we face in source inversion is a resolution problem when using records from sub-shear ruptures, even without the finite bandwidth limitation imposed by the instrumental response and scattering along the ray path.

4. Rupture velocity: Figure 10 shows the x-component particle velocity profile along the fault as a function of time. The strip pattern indicates that the rupture propagates steadily along the entire fault. The slope of this time-distance curve gives the rupture velocity. P, S and R indicate the time-distance relations for the P-wave, S-wave and Rayleigh-wave velocities, respectively, of the slower medium. Measuring the slope of the particle velocity profile, we see that the rupture velocity lies between the Rayleigh-wave and S-wave velocities. The result is fully consistent with the result derived from dislocation theory (Weertman, 1980), in which the dislocation velocity is limited between the Rayleigh-wave and S-wave velocities of the slower medium when V_s1/V_s2 varies from 1.0 to a smaller value. Figure 11 shows a typical shear stress variation along the fault before, during and after the rupture. From this figure we can see that, at the rupture front, there is a strong shear stress concentration before the rupture; the peak value can reach 0.012. As the distance from the rupture front increases, the concentrated shear stress undergoes an r^(-0.5) decay, rapidly tending to the static stress level. The dynamic stress drop is described by the shear stress decreasing from its critical value to zero, temporarily and locally, due to the opening behavior. The static stress drop is the difference of the shear stress before and after the rupture. In this figure we denote the dynamic stress drop and static stress drop by Δσ_d and Δσ_p, respectively. The variation of the shear stress through the rupture process clearly shows a partial-stress-drop behavior, and the static stress drop is only about 20% to 25% of the dynamic stress drop. The particle motion along the fault undergoes locking (stick) → slip + opening → re-locking (healing) during rupture.
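The slope measurement described for Figure 10 amounts to a linear fit of rupture-front position against arrival time; a minimal sketch, with hypothetical arrival data in reduced units, is:

```python
import numpy as np

def rupture_velocity(positions, arrival_times):
    """Estimate rupture velocity as the slope of the distance-time curve
    traced by the rupture front (cf. the strip pattern in Figure 10).
    A least-squares line through (t_i, x_i) gives dx/dt."""
    slope, _ = np.polyfit(arrival_times, positions, 1)
    return slope

# Hypothetical front positions (lattice lengths) and arrival times
x = np.arange(0, 100, 10)
t = x / 0.92 + 0.01 * np.random.randn(10)   # assumed true speed 0.92
print(rupture_velocity(x, t))                # should lie between V_R1 and V_s1
```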
The partial-stress-drop feature of our model is a direct consequence of the fact that the particles re-connect after the rupture as they approach other particles. The maximum shear stress is localized in a narrow region ahead of the rupture front, and a shear stress concentration occurs just before the rupture front. At the rupture front, the shear stress pattern indicates a shear stress variation across the fault with a small reduction in the softer medium. This phenomenon indicates that the pre-rupture particle motion across the fault is very asymmetrical (England, 1965; Williams, 1959). In fact, dislocation theory predicts an asymmetrical shear distribution ahead of the rupture front along the fault. Behind the rupture front, accompanied by fault opening, the shear stress decreases strongly.

Dynamic stresses: 6. Ratio of V_s1/V_s2 and interface separation: Figure 13 shows the dependence of the interface separation on the ratio V_s1/V_s2.

Conclusions: The study reported in this paper shows that dynamic rupture along a dissimilar material interface is a rich phenomenon. The numerical results are, in general, consistent with the foam rubber experiments of Anooshehpoor and Brune (1999). Among our main results is the pulse-like particle motion derived from the inter-particle interaction.

(Figure caption: The large solid arrows indicate the relative motion of the fault, and the rupture propagates from left to right. The small arrows indicate the directions of the particle motion. At the rupture front, the particle motions tend to move upwards toward the softer medium, and the interface separation occurs at the same time. Behind the rupture front, the particle velocity is almost parallel to the fault, and the interface separation remains steady. Corresponding to the interface re-contact, the particles move toward the fault.)
2019-04-12T17:53:31.261Z
2008-07-14T00:00:00.000
{ "year": 2008, "sha1": "ed72f3df0a2918748adf5c167014a605e54d04ff", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ed72f3df0a2918748adf5c167014a605e54d04ff", "s2fieldsofstudy": [ "Geology", "Engineering", "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
40073280
pes2o/s2orc
v3-fos-license
Spin Response of the Nucleon in the Resonance Region

I discuss recent results from Jefferson Lab on the measurement of inclusive spin structure functions in the nucleon resonance region using polarized electron beams and polarized targets. Results on the first moment of the spin structure function for protons and neutrons are discussed, as well as the Bjorken integral. I will argue that the helicity structure of individual resonances plays a vital role in understanding the nucleon's spin response in the domain of strong interaction QCD, and must be considered in any analysis of the nucleon spin structure at low and intermediate photon virtuality.

Introduction

For more than 20 years, measurements of polarized structure functions in lepton-nucleon scattering have been a focus of nucleon structure physics at high energy laboratories. One of the surprising findings of the EMC experiment at CERN was that only a small fraction of the nucleon spin is accounted for by the spin of the quarks [1]. The initial results were confirmed by several follow-up experiments [2]. This result is in conflict with simple quark model expectations, and demonstrated that we are far from having a realistic picture of the nucleon's internal structure. These experiments also studied the fundamental Bjorken sum rule [3] which, at asymptotic momentum transfer, relates the proton-neutron difference of the first moment Γ₁ = ∫₀¹ g₁(x) dx to the weak axial coupling constant: Γ₁ᵖ − Γ₁ⁿ = g_A/6. This sum rule has been evolved to the finite Q² values reached in the experiments using pQCD, and has been verified at the 5% level. While these measurements were carried out in the deep inelastic regime, the nucleon's spin response has hardly been measured in the low-Q² regime and in the domain of nucleon resonances, which is the true domain of strong QCD. Our understanding of nucleon structure is incomplete, at best, if the nucleon is not also probed and fundamentally described at medium and large distance scales. This is the domain where current experiments at JLab have their greatest impact. While the Bjorken sum rule provides a fundamental constraint at large Q², the Gerasimov-Drell-Hearn (GDH) sum rule [4,5] constrains the evolution at very low Q². The GDH sum rule relates the difference of the helicity-dependent total photoabsorption cross sections to the anomalous magnetic moment κ of the target,

∫_{ν₀}^{∞} [σ_{3/2}(ν) − σ_{1/2}(ν)] dν/ν = 2π²α κ²/M²,  (1)

where ν₀ is the photon energy at pion threshold, and M is the nucleon mass. The GDH sum rule also defines the slope of Γ₁(Q²) at Q² = 0, where the elastic contribution at x = 1 has been excluded:

dΓ₁/dQ² |_{Q²=0} = −κ²/(8M²).  (2)

The sum rule has been studied for photon energies up to 2.5 GeV [6], and in this limited energy range deviates from the theoretical asymptotic value by less than 10%. A rigorous extension of the sum rule to finite Q² has been introduced by Ji and Osborne [7]. Measurement of the Q²-dependence of (1) allows tests of the low-energy QCD predictions of the GDH sum rule evolution in ChPT, and sheds light on the question at what distance scale pQCD corrections and the QCD twist expansion break down, and where the physics of confinement dominates. It will also allow us to evaluate where resonances give important contributions to the first moment [11,12], as well as to the higher x-moments of the spin structure function g₁(x,Q²). The moments need to be determined experimentally and calculated in QCD.
The well-known "duality" between the deep inelastic regime and the resonance regime, observed for the unpolarized structure function F₁(x,Q²), needs to be explored for the spin structure function g₁(x,Q²). This will shed new light on this phenomenon. The first round of experiments has been completed on polarized hydrogen (NH₃), deuterium (ND₃), and ³He. On the theoretical side we now see the first full (unquenched) QCD calculations for the electromagnetic transition from the ground state nucleon to the ∆(1232) [9]. Results for other states, and coverage of a larger Q² range, may soon be available. This may provide the basis for a future QCD description of the helicity structure of prominent resonance transitions.

Expectations for Γ₁(Q²)

The inclusive doubly polarized cross section can be written in terms of the spin-dependent asymmetries A₁ and A₂, the angle ψ between the nucleon polarization vector and the q vector, the polarization parameter ǫ of the virtual photon, and the total absorption cross sections σ_T and σ_L for transverse and longitudinal virtual photons. For electrons and nucleons polarized along the beam line, the experimental double polarization asymmetry is given by

A_∥ = D (A₁ + ηA₂),  (4)

where D and η are kinematic factors, ǫ describes the polarization of the virtual photon, and R = σ_L/σ_T. The asymmetries A₁ and A₂ are related to the spin structure function g₁ by

g₁ = F₁ (A₁ + A₂/√τ) / (1 + 1/τ),  (5)

where F₁ is the unpolarized structure function, and τ = ν²/Q². The GDH and Bjorken sum rules provide constraints at the kinematic endpoints Q² = 0 and Q² → ∞. The evolution of the Bjorken sum rule to finite values of Q², using pQCD and the twist expansion, allows one to connect experimental values measured at finite Q² to the endpoints. Heavy Baryon Chiral Perturbation Theory (HBChPT) has been proposed as a tool to evolve the GDH sum rule to Q² > 0, possibly to Q² = 0.1 GeV², and to use the twist expansion down to Q² = 0.5 GeV² [8]. If this is a realistic concept, and if lattice QCD can be used to describe prominent resonance contributions to Γ₁(Q²) in the range Q² = 0.1-0.5 GeV², this could provide the basis for a description of a basic quantity of nucleon structure physics from small to large distances within fundamental theory, a worthwhile goal! Using the constraints given by the two endpoint sum rules, we may already get a qualitative picture of Γ₁ᵖ(Q²) and Γ₁ⁿ(Q²). There is no verified sum rule for the proton and neutron separately. However, experiments have determined the asymptotic limit with sufficient confidence for the proton and the neutron. At large Q², Γ₁ is expected to approach this limit following the pQCD evolution from finite values of Q². At small Q², Γ₁ must approach zero with a slope given by (2). Heavy Baryon ChPT at lowest non-trivial order predicts an expansion of Γ₁ in powers of Q² [18]. Unfortunately, the large coefficients of the Q⁴ terms make the convergence of this expansion unlikely for Q² > 0.1 GeV². For the proton-neutron difference, however, the situation is quite different [19]: the Q⁴ coefficient is a factor 4-5 smaller than for the proton or neutron alone, and one might expect convergence up to considerably higher Q². This may be due to the absence of the ∆(1232) in Γ₁^(p−n), and may hint at difficulties in describing the ∆(1232) contributions in HBChPT.
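Equation (5) can be made concrete with a small sketch; the numerical inputs below are placeholders, and the kinematic relation is the standard one quoted above, with γ² = Q²/ν² = 1/τ.

```python
import numpy as np

def g1_from_asymmetries(A1, A2, F1, Q2, nu):
    """g1 = F1 * (A1 + gamma*A2) / (1 + gamma^2), with gamma^2 = Q2/nu^2 = 1/tau.
    A sketch of Eq. (5): A1, A2 are the virtual-photon asymmetries and F1
    the unpolarized structure function at the same (x, Q2)."""
    gamma = np.sqrt(Q2) / nu
    return F1 * (A1 + gamma * A2) / (1.0 + gamma**2)

# Placeholder values, roughly in the kinematic range discussed in the text
print(g1_from_asymmetries(A1=0.6, A2=0.1, F1=0.5, Q2=0.8, nu=1.5))
```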
Results for Protons and Neutrons

Inclusive double polarization experiments have been carried out at energies of 2.6 and 4.3 GeV using NH₃ [21] as the polarized hydrogen target with CLAS. After subtracting the nuclear background measured in separate data runs, and using a parameterization of previous unpolarized measurements for R, equation (4) is used to determine the asymmetry A₁ + ηA₂. More details of the analysis can be found in ref. [13]. Two of several Q² bins are shown in Figure 1. In the lowest Q² bin, the asymmetry is dominated by the excitation of the ∆(1232), giving a significant negative contribution to A₁. At higher Q² the asymmetry in the ∆(1232) region remains negative, but at higher W the asymmetry quickly becomes positive and large, reaching peak values of about 0.6 at Q² = 0.8 GeV² and W = 1.5 GeV. Evaluations of resonance contributions show that this is largely driven by the S₁₁(1535) A₁/₂ amplitude, and by the rapidly changing helicity structure of the D₁₃(1520) state. The latter resonance is known to have a dominant A₃/₂ amplitude at the photon point, but changes rapidly to A₁/₂ dominance for Q² > 0.5 GeV². The helicity asymmetry A₁(D₁₃(1520)) is shown in Figure 2. Using a parameterization of the world data on F₁(x,Q²) and A₂(x,Q²), we can extract g₁(x,Q²) from (5). Results are shown in Figure 3. The main feature at low Q² is the negative contribution of the ∆(1232) resonance. With increasing Q², however, the absolute strength of the ∆(1232) contribution decreases, while contributions of higher-mass resonances increase and become more positive. Note that higher-mass contributions at fixed Q² appear at lower x in this graph. The graphs also show a model parameterization of g₁(x,Q²) which is used to extrapolate to x → 0. The model is based on a parameterization of the resonance transition form factors and also describes the behavior of the spin structure functions in the deep inelastic regime.

Is Quark-Hadron Duality Valid for g₁ of the Proton?

More than three decades ago, Bloom and Gilman [14] found that parameterizations of inclusive unpolarized structure functions, measured in the deeply inelastic regime, approximately describe the resonance region provided one averages over the resonance bumps. This phenomenon is known as local duality. By comparing g₁ at various Q² we can infer whether such a behavior is also observed for the polarized structure function. For the relatively low Q² measured in this experiment, the Nachtmann variable ξ = 2x/(1 + √(1 + 4x²M²/Q²)), which accounts for target mass effects, is a more appropriate scaling variable than the Bjorken variable x. Figure 4 shows g₁(ξ,Q²) for the proton in comparison with the scaling curve describing the deeply inelastic behavior. The negative contribution of the ∆(1232) obviously prevents a naive "local duality" from working for Q² < 1.1 GeV². Recently, Close and Isgur argued in a simple harmonic oscillator model [15] that local duality is expected to work only if one integrates over states belonging to certain multiplets within the SU(6) symmetry group. In this case, for local duality to work for the ∆(1232), one would also need to include contributions from the proton ground state, which belongs to the same multiplet [56,0⁺] as the ∆(1232). The positive contribution of elastic scattering to g₁ could therefore offset the negative ∆ contribution.
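The target-mass variable used in Figure 4 is straightforward to evaluate; a minimal sketch (the nucleon mass value is assumed) is:

```python
import numpy as np

M = 0.938  # nucleon mass in GeV

def nachtmann_xi(x, Q2):
    """Nachtmann scaling variable xi = 2x / (1 + sqrt(1 + 4 x^2 M^2 / Q^2)),
    which accounts for target-mass effects at low Q^2."""
    return 2.0 * x / (1.0 + np.sqrt(1.0 + 4.0 * x**2 * M**2 / Q2))

# At large Q^2, xi -> x; at Q^2 ~ 1 GeV^2 the shift is sizeable at high x
for Q2 in (1.0, 10.0, 100.0):
    print(Q2, nachtmann_xi(0.6, Q2))
```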
Detailed duality tests for the higher-mass resonances will require a separation of overlapping states belonging to the same multiplet, and measurement of their transition amplitudes. Such a program is currently underway at JLab [16].

The First Moment of the Structure Function g₁

In order to obtain the first moment, the integral ∫₀¹ g₁(x,Q²) dx is computed using the measured data points and the parameterization to extrapolate to x = 0. The elastic contribution has not been excluded from the integral. The results for Γ₁ᵖ(Q²) of the proton are shown in Figure 5. The characteristic feature is the strong Q² dependence for Q² < 1 GeV², with a zero crossing near Q² = 0.25 GeV². The zero crossing is largely due to an interplay between the excitation strength of the ∆(1232) and the S₁₁(1535), and the rapid change in the helicity structure of the D₁₃(1520) from helicity-3/2 dominance at the real photon point to helicity-1/2 dominance at Q² > 0.5 GeV². The latter behavior is well understood in dynamical quark models [20]. A similar helicity flip is also observed for the F₁₅(1680). Measurements on ND₃ have been carried out in CLAS [22], and on ³He in Hall A [25], to measure the corresponding integrals for the neutron. Here I only discuss the Hall A results. Data were taken with the JLab Hall A spectrometers using a polarized ³He target. Since the data were taken at fixed scattering angle, Q² and ν are correlated. Cross sections at fixed Q² are determined by an interpolation between measurements at different beam energies. Both longitudinal and transverse settings of the target polarization were used. After correcting for nuclear effects and accounting for the deep inelastic part of the integral, the first moment of g₁(x,Q²) for neutrons can be extracted; it is shown in Figure 6. The data deviate from the trend seen for the pQCD-evolved asymptotic behavior for Q² < 1 GeV². This is largely due to the contribution of the ∆(1232). The data are well described by a model [12] that includes resonance excitations and describes the connection to the deep inelastic regime assuming vector meson dominance. Another parameterization of the Q² dependence is from Soffer and Teryaev [26].

The Bjorken Integral

Using the results on Γ₁(Q²) for protons and neutrons one can determine the Q² dependence of the Bjorken integral Γ₁^(p−n)(Q²). In this integral, all contributions of isospin-3/2 resonances, such as the ∆(1232), drop out, and contributions of other resonances may be reduced as well. Also, since the GDH sum rule value for the proton-neutron difference is positive, no zero crossing is necessary to connect to the asymptotic behavior. The preliminary data are shown in Figure 7. Since the CLAS data and the Hall A data were measured at somewhat different Q² values, the data in each set were connected with a smooth interpolating curve and then subtracted. The resulting curve is the centroid of the shaded error band; the band below Q² = 1 GeV² parameterizes the data and error for both data sets. The band at higher Q² corresponds to the O(α_s³) evolution of the Bjorken sum rule. At low Q², the HBChPT curve seems to describe the trend of the data up to Q² ≈ 0.2 GeV². A recent ChPT calculation [27] at O(p⁴) predicts values significantly above the HBChPT curve. The model with explicit resonance contributions gives a good description of the global behavior for both proton and neutron targets, and for their difference.
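Numerically, the first moment described above is just an integral over the measured and extrapolated g₁(x); a minimal sketch with a purely illustrative toy shape (not the paper's parameterization) is:

```python
import numpy as np

def first_moment(g1, x_min=1e-4, n=2000):
    """Evaluate Gamma_1(Q^2) = int_0^1 g1(x) dx for a given model
    parameterization g1(x); the region below x_min stands in for the
    extrapolation x -> 0 discussed in the text."""
    x = np.linspace(x_min, 1.0, n)
    return np.trapz(g1(x), x)

# Toy shape only: a smooth DIS-like part plus a negative bump at high x
# mimicking the Delta(1232) contribution at low Q^2.
g1_toy = lambda x: (0.4 * x**0.7 * (1 - x)**3
                    - 0.15 * np.exp(-((x - 0.85) / 0.05)**2))
print(first_moment(g1_toy))
```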
Conclusions

First high-precision measurements of double polarization responses have been carried out at Jefferson Lab in a range of Q² not covered in previous high-energy experiments. Spin structure functions and spin integrals Γ₁(Q²) have been extracted for protons and neutrons. The proton and neutron data both show large contributions from resonance excitations. Γ₁ᵖ(Q²) shows a dramatic change with Q², including a sign change near Q² = 0.25 GeV², while Γ₁ⁿ(Q²) remains negative but is strongly affected by the ∆(1232) contribution. Qualitatively, the strong deviations from the trend of the deep inelastic behavior for Q² < 1 GeV² mark the transition from the domain of single and multiple parton physics to the domain of resonance excitations and hadronic degrees of freedom. New data have been taken on both hydrogen and deuterium with nearly 10 times more statistics and higher target polarizations, covering a larger range of energies from 1.6 GeV to 5.75 GeV. The 2001 data cover a Q² range from 0.05 to 3 GeV², and a larger part of the deep inelastic regime than the data presented here. This will allow a reduction of the systematic uncertainties related to the extrapolation to x = 0. Moreover, since data are available at fixed Q² taken at different beam energies, a separation of A₁(Q²,W) and A₂(Q²,W) will be possible. The new data will also give much better sensitivity to resonance production in exclusive channels, such as ep → enπ⁺, that have been measured previously [23]. Finally, at the higher energies, CLAS will be able to study single and double spin asymmetries in various exclusive and semi-inclusive reactions currently of great interest for accessing the transverse quark distribution functions [24]. There is a program underway in JLab Hall A to measure the GDH integral for neutrons down to extremely small Q² values [28], near the real photon point, and to measure the asymmetry A₁(x,Q²) for the neutron at high x [29]. High-precision data for A₁ and A₂ at Q² = 1 GeV² are also expected from experiment E-01-006 in Hall C [30]. The Southeastern Universities Research Association (SURA) operates JLab for the U.S. Department of Energy under Contract No. DE-AC05-84ER40150.
2017-09-17T20:38:05.672Z
2002-08-01T00:00:00.000
{ "year": 2002, "sha1": "eb901326e6a2ff962913995838b8e230b2771d84", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0211185", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "20437d4c4b52dc7323fda11004edb175dc88bef4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
118596988
pes2o/s2orc
v3-fos-license
Complete Analysis of the Short Distance Contribution of $B_s\to \ell^+\ell^-\gamma$ in the Standard Model

Using the $B_s$ meson wave function extracted from non-leptonic $B_s$ decays, we evaluate the short distance contribution of the rare decays $B_s\to \ell^+\ell^-\gamma(\ell=e,\mu)$ in the standard model, including all the possible diagrams. We focus on the contribution from four-quark operators, which was not taken into account properly in previous studies. We find that the contribution is large: the branching ratio of $B_s\to \ell^+\ell^-\gamma$ is enhanced by nearly a factor 3, up to $1.7\times 10^{-8}$. The predictions for such processes can be tested at LHCb and the B factories in the near future.

I. INTRODUCTION

The standard model (SM) of electroweak interactions has been remarkably successful in describing physics below the Fermi scale and is in good agreement with most experimental data. Rare decays are among the most promising processes for probing the quark-flavor sector of the SM. These decays, induced by flavor-changing neutral currents (FCNC), which occur in the SM only at loop level, play an important role in the phenomenology of particle physics and in searching for physics beyond the SM [1,2]. The observations of the penguin-induced decays B → X_s γ and B → X_s ℓ⁺ℓ⁻ (ℓ = e, µ) are in good agreement with the SM predictions, and the first evidence for the decay B_s → µ⁺µ⁻ was confirmed at the end of 2012 [3], putting strong constraints on various extensions of the SM. Nevertheless, these processes are also important in determining parameters of the SM and hadronic parameters in QCD, such as the CKM matrix elements and the meson decay constant f_{B_s}, and in providing information on heavy meson wave functions. Thanks to the Large Hadron Collider (LHC) at CERN we have entered a new era of particle physics. On the experimental side, in the current early phase of the LHC era, exclusive modes such as B_s → ℓ⁺ℓ⁻γ (ℓ = e, µ) are among the most promising decays due to their relative cleanliness and sensitivity to models beyond the SM [1,2]. On the theoretical side, no helicity suppression exists, so branching ratios as large as that of B_s → µ⁺µ⁻ are expected. There are mainly two kinds of contributions to B_s → ℓ⁺ℓ⁻γ in the SM: the short distance contribution, which can be evaluated reliably by perturbation theory [4], and the long distance QCD effects describing the neutral vector-meson resonances φ and the J/Ψ family [5-7]. As for the short distance contribution, previous works assumed that it suffices to attach a real photon to any charged internal and external line in the Feynman diagrams of b → sℓ⁺ℓ⁻, with the statement that contributions from attaching the photon to any charged internal propagator are strongly suppressed and can be neglected safely [1,2,8,9]; i.e., one can obtain the amplitude of B_s → ℓ⁺ℓ⁻γ directly by using the effective weak Hamiltonian of b → sℓ⁺ℓ⁻ and the matrix elements ⟨γ|s̄ O_i b|B_s⟩ with O_i = γ_µ P_L, σ_µν q^ν P_R. Contributions from attaching the real photon of the magnetic-penguin vertex to charged external lines were therefore omitted [1,2] or stated to be negligibly small [9]. Another contribution, from loop insertions of the lower-order four-quark operators, has also always been neglected.
We note that a complete treatment seems to have been attempted in [5]; however, that work concentrated mainly on the long distance effects of the meson resonances, whereas the short distance contribution was analyzed incompletely. A complete examination including all contributions to these processes in the SM is needed. As is well known, only the short-distance contribution can be reliably predicted, and it is more important than the long-distance contribution from the resonances, which is in fact partly excluded by cuts in experimental measurements. Recently we showed that the contributions from attaching the real photon of the magnetic-penguin vertex to charged external lines can enhance the branching ratios of B_s → ℓ⁺ℓ⁻γ by a factor of about 2 [10]. In this letter, we extend our previous studies and use the B_s meson wave function extracted from non-leptonic B_s decays [11] to re-evaluate the short distance contribution from all categories of diagrams of B_s → ℓ⁺ℓ⁻γ decays. Special attention is paid to the contribution from the four-quark operators, and a comparison with previous work is discussed. The paper is organized as follows: in Sec. II, we analyze the full short distance contribution and present the detailed calculation of the exclusive decays B_s → ℓ⁺ℓ⁻γ. The numerical results and the comparative study are given in Sec. III, and the conclusions in Sec. IV.

II. COMPLETE ANALYSIS OF SHORT DISTANCE CONTRIBUTIONS

In order to simplify the decay amplitude for B_s → ℓ⁺ℓ⁻γ, we have to utilize the B_s meson wave function, which is not known from first principles and is model dependent. Fortunately, many studies of non-leptonic B [12,13] and B_s decays [11] have constrained the wave function tightly. The wave function takes a form in which the distribution amplitude φ_{B_s}(x) can be expressed as in [14], with x being the momentum fraction carried by the s quark in the B_s meson. The normalization constant N_{B_s} can be determined by comparing (3) with the normalization condition, with N_c being the number of colors; the B_s meson decay constant f_{B_s} is thus fixed by the same condition. Let us start with the quark level processes B_s → ℓ⁺ℓ⁻γ, which are subject to the QCD-corrected effective weak Hamiltonian. The general effective Hamiltonian describing the b → s transition contains the four-quark operators O_j (j = 1, . . . , 6); their forms and the corresponding Wilson coefficients C_i can be found in Ref. [15]. Generally, to describe all the short distance contributions to the process B_s → ℓ⁺ℓ⁻γ, new effective operators for b → sγγ, which are not included in (6), would in principle be needed. When the di-lepton line is connected to one photon, the b → sγγ operator may contribute to B_s → ℓ⁺ℓ⁻γ. Contributions from such diagrams, with a photon attached to internal charged lines, are usually regarded as strongly suppressed by a factor m_b²/m_W² and thus safely neglected [1,2,8,9]. However, as pointed out in [16], this conclusion is correct, but the explanation is not as usually described. Therefore we can use the effective operators listed in Eq. (6) for on-shell quarks to calculate the total short distance contributions to B_s → ℓ⁺ℓ⁻γ. The Feynman diagrams contributing to B_s → ℓ⁺ℓ⁻γ at parton level can then be classified into three kinds, as follows: 1. Attaching a real photon to any charged external line in the Feynman diagrams of b → sℓ⁺ℓ⁻; 2.
Attaching a virtual photon to any charged external line in the Feynman diagrams of b → sγ, with the virtual photon converting into a lepton pair; 3. Attaching two photons to the charged lines in the Feynman diagrams of the four-quark operators, with one of the two photons converting into a lepton pair. Note that the third contribution has not been considered in previous studies except for Ref. [5]; it is the focus of this paper, and the details are shown in the following. We also discuss these contributions separately.

A. External real photon contributions

Contributions of the first kind are always regarded as the dominant ones; they have been calculated using light cone sum rules [1,2], the simple constituent quark model [8], and the B meson distribution amplitude extracted from non-leptonic B decays [9]. We rewrite the amplitude of B_s → ℓ⁺ℓ⁻γ at the meson level as in [10], Eq. (7); the corresponding form factors follow from the B_s wave function, and the expression in Eq. (7) can be compared with Ref. [9].

B. External virtual photon contributions

The Feynman diagrams of the second kind of contributions are shown in FIG. 4 (Feynman diagrams of b → sγ with the virtual photon converting into lepton pairs). Contributions from this kind of diagram are usually neglected [1,2] or stated to be negligibly small [9]. Note that the B_s meson wave functions used in this work and in Ref. [9] are both extracted from non-leptonic B_s decays. However, as mentioned in the introduction, the authors of Ref. [9] did not present the expression for the contribution from FIG. 4 and only stated that it is numerically negligible. Such a statement seems questionable, because the pole of the propagator of the charged line attached to the photon may enhance the decay rate greatly, so that some diagrams cannot be neglected in the calculation. In these two diagrams, the photon of the magnetic-penguin operator is real; thus its contribution to B_s → ℓ⁺ℓ⁻γ differs from the first kind of contributions. We obtain the amplitude [10] with the coefficients C₊ obtained by a replacement, where z = q²/(2p_{B_s}·q); the hierarchy between the first and second terms in (11) follows from m_s ≪ ω_{B_s} (see the next section), which is easily understood in the simple constituent quark model [8], i.e., φ_{B_s}(x) = δ(x − m_s/m_{B_s}). However, the contributions from FIG. 4 (a) and (b) are comparable, and the pole in C₊ corresponds to the pole of the quark propagator when it is connected to the off-shell photon propagator. Thus the C₊ term may enhance the decay rate of B_s → ℓ⁺ℓ⁻γ; in its analytic expression, p_{b,s} and k_{1,2} denote the momenta of the quarks and the photons, respectively, and ǫ is the polarization vector of the photon. We split the tensor T_µν into terms of odd and even powers of momentum for simplification. Keeping our physics goal in mind, and without loss of generality, we assume that the photon with momentum k₁ is virtual and drop the terms proportional to k₂^ν in the expressions. After a straightforward calculation, we obtain T^even_µν(q), where q denotes the quark in the internal line to which the two photons are attached, and e_q is the electric charge number of that quark. The loop functions appearing in (15) follow accordingly; a similar result for on-shell photons, as in Ref. [17], is easily obtained by setting u_q = 0. With the amplitude of b → sγ*γ and the B_s wave function ready, we write the total contribution from FIG. 5 to the exclusive decay B_s(p_{B_s}) → γ(k)ℓ⁺ℓ⁻, with the corresponding form factors, where q² in Eq. (17) is the invariant mass squared of the lepton pair.
The functions can be obtained directly from (16) by redefining the parameters z_q = m_q²/m_{B_s}² and t = q²/m_{B_s}²; the explicit formulae needed in the calculation are given for v > 4. From Eq. (17) it is clear that the contribution from the four-quark operators to B_s → γℓ⁺ℓ⁻ has an expression similar to that of the magnetic-penguin operator with a real photon in B_s → ℓ⁺ℓ⁻γ. Thus the total matrix element for the decay B_s → ℓ⁺ℓ⁻γ, including contributions from the three kinds of diagrams, can be obtained easily by a shift of the form factors. Finally, we obtain the differential decay width as a function of the photon energy E_γ.

III. RESULTS AND DISCUSSION

The decay branching ratios can be obtained by integrating over the photon energy. In the numerical calculations we use the parameters of [19]. The results are listed in Table I, together with the results for B_{d,s} → γℓ⁺ℓ⁻ from this work and from our previous research, for comparison. The errors shown in Table I come from the heavy meson wave function, obtained by varying the parameters ω_{B_d} = 0.4 ± 0.1 and ω_{B_s} = 0.5 ± 0.1 [9]. Note that the predicted branching ratios receive errors from many parameters, such as the meson decay constant and the meson and quark masses. From the numerical results we conclude that, unlike the decay B → X_s γγ, where the four-quark operators contribute only a few percent to the branching ratio [17], the contribution of the four-quark operators to B_s → γℓ⁺ℓ⁻ is large. This can be understood as follows: 1. As pointed out in Ref. [9], radiative leptonic decays are very sensitive probes for extracting the heavy meson wave functions. 3. From Eq. (18), one can easily infer that the four-quark contribution to the form factors in (23) carries the coefficient (N_c C₁ + C₂) T₂² f_{B_s}/e_d, while the contribution from the magnetic-penguin operator with a real photon carries m_b m_{B_s}/(p_{B_s}·q) C₇^eff C₊. Note that (N_c C₁ + C₂)/e_d and C₇^eff can be comparable, with the same sign in C₁ and opposite sign in C₂. However, with T₁^j = 0 for j = 1, 2, a contribution comparable to those studied in this work and in Ref. [10] is expected, leading to an enhancement of the branching ratios of B_s → γℓ⁺ℓ⁻ when the new diagrams are taken into account. 4. The predicted short-distance contributions from quark weak annihilation, as well as from the magnetic-penguin operator with a real photon, to the exclusive decay are large, and the branching ratios of B_s → ℓ⁺ℓ⁻γ are enhanced by nearly a factor 3 compared with the contribution from the magnetic-penguin operator with a virtual photon alone, reaching 1.7 × 10⁻⁸; this implies that the search for B_s → ℓ⁺ℓ⁻γ can be achieved in the near future. 5. Due to the large contributions from the magnetic-penguin operator with a real photon and from quark weak annihilation, the form factors of the matrix elements ⟨γ|s̄γ_µ(1 − γ₅)b|B_s⟩ and ⟨γ|s̄σ_µν(1 ± γ₅)q^ν b|B_s⟩ as functions of the dilepton mass squared q² are complicated and not as simple as 1/(q² − q₀²)², where q₀² is a constant [18]. The B_s → γ transition form factors predicted in this work also differ from those in Refs. [5-7]. For instance, Ref. [6] predicted that the form factors F_TV(q²,0) and F_TA(q²,0), induced by the tensor and pseudotensor currents with direct emission of the virtual photon from the quarks, are equal only at maximum photon energy, whereas the corresponding expressions in this work are both proportional to C₊ ∝ 1/(q² − q₀²), as in Eq. (7). Furthermore, the form factors are larger than previous predictions.
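The final step, integrating the differential width over the photon energy to obtain a branching ratio, is simple numerically; the sketch below uses a purely illustrative toy spectrum and a placeholder total width, not the paper's expressions.

```python
import numpy as np

def branching_ratio(dGamma_dEgamma, E_min, E_max, Gamma_total, n=1000):
    """Integrate a differential decay width dGamma/dE_gamma over the photon
    energy to obtain the partial width, then divide by the total width.
    dGamma_dEgamma is any callable model of the spectrum."""
    E = np.linspace(E_min, E_max, n)
    return np.trapz(dGamma_dEgamma(E), E) / Gamma_total

# Toy spectrum (arbitrary units); the soft-photon region is cut at E_min,
# as done in experimental analyses that exclude resonance/soft regions.
toy = lambda E: E * np.exp(-3.0 * E)
print(branching_ratio(toy, 0.1, 2.5, Gamma_total=1.0e8))
```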
To clarify things further, we think it is necessary to add a few comments on the calculation of Ref. [5], as mentioned in the introduction. In order to estimate the contribution of direct emission of the real photon from the quarks, the authors of Ref. [5] calculated the form factors F_TA,TV(0,q²) by including the short-distance contribution in the q² → 0 limit and an additional long-distance contribution from the resonances of vector mesons such as ρ⁰ and ω for B_d decays and φ for B_s decays. Obviously, this means the short-distance contributions were not appropriately taken into account. Moreover, if F_TA,TV(0,q²) = F_TA,TV(0,0) stands for the short distance contribution, it appears to involve double counting, since in this case photons emitted from the magnetic-penguin vertex and directly from the quark lines cannot be distinguished. We also note that, for the contribution from weak annihilation, the authors of Ref. [5] only took into account u and c quarks in the loop via the axial anomaly, as a long distance contribution, and concluded that the anomalous contribution is suppressed by a power of the heavy quark mass. We believe that accounting for the weak-annihilation contribution through the anomaly alone is insufficient. Our numerical results show that the contributions from the weak-annihilation diagrams are large and cannot be neglected.

IV. CONCLUSION

In summary, we have performed a short distance calculation of the rare decays B_s → γℓ⁺ℓ⁻ in the SM, including contributions from all kinds of diagrams. We focused on the contribution from the four-quark operators, which was not taken into account properly in previous studies. We found that these contributions are large, enhancing the branching ratio of B_s → ℓ⁺ℓ⁻γ by nearly a factor 3. In the current early phase of the LHC era, the exclusive modes with muon final states are among the most promising decays. Although there are theoretical challenges in the calculation of the hadronic form factors and non-factorizable corrections, with a predicted branching ratio of order 10⁻⁸, B_s → µ⁺µ⁻γ can be expected to be the next goal after B_s → µ⁺µ⁻, since the final states can be identified easily and the branching ratios are large. Experimentally, the B_s → µ⁺µ⁻γ mode is one of the main backgrounds to B_s → µ⁺µ⁻, and it is therefore already taken into account in B_s → µ⁺µ⁻ searches [3]. Our predictions for such processes can be tested at LHCb and the B factories in the near future.
2013-03-04T09:59:02.000Z
2013-03-04T00:00:00.000
{ "year": 2013, "sha1": "1b8ca184974d1b8084c851ff9abaaa5ee21fef0b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1303.0660", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1b8ca184974d1b8084c851ff9abaaa5ee21fef0b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
207409137
pes2o/s2orc
v3-fos-license
Antibacterial activity of two phloroglucinols, flavaspidic acids AB and PB, from Dryopteris crassirhizoma

The antimicrobial effect of solvent extracts from the rhizome of a thick-stemmed wood fern (Dryopteris crassirhizoma) was evaluated, along with that of its phloroglucinol components, flavaspidic acids PB and AB. Flavaspidic acids PB and AB were isolated from the D. crassirhizoma rhizomes by methanol extraction, followed by silica gel and Sephadex LH-20 column chromatography. The chemical structures were characterized by spectral techniques, including ESI-MS, UV, and 1H- and 13C-NMR spectral analysis. When the antimicrobial activity of the extracts and compounds was tested by the paper disc method, the extracts and compounds were highly active against Gram-positive bacteria, such as methicillin-resistant Staphylococcus aureus KCTC 1928 (an MRSA bacterium), Streptococcus mutans and Bacillus subtilis. The extracts and compounds were not active against fungi and chlorella. Our study revealed that the antibacterial activity of samples from D. crassirhizoma was mainly related to the flavaspidic acids.

INTRODUCTION

The thick-stemmed wood fern (Dryopteris crassirhizoma Nakai, Dryopteridaceae) is a semi-evergreen plant that grows on the deciduous forest floor as a pteridophyte. Two ferns, Dryopteris crassirhizoma and Osmunda japonica, are commonly used as anti-infection agents, especially for the common cold and flu, and are frequently collectively referred to as "the fern" (Dharmananda, 2003). Recently, the fern has also been used as the major plant together with six Chinese herbs (Astragalus, Atractylodes, Red Atractylodes, Pogostemon, Adenophora, Lonicera), a combination that was recommended as a prescription formula to prevent SARS. Its rhizomes have also been used as a vermicide (Namba, 1993). In the search for natural products with antimicrobial activity, methanol extracts of the D. crassirhizoma rhizome exhibited antimicrobial activity against some bacteria. Phloroglucinols (albaspidin, aspidin, flavaspidic acids and dryocrassin) and kaempferol acetyl rhamnosides (crassirhizomosides A-C and sutchuenoside A) have been isolated from D. crassirhizoma (Noro et al., 1973; Widen et al., 1996; Min et al., 2001). Recently, acylphloroglucinols isolated from D. crassirhizoma were reported to inhibit fatty acid synthase (Na et al., 2006). In addition, the compound showed a cytoprotective effect against oxidative stress-induced cell damage via catalase activation (Kang et al., 2006). The phloroglucinol composition of 18 species (including subspecies) that belong to Dryopteris Adanson sect. Fibrillosae Ching has been investigated on a world-wide basis (Widen et al., 1996). Phloroglucinols have been observed to have anti-tumor-promoting activity (Govind et al., 1996), a nitric oxide inhibitory effect (Rie et al., 2001), anti-reverse transcriptase activity (Hideo et al., 1991) and antioxidant activity (Lee et al., 2003). Antimicrobial properties of this plant have been reported since Namba (1982) and Do (1993). Antimicrobial activity of phloroglucinols was once reported by Abbey et al. (2000). However, these phloroglucinols were acylated phloroglucinols from Helichrysum caespititium. The objectives of this study were to evaluate the antimicrobial potential of rhizome extracts and flavaspidic acids PB and AB from D. crassirhizoma against Gram-positive and -negative bacteria, fungi and chlorella.

Samples, extraction and isolation of phloroglucinols

Rhizome of Dryopteris crassirhizoma was collected in Mt. Sulak, Korea in July 2002 and identified by Prof.
Bae, College of Pharmacy, Chungnam National University. A voucher specimen (CNU 1011) was deposited in the herbarium of the College of Pharmacy, Chungnam National University. As shown in Fig. 1, the dried rhizomes (1 kg) of Dryopteris crassirhizoma were extracted with methanol (3 L, 48 h ×2) at room temperature, and the extract was concentrated to dryness in vacuo to yield a dark brown syrupy residue (150 g). The methanol extract (150 g) was suspended in H2O (1 L) and then partitioned successively with hexane (1 L × 2), ethyl acetate (1 L × 2), and BuOH (1 L × 2). The dry ethyl acetate extract (80 g) was subjected to column chromatography on silica gel (70-230 mesh, Merck, Germany). A step gradient was used for elution, starting with 10% acetone in hexane and increasing the acetone content by 10% at each step (in 1 L volumes) up to 80% acetone. Eleven 1-L fractions were collected. These fractions were tested by in vitro antimicrobial assays, and the active fractions (fr. 5, 6) were combined. Using methanol as a solvent, the active fraction (8 g) was subjected to column chromatography on Sephadex LH-20 to afford two active compounds: flavaspidic acid PB (1, 300 mg) and flavaspidic acid AB (2, 150 mg), which were characterized by spectral methods (Noro et al., 1973; Do, 1993).

Determination of the chemical structure of phloroglucinols

Melting points were measured on an Electrothermal instrument (Dubuque, IA, USA). UV spectra were obtained on a Milton Roy 3000 spectrometer (Ivyland, PA, USA). 1H- and 13C-NMR spectra were recorded on a DRX 300 MHz instrument (Bruker, Karlsruhe, Germany) with CDCl3 as the solvent. ESI-MS spectra were measured on a JMS 700 mass spectrometer (JEOL, Tokyo, Japan).

Determination of antimicrobial activity

The antimicrobial activity of the crude extracts and purified materials (flavaspidic acids AB and PB) was tested by the paper disc method. All samples were dissolved in a small volume of ethanol. Sterile filter paper discs (Whatman No. 1, 8 mm diameter) were impregnated with 200 µg of each sample (50 µL, 4 mg mL-1) per paper disc and dried under the laminar flow cabinet. The test microorganisms included … KCTC 1940, and Chlorella regularis EML-CR02. The medium for Bacillus subtilis and Staphylococcus aureus was Nutrient agar (pH 7.0) with 5 g peptone, 3 g meat extract and 15 g agar per liter. The medium for Candida albicans was YMPG agar with 3 g yeast extract, 3 g malt extract, 5 g soybean peptone, 10 g glucose and 15 g agar per liter. The medium for Aspergillus flavus was YpSs agar consisting of 4 g yeast extract, 15 g soluble starch, 1 g K2HPO4, 0.5 g MgSO4·7H2O and 15 g agar per liter. The medium for Chlorella regularis was Arnon's A5 medium (pH 6.5) consisting of 1 mL Arnon's A5 solution, 1 g KH2PO4, 1 g MgSO4·7H2O, 0.005 g FeSO4·7H2O, 5 g yeast extract, 20 g glucose and 20 g agar per liter. All assays were performed in duplicate, so that four inhibition zone measurements were obtained for each test combination. These values were averaged to obtain the final inhibitory activity results. For each assay, two control plates were inoculated with ethanol, but without actual extracts, and were treated in the same manner as the test plates.

… and δC 8.1 were indicative of C-11' and C-11, respectively. δH 1.40 [6H, s] and δC 24.7 indicated a gem-dimethyl at C-7 and C-8. In addition, δH 2.05 [3H, s] with δC 7.4, and δH 3.55 [2H, s], were indicative of C-11' and C-12. Thus, the structure of 1 was determined to be flavaspidic acid PB (Fig. 2).
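To illustrate the bookkeeping behind the duplicate disc-diffusion assays described in this section (two plates per test combination, four inhibition-zone readings averaged per combination), here is a minimal sketch. All zone diameters are hypothetical illustration values, not data from this study, and the helper name is our own.

```python
from statistics import mean

DISC_DIAMETER_MM = 8.0  # Whatman No. 1 paper disc diameter

# Each test combination was assayed in duplicate, giving four inhibition-zone
# measurements (two plates x two discs). Values below are hypothetical (mm).
measurements = {
    ("flavaspidic acid PB", "S. aureus KCTC 1928"): [16.0, 15.5, 16.5, 16.0],
    ("flavaspidic acid PB", "E. coli"):             [9.0, 9.5, 8.5, 9.0],
    ("ethanol control",     "S. aureus KCTC 1928"): [8.0, 8.0, 8.0, 8.0],
}

def inhibition_summary(zones):
    """Return (mean zone diameter, net inhibition beyond the disc itself)."""
    avg = mean(zones)
    return avg, max(0.0, avg - DISC_DIAMETER_MM)

for (sample, organism), zones in measurements.items():
    avg, net = inhibition_summary(zones)
    print(f"{sample} vs {organism}: mean zone {avg:.1f} mm (net {net:.1f} mm)")
```

The ethanol control averaging to the bare disc diameter (net 0 mm) is what justifies attributing any larger zone to the test sample rather than the solvent.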
This was confirmed by comparison of the physiochemical and spectral data with published data (Noro et al., 1973).

RESULTS AND DISCUSSION

For flavaspidic acid AB (2), signals corresponding to a gem-dimethyl at C-7 and C-8 were observed; δH 1.87 [3H, s] and δC 7.9 indicated an aromatic methyl (C-7'). The spectral data mentioned above were similar to those of flavaspidic acid PB (1), but the secondary methyl (C-10) signals in 1 did not appear in 2. Thus, the structure of 2 was determined to be flavaspidic acid AB (Fig. 2). This was confirmed by comparison of the physiochemical and spectral data with published data (Noro et al., 1973; Do, 1993).

The methanol and ethyl acetate extracts from the Dryopteris crassirhizoma rhizome were highly active against bacteria. As shown in Table I, the activity depended on the kinds of microorganisms tested (data not shown). Interestingly, the flavaspidic acids were considerably more active against the MRSA bacterium, Staphylococcus aureus KCTC 1928, than against Staphylococcus aureus KCTC 1916. Also, flavaspidic acid PB was somewhat more active against Bacillus subtilis than flavaspidic acid AB. However, both compounds were moderately to slightly active against a Gram-negative bacterium, E. coli, and were not active against a fungus, Aspergillus flavus, or an alga, Chlorella regularis.

This study reports the antibacterial activity of plant-derived phloroglucinols. We found that the ethyl acetate fraction of the Dryopteris crassirhizoma rhizome exhibited antimicrobial activity and yielded two phloroglucinols, flavaspidic acid PB and flavaspidic acid AB. The identity of the compounds was first confirmed through interpretation of their spectral characters in comparison with reported data (Noro et al., 1973). Do (1993) once reported that flavaspidic acids PB and AB had an MIC value of 12.5 µg/mL toward Streptococcus mutans OMZ 176. However, detailed antimicrobial activities against other bacteria were not investigated. Recently, MRSA bacteria have become more resistant to vancomycin antibiotics. Interestingly, the flavaspidic acids were highly active against Gram-positive and MRSA bacteria, including Staphylococcus aureus, but not against fungi. Our study revealed that thick-stemmed wood fern extracts may be applied to the development of natural functional products with antibacterial activity. More studies on the antibacterial spectrum, the susceptibility of various bacteria to the compounds, and the mode of action are now under way.
2018-04-03T01:13:59.895Z
2009-05-27T00:00:00.000
{ "year": 2009, "sha1": "6fa662876b941a4b29b07b397b83b3fe8646bfe8", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc7091015?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "a3eeaa271941b86bb7865a44add8d480ec3b2c9f", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
251959433
pes2o/s2orc
v3-fos-license
The Impact of COVID-19 on Older Adults’ Perceptions of Virtual Care: Qualitative Study Background In response to the COVID-19 pandemic, older adults worldwide have increasingly received health care virtually, and health care organizations and professional bodies have indicated that virtual care is “here to stay.” As older adults are the highest users of the health care system, virtual care implementation can have a significant impact on them and may pose a need for additional support. Objective This research aims to understand older adults’ perspectives and experiences of virtual care during the pandemic. Methods As part of a larger study on older adults’ technology use during the pandemic, we conducted semistructured interviews with 20 diverse older Canadians (mean age 76.9 years, SD 6.5) at 2 points: summer of 2020 and winter/early spring of 2021. Participants were asked about their technology skills, experiences with virtual appointments, and perspectives on this type of care delivery. Interviews were digitally recorded and transcribed. A combination of team-based and framework analyses was used to interpret the data. Results Participants described their experiences with both in-person and virtual care during the pandemic, including issues with accessing care and long gaps between appointments. Overall, participants were generally satisfied with the virtual care they received during the pandemic. Participants described the benefits of virtual care (eg, increased convenience, efficiency, and safety), the limitations of virtual care (eg, need for physical examination and touch, lack of nonverbal communication, difficulties using technology, and systemic barriers in access), and their perspectives on the future of virtual care. Half of our participants preferred a return to in-person care after the COVID-19 pandemic, while the other half preferred a combination of in-person and virtual services. Many participants who preferred to access in-person services were not opposed to virtual care options, as needed; however, they wanted virtual care as an option alongside in-person care. Participants emphasized a need for training and support to be meaningfully implemented to support both older adults and providers in using virtual care. Conclusions Overall, our research identified both perceived benefits and perceived limitations of virtual care, and older adult participants emphasized their wish for a hybrid model of virtual care, in which virtual care is viewed as an addendum, not a replacement for in-person care. We recognize the limitations of our sample (small, not representative of all older Canadians, and more likely to use technology); this body of literature would greatly benefit from more research with older adults who do not/cannot use technology to receive care. Findings from this study can be mobilized as part of broader efforts to support older patients and providers engaged in virtual and in-person care, particularly post–COVID-19. 
Introduction

As a result of efforts to limit the spread of the virus that might occur through in-person appointments, the COVID-19 pandemic accelerated the shift to virtual health care. Virtual health care, subsequently, was widely adopted across Canada and beyond [1-5]. Simultaneously, policies at the institutional, national, and international levels flexed to accommodate recommendations on the use of virtual care within existing health care models [6,7]. Virtual care can be defined as "any interaction between patients and/or members of their circle of care, occurring remotely, using any forms of communication or information technologies, with the aim of facilitating or maximizing the quality and effectiveness of patient care" [8]. Virtual care is not limited to a particular technology or platform (eg, it can include the telephone) and is often used interchangeably with "telemedicine" or "eHealth" [8,9]. Prior to the COVID-19 pandemic, virtual care activities, although possible, were not common in Canada [10,11]. Although the COVID-19 pandemic sparked a dramatic increase in virtual care in Canada [1,2] and worldwide [4], questions remain about the quality and role of virtual care in practice [6,12], particularly with older patient populations.

Although Bhatia et al [1] found that older patients were the highest users of virtual care during the pandemic, Senderovich and Wignarajah [13] expressed concerns about the maintenance of the therapeutic alliance between physicians and older patients receiving virtual care (a therapeutic alliance being a patient-doctor relationship that supports positive health outcomes). Prepandemic research in the United Kingdom by Hammersley et al [12] found that older patients were less likely to choose virtual care than were younger patients. The experience of older patients with virtual care is thus of continued interest, both during and after the pandemic. Despite common misconceptions about older adults and technology, a national survey conducted in July 2020 found that 72% of older Canadians feel confident about their ability to use existing technologies, such as smartphones or video calls [14]. In the 3 months prior to the July 2020 survey, 52% of older Canadians accessed virtual care and 79% were satisfied with the virtual care received [14]; the bulk of the virtual care they received was over the telephone. Although studies have investigated the use of virtual care with older adults before (eg, [12,15-18]) and during (eg, [19,20]) the COVID-19 pandemic, this evidence is largely quantitative; there is a lack of qualitative data that reflect the perspectives and experiences of older Canadians accessing virtual care throughout the pandemic. Lopez et al's [21] analysis of older adults' use of technology during the pandemic found a notable increase, including broader adoption of videoconferencing software/video calls. Teti et al [22] emphasize the importance of reflecting qualitative data throughout the COVID-19 pandemic to understand how COVID-19 impacts populations as a social event as well as a medical pandemic. Qualitative approaches play a vital role in understanding social responses to pandemics, as they allow us to understand the lived experiences of those who are disproportionately impacted, including older adults [22].
The aim of this study was to use a systematic qualitative study to understand how older adults experienced virtual care during the pandemic and to include their perspectives on virtual care as an alternative or supplement to in-person care. Organizations, such as the Canadian Medical Association (CMA), have indicated that virtual care is "here to stay," even if/when no longer necessitated by the pandemic. If virtual care is indeed here to stay, our interviews with older adults will contribute to broader discussions on how and when to use virtual care in a manner that reflects their experiences, wishes, and perspectives.

Study Design

This research is part of a larger study [21] in which we used a longitudinal qualitative study [23] approach to listen to older adults speak about their social connections and experiences of digital connectivity early (summer of 2020) and later (winter and early spring of 2021) in the COVID-19 pandemic. Our research team is situated in Ontario, Canada, and eligible participants included any older Canadian (aged 65 years or more) able to complete an English-language telephone/video interview and provide informed consent.

Ethical Considerations

We received ethics clearance from the University of Waterloo's Office of Research Ethics (ORE #42265).

Recruitment

A purposive sampling strategy [24] was used to recruit a diverse sample of older adults (eg, rural/urban; community/assisted living; diverse abilities, socioeconomic profiles, genders, and ethnicities). Recruitment during the beginning of a global pandemic that was disproportionately impacting older adults was challenging, and we used several recruitment approaches to access diverse older adults. We recruited using social media (eg, Twitter), emails to large established groups with older adult members (blinded for review), telephone calls to older adults within our personal networks (ie, asking our personal contacts to share study materials within their networks), and promotion of our study via teleconferences with older adult participants. In total, 20 older adults completed the baseline in-depth interviews in the spring of 2020, which coincided with the first wave of COVID-19 in Canada. In the spring of 2021, follow-up interviews were conducted with 12 (60%) participants from the baseline sample, coinciding with the second wave of COVID-19 in Canada. The remaining 8 (40%) participants did not take part in the follow-up interviews because of death (n=1, 12.5%), because they could not be reached (n=3, 37.5%), or because they declined to participate in a second interview (n=4, 50%). Recruitment for follow-up interviews coincided with a particularly challenging period of the pandemic (ie, stringent lockdowns; rising case counts and deaths, especially among older adults; and the darker, bleaker winter months); 3 (75%) of the 4 participants who declined to follow up specifically expressed that this was because of the challenging period and timing. Participant characteristics are summarized in Table 1.
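As a quick check on the retention figures reported above, the following sketch recomputes the follow-up rate and the distribution of reasons for attrition. The counts come straight from the Recruitment paragraph; only the variable names are ours.

```python
# Recompute retention and attrition percentages from the Recruitment section.
baseline_n = 20
followup_n = 12

attrition_reasons = {
    "death": 1,
    "could not be reached": 3,
    "declined second interview": 4,
}

lost_n = baseline_n - followup_n
assert lost_n == sum(attrition_reasons.values())  # 8 participants lost

print(f"retention: {followup_n}/{baseline_n} = {100 * followup_n / baseline_n:.0f}%")
for reason, n in attrition_reasons.items():
    print(f"  {reason}: {n}/{lost_n} = {100 * n / lost_n:.1f}% of attrition")
```

Running this reproduces the 60% retention rate and the 12.5%/37.5%/50% split of attrition reasons reported in the text.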
Data Collection

All interviews were conducted over the telephone or via videoconferencing software. Baseline interviews lasted an average of 53 minutes (minimum 24 minutes; maximum 74 minutes); follow-up interviews lasted an average of 60 minutes (minimum 21 minutes; maximum 112 minutes). The interview questions (Multimedia Appendix 1) were developed in consultation with older adults from our "Seniors Helping as Research Partners" group and informed by our interdisciplinary research team, which includes experts in systems design engineering for older adults, recreational therapy, social gerontology, and designing health care systems for older patients. The first two-thirds of the interview focused on participants' use of and access to technology, comfort with technology, etc, and the final third of the interview focused specifically on virtual health care. Interviews and analytic debriefs were digitally recorded, run through otter.ai transcription, and then cleaned and anonymized by research assistants using protocols established by our team. Anonymizing the transcripts included the assignment of a pseudonym for each participant. Additional details about the overarching study, recruitment, and data collection may be found here [21].

Data Analysis and Strategies

Our team-based analysis (ie, multiple members of the research team, drawing on different disciplinary perspectives to collectively analyze the data; see Guest and MacQueen [25]) process used a framework analysis approach [26] that included the following steps:

• Step 1 (familiarization): Each transcript was read by 1 of 3 coauthors (LA, CT, and AW), who were the same coauthors who conducted the interviews.

• Step 2 (development of a coding framework): All coauthors used the initial read of the data, field notes, and debriefs to develop an initial set of thematic codes.

• Step 4 (summarizing and synthesizing): The coding structure was further refined through team analysis meetings and shared coding memos to consolidate the most salient themes presented later.

Rigor strategies included reflexive memoing, an audit trail within NVivo (QSR International) [28], and team-based examination of the data and each step of the analysis [25]. We also reviewed our findings and interim analysis with 4 participants (ie, member checking and reviewing our interpretations of the data) via an online focus group that was recorded and transcribed to inform the analysis.

In discussing their experiences with virtual care during the pandemic, older adults broadly shared 3 high-level themes: (1) their experiences accessing health care during the pandemic, (2) their perceived benefits and limitations of virtual care, and (3) their perspectives on when virtual care is acceptable and appropriate. In the quotes presented later, the suffixes included after the patient pseudonym and biographical information (B and F) refer to baseline and follow-up interviews, respectively. Participants often shared their perspectives on virtual care in the first interview and in the second replied, "Like I said last time…"; thus, more of the presented quotes are from the baseline interviews than from the follow-up interviews. There was not a notable change in participants' perspectives on virtual care across the 2 time points.
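As a sketch of the bookkeeping a framework analysis like the one above can involve, the snippet below indexes transcript excerpts under thematic codes. The code labels and the data structure are our own illustration, not the study's actual NVivo codebook; the two excerpts are drawn from quotes presented later in the article.

```python
from collections import defaultdict

# code -> list of (participant, interview wave, excerpt)
framework = defaultdict(list)

def index_excerpt(code: str, participant: str, wave: str, excerpt: str) -> None:
    """File one transcript excerpt under a thematic code."""
    framework[code].append((participant, wave, excerpt))

index_excerpt("benefits/convenience", "Nancy", "B",
              "got the results of the x-ray over the phone")
index_excerpt("limitations/nonverbal", "James", "B",
              "you do miss some of the eye contact and the body language")

# Summarizing/synthesizing step: count indexed excerpts per code
for code, entries in sorted(framework.items()):
    print(f"{code}: {len(entries)} excerpt(s)")
```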
Experiences With Health Care During the Pandemic

Participants described their health care experiences during the pandemic in terms of issues with accessing care, and their pandemic experiences of in-person and virtual care. Although we did not specifically probe for issues with accessing care, many participants mentioned that they had not seen their primary care providers for months; some had not contacted their providers since the start of the pandemic:

No, because I haven't been in touch with them since well… [Richard, 76 years, male, B]

Since March. [Joan, 66 years, female, B]

No, I haven't seen my doctor this year. [Richard, 76 years, male, B]

At baseline, Susan, aged 82 years, expressed that older adults are reluctant to access in-person care because they are weighing the risk of contracting COVID-19 against the risk of missing an appointment:

And, and the other thing is that people hesitated maybe too much sometimes to go to the hospital. Like you said, people full of coughing in an emergency room. But, you know, there are situations where people might have delayed going and they needed to go…Yeah, they're really…it's assessing the risk. Like, you know, maybe I'm better to stay home than to get COVID. [Susan, 82 years, female, B]

The disruption and discontinuity of care resulting from the COVID-19 pandemic caused participants to feel fear and anxiety about the frequency and quality of care they received. However, in comparison to the risk of contracting COVID-19 when accessing in-person care, they seemed to fear the virus more than the potential complications that could result from missing so many in-person appointments.

Participants felt that they accessed in-person care less regularly than they would have prior to the pandemic. In-person care was mostly accessed for emergencies, specialist services (eg, oncology, physiotherapy), and services that could not be accessed online (eg, diagnostic imaging, blood work). When in-person care was accessed, some aspects of the care were organized virtually. At baseline, Nancy, aged 66 years, had an x-ray performed in person, with the results of the x-ray communicated virtually:

I've had one X-ray and that's about it, I think. [Nancy, 66 years, female, B]

Okay. And then you've got the results of the x-ray over the phone? [AW]

That's right, yeah. [Nancy, 66 years, female, B]

Virtual Care

When asked about their experiences accessing virtual health care during the pandemic, most participants were able to discuss a time when they accessed care virtually either at baseline or follow-up. Virtual care usually involved phone calls for completing intake, scheduling, and accessing appointments; text or email messages for sending photos of health concerns; and emails/phone calls for receiving requisitions and test results. Few participants accessed virtual care in the form of video calls; most virtual care had been provided over the telephone, with some referrals or results (eg, of bloodwork) being communicated over email. In general, participants were satisfied with the virtual care they received from their family physicians. Participants also felt their relationships with regular providers were not negatively impacted by the pandemic; participants maintained their patient-doctor relationships virtually despite changes in care delivery and frequency. Participants were less comfortable with certain tasks being performed virtually, such as being prescribed a new medication, receiving a diagnosis, or meeting with a specialist for the first time.
Perceived Benefits of Virtual Care

When prompted about the benefits of virtual care, all participants identified at least 1 positive aspect of virtual care compared to in-person care. The most common perceived benefit was convenience, which was discussed by most participants. Other commonly cited benefits were improved safety due to the avoidance of unsafe situations associated with in-person care (eg, contracting a virus) and the efficiency of the health care provider. Table 2 summarizes the perceived benefits of virtual care identified by participants.

Perceived Limitations of Virtual Care

Participants identified many aspects of virtual care that they perceived to be more challenging or less effective compared to in-person care. Many identified limitations they believed would impact others (eg, the challenges less tech-savvy older adults would face while accessing and using technology, lack of access to technology for all older adults) but maintained that virtual care was ideal for themselves and had few negative aspects. The most cited limitations of virtual care describe a lack of nonverbal communication (eg, facial expressions and body language) and limited opportunities for physical examination. Other limitations included challenges with older adults accessing and using technology, challenges with patients' and doctors' ability to express themselves verbally (eg, in telephone-only appointments), negative impacts on care coordination and continuity, and the potential exacerbation of the social isolation of older adults (ie, for some isolated older adults, in-person visits to primary care are an essential piece of their limited social lives). One participant expressed concerns about accommodations for older adults who require language interpretation services while accessing health care. Participants' perceived limitations are presented in Table 3.

Participants highlighted an important caveat to our interpretation of the data: a benefit of virtual care for one older adult can be a limitation of virtual care for another. For example, although one person may appreciate the efficiency of a virtual care appointment, another may deeply miss the interpersonal and social interactions that accompany an in-person visit.
Table 3 (excerpt):

Limitation: Lack of nonverbal communication and body language. Quote: "…you do miss some of the eye contact and the body language, and I make the point that when people communicate, they often talk about the words only being about 7%, the tone being 38% of the…body language being 55%, so email or a phone, you might get the tone but you don't get the body language and that's, and, sometimes that's very important. I know, how many times that I noticed body language, that I would ask another question, and bingo, the real problem would come out, where it wouldn't have come up if you hadn't been able to observe the body language" [James, 75 years, male, B]

Limitation: Challenges accessing and using technology. Description: Virtual care can be difficult to access for older adults who either do not have access to sufficient technology at home or do not know how to use the technology they have to engage in virtual care.

Perspectives on When Virtual Care Is Acceptable and Appropriate

Subthemes related to participant perspectives on the acceptability and appropriateness of virtual care included receptiveness to virtual care in some scenarios, preferences for the future of virtual care, and the willingness of older adults to adapt to virtual care and the supports required for them to do so.

The Future of Virtual Care and the Preference for a Hybrid Model (Some In-Person)

When participants were asked how they would like the health care system to operate postpandemic, participants presented 2 main preferences: approximately half expressed their preference to return to a health care system that provides the majority of services in person, while the other half preferred to retain some aspects of the COVID-19 era virtual care and reintroduce aspects of in-person care to create a hybrid system of health services.

Participants who preferred to return to an in-person model of health care were not necessarily opposed to the use of virtual care. Some agreed that, although virtual care was useful during the COVID-19 pandemic, they would prefer to access in-person care whenever possible:

Something like prescription renewals will be convenient to have them continue through the pharmacy, to my doctor, that would be very convenient. But aside of that, I'd rather see my physician in person. [Nancy, 66 years, female, B]

Many participants expressed support for a hybrid health care model that includes aspects of both virtual and in-person care:

But if it went back, it went back but in a modified way, like it's not all one or all the other. It's not all phone or all office, like it could be a mix. [Patricia, 82 years, female, B]

Helen, aged 77 years, expressed that although she preferred a hybrid model of care, it would need to be carefully organized and implemented to be effective:

But it needs to be carefully thought of. And I've always been hesitant about virtual care, 'cause I don't want to see that as an instead…yes, virtual care yes, but it has to be in addendum. It has to be something in between. It's very useful to check up on something. [Helen, 77 years, female, B]

Although many participants supported a future that incorporated aspects of both virtual and in-person care models, they were concerned about how this would be funded at a system level, whether doctors would find it useful or difficult to manage, and how virtual care would be organized and regulated in practice.
Adapting to Virtual Care

Many participants felt that older adults would be proactive in learning the technologies necessary to support themselves while accessing virtual care, as health care is viewed as a "priority" or "essential" and not an option like other technologies that might be used for entertainment, etc. However, participants felt strongly that a shift toward virtual care must include meaningful and senior-friendly training and supports that will allow older adults to learn to use the technologies required, as well as enable access to the system using technologies with which older adults feel more comfortable. Although many participants noted a need for technology training and supports for older adults, several noted that efforts aimed at improving virtual care should also be focused on training for providers, not just patients:

I think the…the…to take advantage of those types of situations I think technology use, I think somebody should actually encourage the GP and their registered…their nurses or their receptionist to be more proficient in these technologies. I think seniors when there's a need, they'll do anything to learn it. [Geraldine, 72 years, female, B]

Although many participants were reconciled to virtual care being a major component of their health care in the future, they saw a need for related training and support for both providers and patients.

Principal Findings

We identified 3 high-level themes in our interviews with older adults about their virtual care use during the pandemic. Older adults shared (1) their experiences with virtual and in-person health care during the pandemic, (2) their thoughts on the benefits and limitations of virtual care, and (3) their opinions on when virtual care is most appropriate. Consistent with the results of the AGE-WELL [14] survey, most of the participants in our study experienced some form of virtual care access, primarily via telephone or online, with fewer participants having accessed care via video. Participants expressed reluctance to attend in-person visits during the pandemic, with in-person care accessed mainly in emergencies or for services that were not available virtually.
Comparison With Prior Work

Prepandemic studies of virtual care (eg, [18]) have found both benefits and limitations; this was also the case for the participants in this study. Importantly, most participants felt they were able to maintain their patient-doctor relationship despite the change in the mode of care delivery, thus alleviating some of the concern raised by Senderovich and Wignarajah [13] about maintaining the quality of the therapeutic alliance. Our study participants described the convenience of virtual visits as well as increased safety, including the avoiding of unsafe travel conditions, similar to findings by Elliott et al [29]. Participants also felt virtual care would be more time efficient for the provider, but we note that some studies of virtual care have not found cost-saving benefits [30]. In contrast, our study participants recognized the lack of physical, hands-on examinations in a virtual care appointment, as was also found in studies by Breton et al [31] and Mao et al [32]. Participants also noted limitations in terms of compromises in both verbal and nonverbal communication. This is consistent with the finding of Hammersley et al [12] that there was less information sharing in virtual visits, although these authors also noted that the virtual care visits they studied revealed somewhat greater efforts toward building rapport with the patient. Some limitations of virtual care might be mitigated if video-and-voice appointments are used. Although video appointments are not a complete substitute for a tactile examination of the body, they enable synchronous visual examination, which may help alleviate patient concerns that providers may miss something. Video calls may also enable both patients and providers to interpret nonverbal cues and facial expressions more accurately. Conversely, it may be more difficult for providers to see the patient in a video call compared to a photo of an affected area (sent via email or text) due to the wide range of devices that older adults and clinicians use, variations in connectivity or access to a reliable internet connection among patients, and compatibility between devices. In addition, video visits make additional demands on the patient, who must be able to get online and manage the technology, which may be difficult due to disability or lack of experience with technology or a stable internet connection [32,33].
Limitations of This Study

First, our study is bound to a specific time and rooted in the perspectives of a small sample of older Canadians and as such may not be readily transferable to other settings. We recognize this as a limitation and that our findings only reflect the perspectives of the 20 interviewees. Future research with larger samples of older adults is warranted. Second, we recognize our sample is undoubtedly overrepresentative of individuals who have the interest, access, and privilege to engage in new technologies. Although we specifically sought out individuals from a range of cohorts, living arrangements, and ethnic groups, our recruitment strategies (which had to be mindful of social distancing) mostly connected us with privileged individuals who were already online, had access to email, and were able to complete a voluntary research interview (ie, they had the time and interest to do so and, at the very least, had a telephone). These advantages will be reflected in our results, and this body of literature would greatly benefit from more work with older adults who do not/cannot use technology to receive care. In the future, recruitment options that do not rely on newer technologies should be used to connect with individuals who are less tech-savvy (eg, radio, mail-based, and in-person recruitment).

Future Directions

Falk [34] has argued that virtual care may reduce inequities for some older persons, such as those living in remote communities, but at the same time might exacerbate inequities through avoiding direct service to these regions. Future research, including that of this team, must actively reach out to support those older adults on the underrepresented side of the digital divide [35-37], particularly as the United Nations calls for all nations to close these digital divides [38]. As suggested by participants in this study, support strategies should target both older adults and providers; Chen et al [39] found that training geriatric care professionals on virtual care technologies prior to the pandemic helped ease the transition to virtual care. Multiple virtual care resources have been designed for older adults in Canada, including appointment checklists (eg, [5]) and supportive liaisons to help navigate particular technologies, as implemented at Women's College Hospital [40]. Technology-based interventions can also improve access for marginalized groups with less technical experience by simplifying user interfaces and workflows on virtual care platforms to increase usability [40,41]. Efforts should be made to collaborate with older adults when designing and implementing such strategies in order to maximize their usefulness and relevance [40,42,43]. This can be accomplished by engaging older adults in designing technology and virtual care systems, training providers, and research/program evaluation (eg, through advisory committees, participatory research, codesign, etc) [40].
Conclusion

The COVID-19 pandemic has been a major catalyst for the adoption of virtual care in Canada [6]. Our study confirmed that the potential benefits of virtual care for older adults are numerous; despite barriers to accessing virtual care, many older adults perceive benefits and are open to continued use of virtual care after the pandemic. Our study also found many limitations of virtual care, and a consensus that virtual care should be an addendum to the health care system, rather than its main delivery mechanism. These findings would thus call into question policies, such as the United Kingdom's National Health Service (NHS) plan for digital-first primary care for every patient [44]. As we transition to a postpandemic world, older adults must be included in discussions on the design and implementation of virtual care options. Concerns related to privacy and confidentiality have been highlighted in other studies [6,45] but were not significantly present in our findings; this could be an explanation for why some older adults were less comfortable accessing virtual care.

In this study, we presented data from a small sample of older adults from Canada detailing their experiences with virtual care during the pandemic, their perceptions on the benefits and limitations of virtual care, and their willingness to engage in virtual care. Future dissemination of virtual care options should ensure that older adults' views, preferences, and circumstances are considered and that accommodations are made for those whose use of virtual care is limited by disability or discomfort with the technology. The findings can also be used to inform future studies on the use of virtual care by older adults, as providers and patients continue to adapt to both the potential and pitfalls of this mode of care delivery.

Table 2. Summary of the perceived benefits of virtual care use for older adults.

Benefit: Convenience. Description: Virtual care is more convenient than in-person care due to the ease of communication, including the capacity to communicate by a phone or video call instead of making a trip to the provider's office; the ability to send and receive documentation, including referrals, requisitions, and test results virtually; and the time that is saved when not sitting in waiting rooms. Quotes: • "…something like a requisition for an x-ray, that certainly…I didn't have to worry about handling the requisition. It was just transferred electronically. And, and when I appeared at the x-ray lab, I just, it was all already there. That was convenient." [Nancy, 66 years, female, B^a] • "Yes. And then you don't have to go to see her or him. You can just use your phone. And then that's easier." [Lily, 77 years, female, B]

Benefit: Safety. Quotes: • "It means that you, that people, elderly people particularly, don't have to leave their home, which in some…Because, sometimes, if one person is really ill, and they need somebody to go with them and then it, you wonder sometimes if you're hurting your health more by going than by staying home sort of thing." [Susan, 82 years, female, B] • "And as you, as you get older, and now we go back to winter, you know, you really don't want to drive in winter, hence the reason why we go away for 3 months…Uh, you know you're risking, as I'm saying, you're getting older, you're not as quick on the draw as far as driving is concerned and so on. So, you're risking somebody's life really going in just to do that. Whereas if you can get it on the emailer…then it makes more sense" [Katherine, 74 years, female, B]

Benefit: Efficiency. Description: With the introduction of virtual care, providers can improve the efficiency of their practices. A couple of participants also highlighted that sharing information and engaging in appointments with larger care teams can be easier with virtual care.
• "And I think probably we're going to end up going that way a little bit.It does free up doctors to deal with bigger problems, maybe.And I, I, as I say, I have not used it.So, I really don't have any personal experience about it.But my understanding from people that I know that have phoned them, the doctor generally gets back to them ASAP.And, my one daughter has a doctor friend and, the doctor seems not to be as busy."[Shirley, 77 years, female, B] • "I think, well, especially if they were going to use Zoom or something like that, if you wanted to talk to the doctor face to face and actually see her, I think that would be great if they use Zoom rather than having us go in every time for something simple… It opens the door for them to take, as I said before, to take people in that really, really, really need to see the doctor.It saves us time, saves her time.I think there's a lot of pros."[Katherine, 74 years, female, B] a B: baseline. Table 3 . Summary of the perceived limitations of virtual care use for older adults.
2022-09-01T15:17:31.502Z
2022-04-06T00:00:00.000
{ "year": 2022, "sha1": "bf665ed5db62c4b362201f37e88285ff0cf5b9a0", "oa_license": "CCBY", "oa_url": "https://aging.jmir.org/2022/4/e38546/PDF", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "437daf5644637d93dd0108151bd29ae66ef790d7", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
256803225
pes2o/s2orc
v3-fos-license
Carotid Plaque Vulnerability Diagnosis by CTA versus MRA: A Systematic Review

Stenosis grade of the carotid arteries has been the primary indicator for risk stratification and surgical treatment of carotid artery disease. Certain characteristics of the carotid plaque render it vulnerable and have been associated with increased plaque rupture rates. Computed tomography angiography (CTA) and magnetic resonance angiography (MRA) have been shown to detect these characteristics to a different degree. The aim of the current study was to report on the detection of vulnerable carotid plaque characteristics by CTA and MRA and their possible association. A systematic review of the medical literature was executed, utilizing the PubMed, SCOPUS and CENTRAL databases, according to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) 2020 guidelines. The study protocol has been registered to PROSPERO (CRD42022381801). Comparative studies reporting on both CTA and MRA carotid artery studies were included in the analysis. The QUADAS tools were used to assess the risk of bias of the included diagnostic imaging studies. Outcomes included carotid plaque vulnerability characteristics described in CTA and MRA and their association. Five studies, incorporating 377 patients and 695 carotid plaques, were included. Four studies reported on symptomatic status (326 patients, 92.9%). MRA characteristics included intraplaque hemorrhage, plaque ulceration, type VI AHA plaque hallmarks and intra-plaque high-intensity signal. Intraplaque hemorrhage detected in MRA was the most frequently described characteristic and was associated with increased plaque density, increased lumen stenosis, plaque ulceration and increased soft-plaque and hard-plaque thickness. Certain characteristics of vulnerable carotid plaques can be detected in carotid artery CTA imaging studies. Nevertheless, MRA continues to provide more detailed and thorough imaging. Both imaging modalities can be applied for comprehensive carotid artery work-up, each one complementing the other.

Introduction

Extracranial carotid artery disease is historically reported to be responsible for approximately 15-20% of acute ischemic strokes, amaurosis fugax and transient ischemic attacks (TIA) [1]. High-grade stenosis of the extracranial carotid artery has been considered the main parameter justifying surgical revascularization with either carotid endarterectomy (CEA) or carotid artery stenting (CAS). As of 2015, a total of 34 guideline statements from numerous cardiovascular societies have endorsed lumen stenosis, together with symptomatic status and surgical risk, as the main factors guiding surgical intervention [2].

Subsequently, imaging modalities applied towards carotid artery disease evaluation, including duplex ultrasound (DUS), computed tomography angiography (CTA) and magnetic resonance angiography (MRA), have focused on detecting these characteristics in addition to the degree of lumen stenosis [7-9]. The benefits and drawbacks of each imaging modality, regarding their diagnostic accuracy in detecting these characteristics, have not been comparatively assessed to produce robust results regarding the superiority of one method over the other. Carotid artery CTA provides swift, detailed imaging of the extracranial and intracranial carotid artery systems, while contrast media administration aids in the high-accuracy detection of atherosclerotic plaques.
However, specific MRA investigation protocols provide more detailed imaging due to improved spatial resolution, fewer saturation effects, less intravoxel dephasing, and better evaluation of the vessel lumen and plaque characteristics. While sonographic evaluation of the carotid arteries is important in the diagnosis and management of cerebrovascular events, CTA and MRA are often fundamental in cases requiring surgical intervention. The aim of the current review was to systematically search and evaluate the available studies comparing the accuracy of CTA and MRA in diagnosing vulnerable carotid plaque characteristics.

Review Protocol

The study protocol for the current review has been registered to PROSPERO (CRD42022381801). The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) 2020 Guidelines for Systematic Reviews and Meta-analyses were followed [10]. Randomized and non-randomized comparative observational studies, published up to October 2022, reporting on symptomatic and/or asymptomatic carotid plaque vulnerability characteristics using both CTA and MRA imaging were considered eligible. Studies reporting solely on one of the two imaging modalities were excluded from the analysis. Case reports, case series, and studies reporting on experimental diagnostic models or not reporting on humans were not considered eligible. Language was not an exclusion criterion, given that an English copy of the article was made available. Scientific Council approval in terms of ethical considerations was not required due to the nature of the study. Data extraction and methodological assessment were executed by two investigators (K.D. and P.N.). Any disagreement was resolved after a discussion with a third investigator (G.K.). A full-text review of the eligible studies was conducted, respecting the inclusion and exclusion criteria accordingly.

Data Extraction

A standard Microsoft Excel spreadsheet extraction file was developed. Data extraction included general information (article author, title, year of publication, journal of publication, study type, country of origin, study aim, lesion definition and researcher experience on carotid artery lesion diagnosis). Additionally, clinical information was collected, including the patient number, age, male-to-female ratio, symptomatic or asymptomatic status, symptom type (stroke, TIA and amaurosis fugax), number of plaques evaluated, type of imaging modalities applied, time interval between imaging studies performed, MRA and CTA technical characteristics, and the sensitivity, specificity, positive prognostic value and negative prognostic value for the diagnosis of vulnerable carotid plaque characteristics with either one of the included imaging modalities, wherever available. Carotid plaque vulnerability characteristics included the stenosis rate, intraplaque hemorrhage (IPH), mean plaque density, mean soft-plaque density and mean hard-plaque density. In cases where statistical significance was observed, it was reported accordingly.

Quality Assessment

The risk of bias of the studies included in the final analysis was assessed through the application of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2 and QUADAS-C) tools, which are primarily used for quality assessment and applicability in systematic reviews and meta-analyses regarding the accuracy of diagnostic studies [12].
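As a sketch of how the extraction spreadsheet described above might be represented programmatically, the record below mirrors the fields named in the Data Extraction paragraph. The class, field names, and the example values are our own illustration, not part of the published protocol.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ExtractionRecord:
    """One row of the data-extraction sheet (fields mirror the review protocol)."""
    author: str
    year: int
    country: str
    study_type: str
    n_patients: int
    n_plaques: int
    symptomatic_n: Optional[int] = None
    imaging_modalities: Tuple[str, ...] = ("CTA", "MRA")
    interval_between_scans_days: Optional[int] = None
    sensitivity: Optional[float] = None   # for vulnerable-plaque detection, if reported
    specificity: Optional[float] = None
    notes: str = ""

# Hypothetical example row (placeholder values, not extracted data):
row = ExtractionRecord(author="Example", year=2020, country="N/A",
                       study_type="retrospective comparative",
                       n_patients=50, n_plaques=90, symptomatic_n=45)
print(row.author, row.n_patients, row.imaging_modalities)
```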
The tools consist of four key domains, including patient selection, index testing, reference standard, and flow and timing, and are applied in four phases. Following application of the tools to each study, a "low", "high" or "unclear" risk-of-bias rating is produced. Quality assessment was carried out by two independent investigators (K.D. and P.N.). In case of disagreement, a third author was consulted (G.K.).

Definitions

Carotid artery stenosis severity was graded in accordance with the NASCET criteria in all included studies [13]. The American Heart Association classification of atherosclerotic plaque was used in one of the included studies [14].

Statistical Analysis

The heterogeneity in outcome reporting did not allow for a quantitative analysis of the data. Thus, only a descriptive review of the data was executed.

The included studies incorporated 377 patients and 695 carotid plaques. Four studies provided data regarding patient age (median age: 70.1 ± 9.2 years) and sex (357 patients, 70% males) [8,9,16,17]. Exclusion criteria reported in the studies involved previous carotid artery surgical interventions (carotid endarterectomy and carotid artery stenting), complete carotid artery occlusion, any major contraindication for CTA or MRA (established contrast-medium anaphylactic reactions) and low-quality produced imaging studies. Study characteristics are presented in Table 1. Four studies incorporated data regarding the symptomatic status of the carotid plaque (326 patients, 92.9% symptomatic) [8,9,15,17]. Only one study referred to the presence of stroke or TIA (34 symptomatic patients, 88.2% stroke) [8]. No study reported on the severity of cerebrovascular events (Table 2).

All five studies included patients who underwent both CTA and MRA of the carotid arteries, while one study included patients who underwent digital subtraction angiography (DSA) as the reference imaging modality, as well as both CTA and MRA. MRA, CTA and DSA were utilized as the control imaging modality in three, one and one study, respectively. All studies incorporated blinded evaluation of the produced imaging studies and fully disclosed the number and experience of the researchers who evaluated the imaging studies. The time intervals between the applied imaging modalities and data regarding the technical aspects of both MRA and CTA, as well as the use of contrast medium, were reported (Table 3). In terms of the stenosis evaluation, all studies reported their outcomes based on the NASCET criteria using CTA measurements [8,9,15-17].
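Since all included studies graded stenosis with the NASCET criteria, a short sketch of that calculation may be useful: NASCET percent stenosis compares the minimal residual lumen with the diameter of the normal distal internal carotid artery. The severity bands below (mild <50%, moderate 50-69%, severe >=70%) follow the commonly used NASCET-based categories; the function names and the example measurements are ours.

```python
def nascet_percent_stenosis(min_lumen_mm: float, distal_ica_mm: float) -> float:
    """NASCET % stenosis = (1 - minimal residual lumen / normal distal ICA) * 100."""
    if distal_ica_mm <= 0:
        raise ValueError("distal ICA diameter must be positive")
    return max(0.0, (1.0 - min_lumen_mm / distal_ica_mm) * 100.0)

def severity_category(percent: float) -> str:
    """Commonly used NASCET-based bands: mild <50%, moderate 50-69%, severe >=70%."""
    if percent < 50.0:
        return "mild"
    if percent < 70.0:
        return "moderate"
    return "severe"

# Example: 2.1 mm residual lumen against a 6.0 mm distal ICA diameter
p = nascet_percent_stenosis(2.1, 6.0)
print(f"{p:.0f}% stenosis -> {severity_category(p)}")  # 65% -> moderate
```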
Four studies reported on the degree of stenosis, incorporating 585 carotid plaques (mild stenosis: 56.5%, moderate stenosis: 20%, severe stenosis: 23.5%) [8,9,15,17]. Anzidei et al. compared MRA and CTA findings with DSA as the reference imaging modality. They incorporated two different types of MRA protocols (FP: first-pass; SS: steady-state) and compared the diagnostic accuracy of the two imaging modalities based on the DSA findings. The main carotid plaque vulnerability characteristic assessed was the presence of a carotid ulcer. CTA and SS MRA proved equivalent in ulcer detection, while FP MRA proved inferior, although without a statistically significant difference between modalities (p > 0.05) [15]. Gupta et al. examined the intraplaque high-intensity signal (IHIS) as a characteristic of IPH in MRA studies and its association with CTA findings. Specifically, the mean soft-plaque thickness on CTA was significantly higher in plaques with versus without IHIS presence (4.47 vs. 2.3 mm, p < 0.0001). In contrast, the mean hard-plaque thickness on CTA was greater in plaques without versus with IHIS presence (2.09 vs. 1.16 mL, p = 0.0134) [8]. Eisenmenger et al. incorporated a magnetization-prepared rapid gradient-echo (MPRAGE) protocol for diagnosing IPH and investigated its association with adventitial calcification and an internal soft plaque (rim sign), adventitial pattern, stenosis, maximum plaque thickness, ulceration and intraluminal thrombus on CTA. Specifically, IPH-positive plaques in MRA studies were characterized by stenosis with a higher mean NASCET percentage (53.9% vs. 24.9%, p < 0.001), higher mean maximum plaque thickness, higher mean soft-plaque thickness (5.26 mm vs. 2.99 mm, p < 0.001) and higher mean hard-plaque thickness (2.97 vs. 1.91 mm, p = 0.002). Based on the aforementioned CTA characteristics, IPH observed in MRA studies was mainly associated with the presence of the rim sign in addition to increased soft-plaque thickness. This specific pattern showed excellent IPH prediction (area under the curve: 0.94) [17]. MRA and associated CTA vulnerable plaque characteristics are reported in Table 4.

Risk of Bias

Time intervals between the application of the two imaging modalities were comparable and thus introduced a low risk of bias in that category. Moreover, the disclosure of the number and experience of the researchers assessing the corresponding imaging studies of each patient introduced a low risk of bias. However, the incorporated studies were characterized by a high risk of bias, mainly regarding the absence of randomization in the patient selection process. Additionally, a non-systematic method of lesion description and matching between the two imaging modalities also introduces a certain degree of bias, as only the description of the degree of lumen stenosis, based on the NASCET criteria, was common among the included studies. Finally, the non-disclosure of some quintessential patient characteristics, such as age, sex, symptomatic status and cerebrovascular event severity (stroke, TIA, amaurosis fugax), negatively affects the objectivity of the produced results (Table 5, Figure 2).

Table 4. Association between vulnerable carotid plaque MRA and CTA characteristics as reported in the included studies.

Footnote: P = patient selection; I = index test; R = reference standard; FT = flow and timing; symbols in the figure indicate low, high or unclear (?) risk.
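The area-under-the-curve figure cited for the rim-sign-plus-soft-plaque-thickness pattern can be made concrete with a small sketch: the function below computes the AUC as the Mann-Whitney probability that a randomly chosen IPH-positive plaque scores higher than an IPH-negative one. The composite plaque scores are hypothetical illustration values, not data from Eisenmenger et al.

```python
from itertools import product

def auc_mann_whitney(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie), over all pos/neg pairs."""
    wins = ties = 0
    for p, n in product(pos_scores, neg_scores):
        if p > n:
            wins += 1
        elif p == n:
            ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical composite scores (e.g., rim sign present -> +1, plus scaled
# soft-plaque thickness); NOT data from the included studies.
iph_positive = [1.9, 2.4, 1.1, 2.8, 2.1]
iph_negative = [0.8, 1.2, 1.5, 0.6, 1.0]

print(f"AUC = {auc_mann_whitney(iph_positive, iph_negative):.2f}")  # 0.92 here
```

An AUC near 1.0 means the composite score separates IPH-positive from IPH-negative plaques almost perfectly, which is the sense in which the reported 0.94 indicates "excellent" prediction.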
Discussion CTA and MRA have been the cornerstones, alongside DUS, of imaging of the carotid lesions responsible for cerebrovascular events. DSA has been largely sidelined in recent decades due to its interventional profile and related complications. CTA has proven to be an excellent tool for stenosis evaluation, while MRA provides exquisite detail regarding the morphology of the atherosclerotic carotid lesion [18,19]. Over the last decades, attention has shifted towards plaque characterization beyond carotid lumen stenosis [18-20]. Intraplaque hemorrhage (IPH), plaque ulceration, plaque neovascularity, a thin fibrous cap and the presence of a lipid-rich necrotic core (LRNC), mainly characterized in MRA studies, have been widely associated with cerebrovascular events, even in the absence of >50% carotid luminal stenosis [20]. Observation of those characteristics in CTA studies would greatly reduce the need for further imaging assessment, as well as allow special patient categories (e.g., patients with cardiac pacemakers) to receive a complete diagnostic work-up. However, stratification tools for the diagnosis of vulnerable carotid plaques in CTA studies have yet to be developed, as the literature lacks comparative studies with systematic reporting of outcomes between the two imaging modalities. Studies comparing the detection of vulnerable carotid plaque characteristics are few, while no systematic reviews coherently incorporating the limited available data are currently available in the literature. Thus, we opted to incorporate the available data, aiming for a more coherent report. Vulnerable carotid plaque hallmarks are more accurately distinguished in MRA studies, while some data are available regarding the appearance of these hallmarks in CTA imaging sequences. The current comparative studies suggest that certain CTA findings, including increased soft-plaque thickness, increased total plaque thickness, increased density as well as increased NASCET percentage stenosis, can be associated with vulnerable plaque characteristics detected in MRA studies. However, no consensus currently exists due to the paucity of published studies as well as the heterogeneity of the published results.
Plaque neovascularization and IPH have been studied as complex lesion features predisposing to acute ischemic events [21]. Increased neovascularization is the predecessor in the pathogenesis cascade of IPH, often leading to lesions related to high morbidity and mortality [22]. MRA has proven to have excellent tissue-distinguishing properties, especially for atherosclerotic plaque evaluation, with high diagnostic accuracy for IPH [23,24]. Furthermore, carotid plaques characterized by IPH on MRA have been associated with increased plaque progression, further destabilizing the plaque [25]. In the current review, IPH was associated with increased luminal stenosis as well as increased plaque attenuation and soft-plaque thickness [8,9,16,17]. However, non-comparative studies suggest that IPH could be associated with lower attenuation measurements in CTA studies, providing contradictory data [26]. Thus, currently no standard CTA characteristics can be safely associated with IPH presence as detected in MRA studies. LRNC is another hallmark of vulnerable carotid plaque and has been evaluated in MRA studies of symptomatic and asymptomatic patients. Data suggest that LRNC findings on MRA can predict plaque progression and rupture [27-29]. Its clinical significance relates to treatment individualization, as aggressive medical management of plasma lipid levels in these patients could prove beneficial [29]. While none of the studies included in the current analysis evaluated LRNC, literature comparing CTA and histology findings suggests that mainly large LRNCs can be accurately observed and diagnosed with CTA. The main limiting factor seems to be the overlap in Hounsfield densities between connective tissues and lipids, rendering it difficult to distinguish small lipid cores [30]. Ulceration, predominantly observed in DUS studies, can also be observed in MRA studies. Plaque ulcers, representing a major plaque surface anomaly, are highly related to embolic cerebrovascular events [31]. MRA studies can successfully detect carotid plaque ulcers by utilizing contrast-enhanced modalities; however, detection depends on ulcer orientation as well as the degree of lumen stenosis [32]. CTA studies have associated plaque ulcers with increased lipid volume, increased stenosis degree and decreased calcification proportions [33,34]. Additionally, CTA studies suggest that carotid plaque ulcers generally involve extension of contrast material beyond the vascular lumen into the plaque, usually by at least 1 mm [35]. The density and thickness of the atherosclerotic plaque fibrous cap stratify the risk of plaque rupture, as thin fibrous caps (TFC), usually overlying a necrotic core, are often associated with higher rates of plaque rupture and embolic events [36]. The pathogenesis of thin fibrous cap rupture involves increased inflammation and lipid-core growth [37]. Currently, no threshold for fibrous cap thickness has been universally adopted for characterizing it as "thin". MRA studies suggest that TFC can be accurately detected in multi-sequence imaging studies, as the resolution of fine structures is one of the main strengths of magnetic resonance imaging [38,39]. None of the studies included in the current review evaluated thin fibrous cap presence in MRA or CTA studies.
Detection of TFC in CTA studies has proven difficult, as current technologies do not provide sufficiently detailed imaging data to differentiate a thin fibrous cap from adjacent tissues. The available included studies did not address the association of IPH and ulceration detected in MRA studies with other important, well-known vulnerable carotid plaque hallmarks, including the thin fibrous cap and the lipid-rich necrotic core. Future studies could examine possible correlations between the abovementioned characteristics, which would further provide grounds for better detection of vulnerable carotid plaques in CTA studies. Limitations The current descriptive systematic review has a few limitations, mainly owing to the methodology and structure of the incorporated studies. Most importantly, there are currently no robust head-to-head comparative studies regarding vulnerable carotid plaque detection via CTA and MRA protocols. Furthermore, the retrospective nature of the included studies introduces a certain degree of bias. In addition, the lack of systematic outcome comparison regarding common carotid plaque vulnerability characteristics, as well as the lack of disclosure of symptomatic status, limits the comparability of the outcomes. As knowledge on carotid plaque vulnerability expands rapidly, in parallel with the technical capabilities of diagnostic tools, the incorporation of patient outcomes from almost two decades ago could confound the produced results. Conclusions While vulnerable carotid plaque characteristics are more accurately depicted in MRA studies, CTA shows promising potential in detecting certain vulnerable lesions at risk of causing embolic events. Future comparative studies are essential in order to establish the diagnostic accuracy of these two imaging modalities, as well as the associations between their findings.
Genome-wide candidate regions for selective sweeps revealed through massive parallel sequencing of DNA across ten turkey populations Background The domestic turkey (Meleagris gallopavo) is an important agricultural species that is largely used as a meat-type bird. Characterizing genetic variation in populations of domesticated species and associating these variation patterns with evolution, domestication, and selective breeding is critical for understanding the dynamics of genomic change in these species. Intense selective breeding and population bottlenecks are expected to leave signatures in the genome of domesticated species, such as unusually low nucleotide diversity or the presence of exceptionally extended haplotype homozygosity. These patterns of variation in selected populations are highly useful not only to understand the consequences of selective breeding and population dynamics, but also to provide insights into biological mechanisms that may affect physiological processes important for bringing about changes in phenotypes of interest. Results We observed 54 genomic regions in heritage and commercial turkey populations on 14 different chromosomes that showed a statistically significant (P < 0.05) reduction in genomic variation, indicating candidate selective sweeps. Areas with evidence of selective sweeps varied from 1.5 Mb to 13.8 Mb in length. Out of these 54 sweeps, 23 overlapped at least partially between two or more populations. Overlapping sweeps were found on 13 different chromosomes. The remaining 31 sweeps were population-specific and were observed on 12 different chromosomes, with 26 of these regions present only in commercial populations. Genes that are known to affect growth were enriched in the sweep regions. Conclusion The turkey genome showed large sweep regions. The relatively high number of sweep regions in commercial turkey populations compared to heritage varieties and the enrichment of genes important for growth in these regions suggest that these sweeps are the result of intense selection in these commercial lines, moving specific haplotypes towards fixation. Electronic supplementary material The online version of this article (doi:10.1186/s12863-014-0117-4) contains supplementary material, which is available to authorized users. Background Characterizing genetic variation in populations of domesticated species and associating these variation patterns with evolution, domestication, and selective breeding is critical for understanding the dynamics of genomic change in these species. Intense selective breeding and population bottlenecks are expected to leave signatures in the genome of domesticated species, such as unusually low nucleotide diversity or the presence of exceptionally extended haplotype homozygosity [1][2][3]. Genome-wide characterization of many different breeds and populations for these selective sweeps, along with functional knowledge of the region, can reveal which genes are linked to traits or diseases with a complex genetic basis [4]. These patterns of variation in selected populations are highly useful not only to understand the consequences of selective breeding and population dynamics, but also to provide insights into biological mechanisms that may affect physiological processes important for bringing about changes in phenotypes of interest [5,6]. The turkey (Meleagris gallopavo) is an important agricultural species that is largely used as a meat-type bird. All domesticated turkeys descend from the wild turkeys indigenous to North and South America.
There are seven subspecies of the wild form [7], distinguished by geographic range and plumage differences. They are: South Mexican (M. g. gallopavo), Rio Grande (M. g. intermedia), Merriam's (M. g. merriami), Gould's (M. g. mexicana), Eastern (M. g. silvestris), Moore's (M. g. onusta) and Florida (M. g. osceola). Three of the seven are believed to have played an important role in domestication. It is generally accepted that domestication of the turkey involved the South Mexican turkey [8]. The earliest signs of turkey domestication date to approximately 2000 years ago at Mayan sites in Southern Mexico such as Cobá [9]. Domestic turkey stocks were established by at least 180 AD within the Tehuacán valley [10]. A separate domestication event likely occurred in what is now the Southwest United States, where the first strong archaeological evidence for domestic stocks dates to a similar time (ca. 200 BC-AD 500), although the wild progenitor has long been debated [11]. The modern domestic turkey has been recognized by the American Standard of Perfection since 1971 [12], and is registered as a single breed with eight varieties defined primarily by plumage color. Out of these eight heritage turkey varieties, five (Bronze, Narragansett, White Holland, Spanish Black and Blue Slate) were registered in 1874 [12], while the remaining three (Beltsville Small White, Bourbon Red, and Royal Palm) were registered in 1951, 1909, and 1971, respectively [12]. These domestic turkeys are presumed to be highly inbred [12], and to have undergone intensive selection for traits of economic importance such as body weight, meat quality and egg production [9,11]. Recent census data show that turkey is the second largest contributor to worldwide poultry meat production [13]. Global production of turkeys has experienced a massive expansion over the past 40 years. In 2008, turkeys represented 6.65% of world poultry meat production [13]. Global turkey stocks nearly tripled from 178 million in 1970 to over 482 million in 2008 [13]. Astonishingly, in those four decades, average meat production per bird doubled from 6.7 to 12.7 kg, showing the result of intensive selection in turkeys. An important genomic indicator of a selective sweep is a local reduction in genetic variation within the selected gene(s) and in nearby single nucleotide polymorphism (SNP) variants [14]. Selection affects all types of genomic variability, including SNPs, microsatellites and several types of structural variations (SVs). The SV category includes large insertions and deletions, inversions, duplications and balanced or unbalanced inter-chromosomal translocations. Next-generation sequencing (NGS) is an efficient approach for large-scale, genome-wide SNP discovery and genotyping of individuals [15,16]. Availability of a high-quality reference genome sequence [17] and resequencing of individuals or groups at appropriate genome coverage are key prerequisites for whole-genome SNP discovery [15,16]. Genomic sequences of individuals are aligned to a reference genome to detect nucleotide variations, i.e., differences in the genotypes of individuals at specific positions in the genome [18,19]. Our search was aimed at finding genomic regions where selection or domestication has changed the frequency of favourable alleles towards fixation. Genomic regions where these changes are observed elucidate the effects of the selective pressure of domestication or breeding applied to the domesticated turkey.
Populations Ten turkey populations, comprising seven commercial lines and three heritage varieties, were used for whole genome sequencing (WGS). The seven commercial lines, L1 through L7, were provided by two breeding companies. Commercial lines were selected for different objectives, including higher adult body weight and rapid growth, except L5, which is a female line selected for medium adult body weight, conformation and egg production. The heritage varieties were Beltsville Small White (BvSW), Royal Palm (RP) and Narragansett (Nset) [20-22]. In total, 29 individuals were selected for WGS, with three individuals per population except for RP, which was represented by two individuals. Genomic DNA Extraction, Library Preparation and Sequencing Genomic DNA was extracted from whole blood with the QIAamp DNA Blood Midi Kit (Qiagen, Valencia, CA); the procedure included a proteinase K digestion followed by column purification. The integrity of the high molecular weight DNA following extraction was confirmed by agarose gel analysis. Genomic DNA was sheared using the Covaris S2 to yield an average fragment size of 450 bp, as determined with the Agilent Bioanalyzer 2100 (Agilent, Santa Clara, CA). Genomic libraries were prepared with the Paired-end Sequencing Sample Preparation Kit (Illumina, San Diego, CA) with 5 μg of genomic DNA according to the manufacturer's instructions. All genomic DNA libraries were validated with the Agilent Bioanalyzer (model 2100). The automated cBot Cluster Generation System (Illumina) was used to generate clusters on the flow cell. Each individual was sequenced (paired-end; read length 120 bp) in a single lane of a flow cell using the Illumina GAIIx. Sequence mapping Sequence reads of each turkey were filtered on base quality; reads were trimmed if three consecutive bases had an average Phred-like quality score of less than 13. Both paired-end sequences of a fragment were required to be at least 40 bp long after trimming to be retained for analyses. Retained reads were aligned against the turkey reference genome (UMD 2.01) using the MOSAIK aligner [23]. Mapping of reads from each individual to the reference genome sequence was performed with a hash size (hs) of 15, maximum hash positions (mhp) of 100, an alignment candidate threshold (act) of 20, and a maximum mismatch percentage (mmp) of 5. A banded Smith-Waterman algorithm (bw = 41) was used to increase the speed of alignments. The algorithm implemented in MOSAIK calculates a mapping quality for each sequence that measures the probability that a sequence belongs to a specific target. The alignments were filtered for ambiguously mapped reads and sorted using MosaikSort. Finally, the file was converted to BAM format [16] using MosaikText. All BAM files have been uploaded to NCBI's Sequence Read Archive (SRA) database under the study accession number "SRP012021" [24]. Heterozygosity Genome-wide nucleotide diversity was assessed for each individual of the different turkey populations. The pileup function of SamTools version 0.1.12a [15] was used to perform SNP genotype calling, after which the nucleotide diversity was estimated across the whole genome for each individual separately. Nucleotide diversity was estimated by calculating the number of heterozygous SNPs as well as the number of homozygous non-reference genotypes within each 300 Kb window.
Windows of 300 Kb were necessary to avoid the large random fluctuations in heterozygosity that were observed in a preliminary analysis with smaller windows. The random fluctuations with smaller windows were due to the low SNP detection rate. For calling SNPs, coverage per base was limited to 5- to 10-fold to avoid analysing repetitive regions of the genome, as the average sequence depth per animal, at bases covered by at least one read, ranged from 2.07 to 6.72 [24]. In addition, genotypes were only called when the genotype quality was at least 20. The observed number of heterozygous SNPs per nucleotide position was then averaged for each population within the window size of 300 Kb. Estimation of threshold values for calling sweeps Turkey chromosomes were divided into bins of 300 Kb, and these bins were used to estimate threshold values to determine significance levels of sweep regions in the genome. Patterns of heterozygosity among these bins were investigated separately for each turkey population. A sweep region was defined when heterozygosity was below the threshold for at least 5 consecutive bins. To obtain the genome-wide significance threshold (P <0.05), heterozygosity values of the bins were randomly permuted across the genome. Subsequently, the threshold that would lead to exactly one significant region of 5 consecutive bins was determined for each of 7000 replicates. The distribution of these 7000 thresholds was used to obtain the 5% genome-wide threshold. With this 5% threshold heterozygosity value, each population had a 5% probability of finding one sweep region by chance. A threshold of five consecutive bins was selected because preliminary results showed large regions of homozygosity in the turkey genome, and also to obtain stable statistics for heterozygosity. Using these threshold values, each turkey population was investigated for regions of low heterozygosity indicative of the presence of a sweep. Subsequently, the turkey populations were compared with each other for overlap in putative sweep regions. Overlapping sweep regions were identified when a sweep was replicated in more than one population. The overlapping sweep regions were defined as the genomic region covered by the sweeps from at least two populations.
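To make the thresholding procedure concrete, the following is a minimal numpy sketch of the windowing, permutation and run-calling logic described above. It is an illustrative reconstruction rather than the authors' code: the function names are invented, per-population averaging across individuals is omitted, and the "exactly one significant region" threshold is implemented as the smallest value at which a first run of five low bins appears (the minimum, over all sliding windows of five bins, of the within-window maximum).

import numpy as np

WINDOW = 300_000   # bin size in bp, as described in the Methods
RUN_LEN = 5        # consecutive low-heterozygosity bins required to call a sweep
N_PERM = 7000      # permutation replicates, as described in the Methods

def window_heterozygosity(het_positions, chrom_length, window=WINDOW):
    # Count heterozygous calls per fixed-size bin; positions are assumed to
    # have already passed the 5- to 10-fold depth and GQ >= 20 filters.
    n_bins = -(-chrom_length // window)  # ceiling division
    counts = np.bincount(np.asarray(het_positions) // window, minlength=n_bins)
    return counts / window  # heterozygous SNPs per bp in each bin

def min_threshold_one_region(het, run_len=RUN_LEN):
    # Smallest threshold at which one run of `run_len` consecutive bins
    # falls at or below it.
    windows = np.lib.stride_tricks.sliding_window_view(het, run_len)
    return windows.max(axis=1).min()

def genome_wide_threshold(het, n_perm=N_PERM, alpha=0.05, seed=1):
    # Permute bin values across the genome; the alpha-quantile of the
    # per-replicate thresholds gives each population a 5% chance of one
    # sweep region arising by chance.
    rng = np.random.default_rng(seed)
    stats = [min_threshold_one_region(rng.permutation(het)) for _ in range(n_perm)]
    return float(np.quantile(stats, alpha))

def call_sweeps(het, threshold, run_len=RUN_LEN):
    # Return (start_bin, end_bin_exclusive) pairs for runs of at least
    # `run_len` consecutive bins with heterozygosity <= threshold.
    low = np.concatenate(([0], (het <= threshold).astype(int), [0]))
    starts = np.where(np.diff(low) == 1)[0]
    ends = np.where(np.diff(low) == -1)[0]
    return [(s, e) for s, e in zip(starts, ends) if e - s >= run_len]

Overlap between populations can then be found by intersecting the resulting (start, end) bin intervals across populations.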
Heat map plots Heat maps for the individual turkey chromosomes and for the whole turkey genome, including all turkey autosomes, were plotted separately to visualize overlapping sweeps in the different turkey populations using the "heatmap.plus" package in R [25]. The color scale was based on the square root of the heterozygosity values, for visualization and distinction of sweep areas in the genomic regions. Functional annotation analysis All genes lying within the overlapping sweep regions of turkey were used for functional annotation analysis. Functional annotation analysis was performed using DAVID (Database for Annotation, Visualization, and Integrated Discovery) with default parameters [26]. DAVID is a web-based bioinformatics application that systematically identifies enriched biology associated with large gene lists derived from high-throughput genomic experiments [26]. Correction for multiple comparisons was done by the Benjamini-Hochberg method [27]. Annotation for turkey and chicken genes is very limited; therefore we used one-to-one orthologs of turkey to human to perform this functional annotation analysis. Ethical approval for the use of animals in this study Although animals were used in this work, no direct experiments were performed on them. Blood sample collection was carried out by highly skilled and experienced personnel from the breeding companies. No approval from the ethics committee was necessary according to local legislation. Results In order to identify candidate selective sweeps, threshold values were estimated for heterozygosity in each of the different turkey populations. These threshold heterozygosity values ranged from 1.0E-5 to 5.1E-5 (Table 1). The highest threshold value was obtained for the L3 commercial line, while the lowest threshold value was obtained for BvSW. A whole-genome view of the sweep regions in the different turkey populations is presented in Figure 1. In total, we observed 54 genomic regions where heterozygosity was significantly reduced (P <0.05). These candidate selective sweeps were found on 14 different chromosomes across turkey populations (Additional file 1). Areas with evidence of candidate selective sweeps varied from 1.5 Mb to 11.1 Mb in length (Additional file 1). Out of these 54 sweep regions, 31 were population-specific (Additional file 1) and observed on 12 different turkey chromosomes, while 23 were overlapping sweep regions that were observed in multiple populations and distributed across 13 different chromosomes (Table 2 & Additional file 1). The majority of the population-specific regions, 26 in total, were observed in the commercial populations (L1-L7), on average nearly 4 per population, whereas the heritage populations (BvSW, Nset and RP) showed 1.6 population-specific sweeps per population. Differences between commercial populations were considerable, with as many as 8 sweep regions observed in population L3 and only one population-specific sweep region observed in population L6. Five population-specific sweep regions were observed in heritage varieties, with 1 (RP) or 2 (BvSW and Nset) sweeps per population. Out of the 23 sweep regions that showed overlap in multiple populations, one was observed only in the heritage varieties (Nset and RP) while 13 were observed only in the commercial lines (Table 2). Commercial line L1 had the largest sweep region, 11.1 Mb (Additional file 1), as well as the highest number (10) of overlapping sweep regions. The lowest number (3) of overlapping sweep regions was observed in the heritage variety Nset (Table 2). Differences were also observed along the turkey genome regarding the presence of sweeps on different chromosomes. Of the 54 observed sweep regions, chromosome 2 showed the highest number of significant regions, 8 in total, while chromosome 14 showed the lowest number, 2 in total. Chromosomes 5, 7, 9 and 14 had five candidate selective sweep regions that showed an overlap in at least 4 different turkey populations (Table 2; Figure 1). Chromosome 5 had two overlapping sweep regions that were each shared by at least five populations, and one of these two regions was present in commercial lines only (Table 2). Chromosome 9 also had a sweep region that was shared by five populations (Table 2 and Figure 1). Overlapping sweep regions covered 5,452 genes, 34.7% of the total number of genes that were identified in the turkey genome sequence [17]. The BioMart website version 0.7 (http://www.biomart.org) was used to identify human orthologs for turkey genes. Of these turkey genes, 3,858 were one-to-one orthologs with human genes and 3,832 turkey genes had a corresponding HUGO Gene Nomenclature Committee (HGNC) symbol in the human genebuild (GRC37.p7).
Finally, 3,718 of these genes with an HGNC symbol had annotation information available in DAVID and were used in the functional annotation analysis. Functional annotation analyses resulted in 514 gene ontology (GO) terms with an Expression Analysis Systematic Explorer (EASE) P-value [28] of less than 0.1 (Additional file 2), which is a rather liberal threshold because it does not correct for multiple testing. The EASE P-value is a modified Fisher exact P-value. GO terms that passed the significance threshold of 0.05 after Benjamini-Hochberg (B-H) correction [27] are shown in Table 3. The most enriched term (B-H corrected P <0.0005) was embryonic morphogenesis, while the other terms in Table 3 are related to nucleic acid binding. The nominally significant GO terms (P <0.10, Additional file 2) included a few more terms related to morphogenesis or growth, but these were not significant after B-H correction.
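For readers unfamiliar with the multiple-testing step, the Benjamini-Hochberg procedure applied above can be sketched in a few lines of Python. This is a generic illustration of the step-up procedure; the actual correction in this study was performed within DAVID, and the p-values below are invented for the example.

import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    # Step-up procedure: reject the k smallest p-values, where k is the
    # largest index with p_(k) <= (k/m) * alpha; returns a boolean mask
    # of rejected hypotheses at false discovery rate `alpha`.
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

# Illustrative EASE-style p-values for a handful of GO terms.
pvals = [0.0004, 0.003, 0.004, 0.02, 0.06, 0.09]
print(benjamini_hochberg(pvals, alpha=0.05))  # first four terms survive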
Discussion We aimed at finding genomic regions with reduced heterozygosity, resulting either from strong selection in favor of specific alleles or from other causes such as genetic drift. For the discovery of these regions in the different turkey populations (commercial lines and heritage varieties), we used a modified whole-genome heterozygosity distribution approach [2]. In a particular population, the occurrence of heterozygosity values equal to or less than the threshold value (Table 1) within at least 5 consecutive bins (each of 300 Kb) indicates a significant reduction in heterozygosity in that region. Use of a large window size might have limited our ability to detect smaller significant sweep regions. This large window size was chosen due to the detection of a large number of consecutive sweep deserts in our preliminary analyses, which might be due to species-specific low heterozygosity and/or overall low sequence depth [24]. In general, heterozygosity in turkey is low, with an estimated average heterozygosity of 1.07 SNPs per Kb [24], much lower than the observed heterozygosity in chicken, with 4.28 and 2.24 SNPs per Kb reported in two different studies [2,29]. We estimated threshold values separately for each turkey population. The threshold values (Table 1) can also be regarded as a measure of the level of genetic diversity in a particular population. In our study, we found the highest threshold value for commercial population L3, which is concordant with the highest observed genetic diversity and the highest number of SNPs discovered in this population in our previous study [24]. Similarly, the lowest threshold value was observed for BvSW, also concordant with the previously observed lowest genetic diversity and the lowest number of SNPs discovered in this population [24]. In our study, 48 significant regions (population-specific and overlapping) were observed in the commercial populations, while only 6 significant regions (population-specific and overlapping) were observed in the heritage populations (Additional file 1 & Table 2). The small number of individuals (2-3) used per population could not reveal the complete variation of a particular population, but each of these individuals still belonged to a specific population; therefore, population-specific terminology was used for the group of individuals belonging to the same population. The high number of candidate selective sweeps in commercial lines can be explained as a result of the high selection intensities applied to these populations [30]. A lower number of sweep regions in heritage varieties may be due to a number of reasons, such as the admixture of populations, a relatively high effective population size in heritage varieties, or relatively less intensive and less specific directional selection applied to these varieties in comparison to commercial turkeys. Specific information about population admixture or effective population size of heritage varieties is limited, but these varieties were likely pure lines given the anecdotal information from the turkey breeders. In our previous study, among the heritage varieties, Nset showed the highest heterozygosity, followed by RP and BvSW, respectively [24]. A consistent pattern was observed here, with a lower number of sweep regions and a higher threshold heterozygosity value for Nset compared to BvSW and RP. These differences in the number of sweeps and threshold heterozygosity values for the different populations may also be an indication of differences in the level of admixture or effective population size. The heritage variety BvSW showed the lowest threshold heterozygosity value and also the highest number of sweeps of all heritage varieties, which is consistent with the severe bottleneck that this population went through in 2000 (Alexandra Scupham, personal communication). Similarly, the Nset population showed the highest threshold heterozygosity value of the heritage varieties and the lowest number of sweeps of all domesticated turkey populations, which could reflect a higher level of admixture or a comparatively larger effective population size for this population. However, no historical information is available to support this. Sweep regions of variable but large size (1.5-11.1 Mb) were observed. Reduction in genetic diversity/heterozygosity at different locations in the genome can persist for a long time and indicate selection across a long genomic region [31]. The size of a sweep region may vary with the history of domestication, the type of population (inbred or outbred), the intensity of selection within a particular population, and population dynamics such as bottlenecks and drift. SNP analyses of domestic dogs and cats show large stretches of alternating heterozygous and homozygous regions as a consequence of domestication and breed development [32,33]. In most outbred species, a selected region would display local SNP homozygosity, compared to abundant polymorphism elsewhere in the genome [34]. An uneven distribution of homozygous regions can be expected across the genome due to selection pressure through natural or artificial means [1-3,35]. Chromosomes 5, 7, 9 and 14 are highly distinct, with overlapping regions in at least four different turkey populations (Table 2). This suggests that genomic regions on these chromosomes contain gene(s) which affect traits that are important for turkey production. Turkey populations that showed overlap in sweeps on these chromosomes may have been selected for specific objectives that all populations had in common or, alternatively, may have been developed from parents that were already homozygous for these sweep regions. Two candidate selective sweep regions discovered on chromosome 5 and chromosome 22 show overlapping stretches only in commercial populations (Additional file 1). These regions may contain genes involved in commercially important traits. The regions, however, are too large to identify the individual genes that may have been selected.
We could not use the museum samples (South Mexican turkeys) that were included in our previous study [24] in the current analysis due to their very low available sequence depth. The average sequence depth at bases covered by at least one read in the museum samples ranged from 1.38 to 1.81 [24], which is less than half the depth (5- to 10-fold) that was used as the criterion for calling SNPs in all individuals of the current study. However, even though coverage was low, in our previous study [24] we identified genomic regions on four chromosomes with increased homozygosity of non-reference alleles in the museum samples. The domesticated populations were found to be fixed for the reference alleles at those same loci [24]. These genomic regions with high non-reference allele homozygosity were aligned with the candidate selective sweep regions of the current study to find any overlap. Apart from the region on chromosome 3, the regions on chromosomes 9, 14 and 22 showed overlap with the detected sweep regions (Additional file 1) of the current study. These sweep regions on chromosomes 9, 14 and 22 show overlap in 5, 4 and 3 populations, respectively. This concordance of results supports our hypothesis that these candidate sweep regions are likely the result of selection in commercial populations. Chromosome studies have revealed that the karyotype is more conserved among avian species than in other taxa, such as mammals, with most avian species showing a diploid chromosome number between 76 and 80 (http://www.genomesize.com). This suggests that chromosomal evolution or large-scale rearrangements affecting chromosome number occur at a low rate in birds, and as a result many chromosomes have remained more or less intact during avian evolution [36]. Comparative cytogenetic and linkage maps between turkey and chicken showed conserved synteny and close ancestral relationships [37,38] that support the hypothetical ancestral Galliform karyotype [39]. The strong structural and functional conservation between the turkey and chicken genomes [40,41], as well as the similarities in breeding objectives, suggest that overlap in selective sweep regions between the two species could be expected. To test whether selective sweep regions are conserved between chicken and turkey, the orthology to chicken for all significant overlapping sweep regions of turkey was determined. These genomic regions were then examined for the presence of sweeps, based on two different studies in the chicken [2,42]. These selective sweep studies reported about 400 sweep regions [2,42], which corresponds to about 0.38 sweeps per Mb of the chicken genome. Thirteen of the 23 overlapping candidate selective sweep regions identified in turkey also harbored a selective sweep reported in chicken. Rubin et al. [2] reported 40 highly significant chicken sweep regions with very low Z-transformed heterozygosity (ZHp < -6). Two of these highly significant chicken sweeps mapped within the syntenic regions of turkey sweeps on chromosomes 7 and 11 (Additional file 1). Overall, the concordance of chicken sweep regions with turkey sweep regions was low. Approximately 0.32 chicken sweeps were observed per Mb within the total overlapping sweep length of turkey. This result shows no enrichment of chicken sweeps within the overlapping sweep regions of turkey.
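This comparison of per-Mb rates can be made explicit with a simple Poisson model. The sketch below is illustrative only: the genome-wide rate (0.38 sweeps per Mb) and the count of 13 chicken sweeps come from the text, while the total overlap length in Mb is a placeholder assumption, and the Poisson test itself is our own choice of illustration rather than a method used in the study.

from scipy.stats import poisson

def sweep_enrichment(n_observed, region_mb, genome_rate_per_mb):
    # Expected count under a uniform genome-wide rate, and the Poisson
    # probability of seeing at least n_observed sweeps by chance.
    expected = genome_rate_per_mb * region_mb
    p_enriched = poisson.sf(n_observed - 1, expected)
    return expected, p_enriched

overlap_mb = 40.0  # hypothetical total length of the overlapping sweep regions
expected, p = sweep_enrichment(13, overlap_mb, 0.38)
print(f"expected {expected:.1f} sweeps, P(>=13 by chance) = {p:.2f}")

With these numbers the observed count falls below expectation, consistent with the lack of enrichment noted above.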
Selective sweep regions are expected to have been involved in producing phenotypic variation for the traits of interest, and intensive selection drives these regions towards fixation. To investigate the variation explained by these regions, we looked for available turkey quantitative trait loci (QTL) information within these regions [41]. We did not find overlap between the QTL regions from our previous study [41] and the candidate sweep regions in the current analyses. This discordance could be explained if QTL regions were still too variable to be identified in a search for selective sweeps. Due to the limited availability of information on turkey QTLs and the structural and functional conservation between the turkey and chicken genomes [24,38,40], the overlapping regions of candidate selective sweeps (Table 2) of turkey were aligned with the chicken genome sequence (WASHUC2) to determine their positions in the chicken genome (Additional file 3). The orthologous chicken regions were subsequently examined for the presence of reported chicken QTL for growth [43]. Many QTL were found to overlap with these genomic regions (Additional file 3). The frequency of chicken growth QTL for which the confidence interval overlapped with the turkey sweep regions was found to be 11.33 growth QTL per Mb of sweep region. This high frequency of chicken growth QTL overlapping with the turkey candidate selective sweep regions was, however, a result of the high number of growth QTL discovered in chicken. The sweep regions did not show an enrichment of chicken QTL compared to other parts of the genome. Production censuses of turkeys from the last four decades show that turkeys have doubled in size. We had therefore expected to see a sweep in the region of the somatomedin insulin-like growth factor 1 (IGF-1), which is well known to play an important role in muscle growth and development in various domesticated species [44-46]. However, we did not find a candidate sweep near the IGF-1 region on turkey chromosome 1 (56348061 bp-56402610 bp). This observation suggests that sequence variation at the IGF-1 locus itself is not involved in regulating IGF-1 levels in turkeys. Previously, two QTL were detected for IGF-1 level in blood plasma in chicken on chromosomes 1 and 2 [47,48]. These two chicken QTL regions are both syntenic with overlapping turkey candidate sweep regions on chromosomes 1 and 6, respectively (Additional file 3), showing that some genes present within the candidate sweep regions appear to affect the level of IGF-1 hormone in blood, which has been shown to regulate growth, reproduction, energy balance, cell proliferation and cell death [49]. Given the large increase in production per bird from 6.7 to 12.7 kg over a 40-year period [13], intensive selection for growth must have taken place in turkeys. Likely candidate genes, such as IGF2, Pit1, AFABP, PRKAG3 and GDF8, that have previously been reported to affect growth were not present within the candidate sweep regions. Gene ontology (GO) enrichment analysis was therefore performed to see if the complete set of genes within the candidate sweep regions is enriched for association with growth. We performed gene functional annotation analysis using DAVID. Gene-based enrichment analysis showed some enrichment of genes for the regulation of development and morphogenesis within the candidate sweep regions (Additional file 2). We found a significantly (Benjamini P <0.05) enriched GO term, embryonic morphogenesis (Table 3), and other suggestive terms (0.05 < P < 0.1), including embryonic organ morphogenesis, body development and maintenance of growth (Additional file 2).
This shows that the observed candidate selective sweep regions of turkey are enriched for genes that are important for aspects of growth and development.
Rab-3 and unc-18 Interactions in Alcohol Sensitivity Are Distinct from Synaptic Transmission The molecular mechanisms underlying sensitivity to alcohol are incompletely understood. Recent research has highlighted the involvement of two presynaptic proteins, Munc18 and Rab3. We have previously characterised biochemically a number of specific Munc18 point mutations, including an E466K mutation that augments a direct Rab3 interaction. Here the phenotypes of this and other Munc18 mutations were assessed in alcohol sensitivity and exocytosis using Caenorhabditis elegans. We found that expressing the orthologous E466K mutation (unc-18 E465K) enhanced alcohol sensitivity. This enhancement in sensitivity was surprisingly independent of rab-3. In contrast, unc-18 R39C, which decreases syntaxin binding, enhanced sensitivity to alcohol in a manner requiring rab-3. Finally, overexpression of R39C could partially suppress the reduction in neurotransmitter release in rab-3 mutant worms, whereas wild-type or E465K mutants showed no rescue. These data indicate that the epistatic interactions between unc-18 and rab-3 in modulating sensitivity to alcohol are distinct from interactions affecting neurotransmitter release. Introduction Drug addiction is one of the leading causes of preventable death, generating a considerable financial burden to society. Indeed, alcohol use and abuse can lead to an increased incidence of liver disease, cardiovascular disease, cancer and other debilitating illnesses [1]. Although the environment can influence addiction, current estimates of genetic heritability range between 40% and 80% [2]. One significant contributing component to the genetic determination of addiction is the individual's initial level of response, as highlighted by a consistent association of alcohol addiction with polymorphisms in genes involved in alcohol metabolism [3,4]. Despite a ubiquitous prevalence in modern society, the precise physiological mechanisms of intoxication and addiction remain poorly understood. A complete understanding of the contributing factors that underlie alcohol sensitivity is therefore of potential therapeutic importance. Current models of alcohol action within the nervous system predict low-affinity interactions of alcohol with specific target proteins or protein complexes [5]. Genetic studies of alcohol sensitivity have implicated many potential targets, both pre- and post-synaptic in origin [6,7]. The model organism Caenorhabditis elegans is an excellent platform for the genetic dissection of alcohol sensitivity, as it has a dose-dependent response to exogenous alcohol similar to that of mammals [8]. Recent research in C. elegans has determined a role in alcohol sensitivity for proteins central to the exocytotic machinery, yet distinct from synaptic transmission efficacy. Loss-of-function (lof) mutations in the GTPase rab-3 reduce sensitivity to alcohol in C. elegans [9]. Similarly, a single point mutation in the protein Munc18 that specifically inhibits SNARE complex binding also reduces sensitivity to alcohol in C. elegans [10]. Both mutants also affect voluntary alcohol consumption in mice [9,11], emphasising the conservation of the genetic determination of alcohol sensitivity from nematodes to mammals. Munc18 is an essential protein in presynaptic vesicle exocytosis whose precise function remains somewhat enigmatic [12,13].
Biochemically, Munc18 binds the t-SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) syntaxin in two different modes of interaction, as well as the assembled SNARE complex [14-16]. In worms, null unc-18 (e81) alleles display strong behavioural phenotypes, including paralysed locomotion and resistance to inhibitors of acetylcholinesterase [17]. Rab3 is a GTPase that also functions in exocytosis by recruiting and tethering synaptic vesicles to the plasma membrane [18], although roles for Rab3 in late stages of docking [19] and vesicle fusion [20] have also been demonstrated. In worms, lof rab-3 mutants exhibit loopy, mildly slower locomotion and are also resistant to inhibitors of acetylcholinesterase [19]. We have previously investigated a number of point mutations of mammalian Munc18 that alter protein interactions [21], including an E466K gain-of-function (gof) mutation affecting direct binding to Rab3 [22]. In this study we have investigated the functional effects of some of these point mutations in unc-18, the nematode orthologue of Munc18-1, in both a wild-type and a lof rab-3 genetic background. A mutation that interferes with closed-conformation syntaxin binding (unc-18 R39C) conferred hypersensitivity to alcohol, as did the orthologous mutation that enhances the Munc18-Rab3 interaction (unc-18 E465K). In addition, overexpression of the R39C mutation partially compensated for lof rab-3 in neurotransmitter release, yet was recessive to lof rab-3 in alcohol sensitivity. Conversely, the E465K mutation was dominant to lof rab-3 in alcohol sensitivity, but recessive in neurotransmitter release. We conclude that the specific interactions between unc-18 and rab-3 that govern exocytosis are functionally distinct from those governing sensitivity to alcohol. Alcohol sensitivity phenotypes of single point mutations in unc-18 We recently demonstrated that a single point mutation (D216N) in Munc18 acts biochemically by reducing binding to the assembled SNARE complex and that the orthologous mutation in C. elegans unc-18 (D214N) reduces sensitivity to both low and high concentrations of exogenous ethanol [10]. Previously, we have biochemically characterised other point mutations in Munc18 that affect binding to other proteins, including R39C (inhibits binding to closed-conformation syntaxin) [23,24], P242S (inhibits binding to Mint proteins) [21] and E466K (enhances binding to Rab3) [22]. To assess whether these other Munc18 interactions could also affect alcohol sensitivity, we generated transgenic worms expressing the orthologous mutations of unc-18 in a null (unc-18 e81) background (Figure 1A) and assessed their sensitivity to alcohol in comparison with transgenic worms expressing wild-type unc-18. Despite a strong reduction in alcohol sensitivity, worms that express the unc-18 D214N mutation have relatively normal, but statistically elevated, locomotion rates [10]. Similarly, the unc-18 R39C-, P240S- and E465K-expressing mutants exhibited qualitatively normal locomotion in comparison to unc-18 wild-type (Table 1), although the R39C mutants had a significant reduction in thrashing of 23% in comparison to wild-type (Kruskal-Wallis one-way analysis of variance on ranks with post-hoc comparison; P<0.05; N = 77 (Wt), 55 (R39C), 48 (P240S) and 55 (E465K)). Exposing worms to high external ethanol concentrations (400 mM) causes a depression in locomotion [8,25]. In addition, exposure of worms to low external concentrations (21 mM) stimulates locomotion [10].
Due to the low permeability of chemicals across the C. elegans cuticle, internal ethanol concentrations are estimated to be substantially lower and to approximate those seen in intoxicated humans [8], although this interpretation is not universally shared [25]. We screened whether any of the unc-18 point mutations had effects on sensitivity to exogenous ethanol at either the stimulatory or the depressive concentration. In contrast to the previously characterised D214N mutation, the R39C and E465K mutations enhanced sensitivity to alcohol at both the stimulatory and the depressive concentrations (Figure 1B, C). There were no effects of the P240S mutation at either concentration of ethanol. This lack of effect was perhaps unsurprising, as the P240S mutation reduces binding to the Mint proteins [21] and the C. elegans orthologue of Mint, lin-10, lacks the Munc18 binding domain. Therefore, both the R39C and E465K mutations of unc-18 increased sensitivity to alcohol. Alcohol sensitivity phenotype of a double mutation in unc-18 Munc18 functions at the synapse at multiple steps in the exocytotic pathway through interactions with many proteins [12,13]. We were interested to determine whether the enhanced sensitivity to alcohol of the R39C or E465K mutations had additive phenotypic effects when combined. To assess this question, we generated transgenic worms expressing the double mutation (unc-18 R39C/E465K). Although the single mutants each had small inhibitory effects on basal thrashing rate, locomotion of the double mutant was in fact enhanced to a greater level than wild-type (Table 1; Kruskal-Wallis one-way analysis of variance on ranks with post-hoc comparisons; P<0.05; N = 55 (R39C), 55 (E465K) and 15 (R39C/E465K)). In comparison to worms expressing wild-type unc-18 or either single mutant, however, the double mutation (R39C/E465K) produced no additive effect on alcohol sensitivity (Figure 2A, B). At either low or high external ethanol, the sensitivity of the double mutant was not significantly greater than that of the single mutants. Therefore, the effects of either point mutation were not additive with respect to alcohol sensitivity. Exocytotic phenotypes of point mutations in unc-18 Movement of nematodes is determined by defined neural circuits, integrating sensory information to generate locomotion, as well as by the strength of neuromuscular transmission. Munc18 has primarily been described as a protein essential for exocytosis [12,13]. Mice null for Munc18 have defects in both vesicle docking [26] and secretion [27]. In addition, specific mutations of Munc18 can affect the kinetics of membrane fusion [15,28,29]. In C. elegans, unc-18 null mutants are paralysed and have defects in docking and neuromuscular transmission [17,30]. With respect to exocytosis, the R39C mutation causes an increase in EJP amplitude in Drosophila [31] and alters the kinetics of vesicle fusion in chromaffin cells [24], whereas it appears to have very little effect when expressed in C. elegans [23,32]. The E466K mutation enhances dense-core granule recruitment in chromaffin cells [22] and confers a very mild hypersensitivity to aldicarb in C. elegans [33]. We next determined whether any of the mutations in unc-18 that affected alcohol sensitivity also affected the strength of synaptic transmission, using the well-established aldicarb sensitivity assay [34]. In this assay, quantitative changes in the rate at which a population of worms becomes paralysed provide an indirect measurement of changes in synaptic strength.
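Because the aldicarb assay is reported here only as relative sensitivity, a brief sketch may help: populations of worms are scored for paralysis at fixed intervals on aldicarb plates, and genotypes are compared by their paralysis time courses. The Python sketch below is illustrative only; the worm counts, scoring interval, aldicarb concentration and the time-to-50%-paralysis summary are assumptions, not the authors' protocol.

import numpy as np

def fraction_paralysed(paralysed_counts, n_worms):
    # Convert cumulative paralysed counts at each scoring time into
    # fractions of the assayed population.
    return np.asarray(paralysed_counts, dtype=float) / n_worms

# Hypothetical scoring of 25 worms per strain on (assumed) 1 mM aldicarb
# plates every 15 minutes; resistant strains paralyse more slowly.
times_min = np.arange(0, 121, 15)
wild_type = fraction_paralysed([0, 1, 4, 9, 15, 20, 23, 24, 25], 25)
mutant    = fraction_paralysed([0, 1, 2, 3, 6, 10, 14, 18, 21], 25)

def t50(times, frac):
    # Time to 50% paralysis by linear interpolation of the time course.
    return float(np.interp(0.5, frac, times))

print(f"WT t50 = {t50(times_min, wild_type):.0f} min; "
      f"mutant t50 = {t50(times_min, mutant):.0f} min (resistant)")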
In comparison to worms expressing wild-type unc-18, the E465K mutants were mildly, but not significantly, hypersensitive to aldicarb (Figure 3). In contrast, R39C worms had a small but consistent resistance to aldicarb, indicative of a reduction in signalling strength at the neuromuscular junction. We also tested the R39C/E465K double mutant in the aldicarb assay and found that the R39C mutation was dominant over E465K for the aldicarb sensitivity phenotype (Figure 3). As the two unc-18 mutations produced equivalent effects in ethanol but contrasting effects in aldicarb, we conclude that the effects of the individual mutations on sensitivity to alcohol are uncorrelated with their effects on synaptic transmission strength. Alcohol sensitivity phenotypes of point mutations in lof rab-3 Rab3 is a GTPase involved in the trafficking of synaptic vesicles and various aspects of exocytosis [18]. Lof rab-3 worms are resistant to the effects of depressive concentrations of exogenous alcohol [9]. The E466K mutation of Munc18 increases the interaction between Munc18 and Rab3 [22] without affecting binding to syntaxin or Mint proteins [21]. We therefore investigated whether the effects of any of our unc-18 mutations were epistatic to rab-3 by expressing them in a lof rab-3 genetic background and assaying for alcohol sensitivity. We have previously investigated the effects of specific unc-18 point mutations in both wild-type (N2) and null (unc-18) genetic backgrounds and found similar phenotypic effects either in the presence or absence of endogenous unc-18 [35]. Similar to that seen in the null unc-18 (e81) allele, expression of R39C in lof rab-3 (y250) caused a significant decrease in basal locomotor rate in comparison to expression of wild-type unc-18 (Table 1). In response to low levels of alcohol, lof rab-3 worms exhibited a normal stimulation of locomotion (Figure 4A). The enhancement in alcohol-dependent stimulation by either single (R39C or E465K) or double (R39C/E465K) mutations of unc-18, however, was negated when these mutations were expressed in the lof rab-3 mutant background. As previously described [9], at depressive concentrations of ethanol lof rab-3 (y250) worms were less sensitive than Bristol N2 wild-types (Figure 4B). Expressing either wild-type (Wt) or R39C unc-18 in lof rab-3 had no effect on this rab-3 phenotype. Surprisingly, expression of unc-18 E465K was dominant to the effects of lof rab-3. Expression of the double mutant showed that the addition of the R39C mutation did not alter the dominant effect of E465K (Figure 4B). These experiments demonstrate that at low concentrations of ethanol, the lof rab-3 phenotype is dominant to both the R39C and E465K unc-18 mutations, whereas at high concentrations E465K is dominant to rab-3 while rab-3 is dominant to R39C. Exocytotic phenotypes of point mutations in lof rab-3 The E466K mutation enhances the interaction between Munc18 and Rab3 [22], without affecting syntaxin binding [21]. Despite this biochemical characterisation, the effect of the mutation on sensitivity to high concentrations of alcohol was surprisingly independent of functional rab-3. We tested whether any of the unc-18 mutations required rab-3 to affect exocytosis. We verified that lof rab-3 (y250) worms were resistant to aldicarb in comparison to Bristol N2 wild-types (Figure 5), as has been previously reported [19]. Expression of wild-type unc-18 in the lof rab-3 background had no effect on the rab-3-dependent resistance to aldicarb.
Despite its dominant effects to lof rab-3 in alcohol sensitivity, the E465K mutation had no effect on the aldicarb phenotype. The unc-18 R39C mutation, which on its own caused a mild resistance to aldicarb, was able to partially block the effects of lof rab-3 (Figure 5). Thus, despite lof rab-3 being dominant to R39C in sensitivity to alcohol, the reverse was true for sensitivity to aldicarb. The R39C/E465K double mutant was not different from lof rab-3, indicating that the effects of R39C alone were suppressed by the additional E465K mutation. Discussion This paper demonstrates that the genetic interactions between two exocytotic proteins, unc-18 and rab-3, are different depending on the phenotypic context. For the alcohol phenotype, either the R39C or the E465K unc-18 mutation increased sensitivity. The R39C mutation is characterised to decrease binding to closed-conformation syntaxin for mammalian Munc18 in vitro [24] and in vivo [36], as well as for C. elegans UNC-18 in vitro [23]. This potentially implicates this interaction with syntaxin as an important regulator of alcohol sensitivity. Although this hypothesis has not been directly tested for ethanol specifically, syntaxin hypomorphs in C. elegans do have reduced sensitivity to volatile anaesthetics [37], emphasizing a potential convergence of cellular effectors of various anaesthetics at the presynaptic terminal. On the other hand, the E465K mutation acts to increase Rab3 binding, at least for Munc18 [22]. Applying the same logic as for R39C and syntaxin, this would imply that the increased ethanol sensitivity of the E465K mutation is a consequence of increased Rab3 binding. Rab3 itself does not associate with Munc18 when Munc18 is syntaxin bound [22]. Consequently, the ethanol phenotype of E465K could alternatively be a secondary consequence of the reduction in syntaxin binding in favour of Rab3. This interpretation could also explain the lack of additivity of the double mutant. The results of these mutations in the lof rab-3 genetic background, however, argue against the simple interpretation that the effects are solely the result of the same syntaxin interaction. For the stimulatory ethanol phenotype, the effects of the R39C or E465K mutations were blocked. For the depressive ethanol sensitivity phenotype, the E465K mutation is dominant to lof rab-3 whereas R39C is not. This indicates that whatever the E465K mutation is doing at high ethanol concentrations, it acts both downstream and independently of functional rab-3, which itself is downstream of R39C. Interestingly, the E465K mutation is modelled on a Sly1p mutation (Sly1p being a yeast Sec1/Munc18 protein) that bypasses the requirement for a functional Rab protein during ER to Golgi vesicle trafficking [38]. This mutation then also bypasses the requirement for a functional Rab protein in alcohol sensitivity, as expression of E465K in the lof rab-3 genetic background eliminates the rab-3 phenotype. What then are these unc-18 mutations or lof rab-3 doing to alter ethanol sensitivity? Previous work has excluded the interpretation that ethanol sensitivity is a simple reflection of alterations in signalling strength [8-10]; yet both unc-18 and rab-3 are characterised primarily as exocytotic proteins involved potentially in docking, priming and fusion itself [13,18]. It remains possible that the presynaptic action of ethanol is at a level of synaptic vesicle trafficking or exocytosis that is separate from signalling strength per se.
Alternatively, the action of ethanol could be postsynaptic, with lof rab-3 or the unc-18 mutations altering the trafficking of postsynaptic receptors whose function is modulated by ethanol. Indeed, ethanol can affect many neurotransmitter receptors, including those for GABA (γ-aminobutyric acid), glutamate and serotonin [7]. The precise synaptic location of the action of ethanol and the roles of exocytotic proteins therefore remain to be determined in greater detail. Despite this, it is clear that the unc-18 E465K mutation acts independently and can circumvent the requirement for functional rab-3 in ethanol sensitivity. The epistatic interactions between unc-18 and rab-3 that determine ethanol sensitivity stand in direct contrast to those for signalling strength. At the worm neuromuscular junction, the R39C mutation induced resistance to aldicarb, implying a reduction in signalling strength. The R39C mutation has been previously shown to increase evoked postsynaptic currents in Drosophila [31], which may be a result of an increase in initial fusion rate [28]. The total amount of neurotransmitter released per exocytotic event, however, is concurrently decreased by the R39C mutation in bovine adrenal chromaffin cells [24], which would explain the observed reduction in signalling strength as assayed by aldicarb sensitivity in C. elegans. Contrary to ethanol sensitivity, R39C unc-18 is partially dominant to lof rab-3. Indeed, as the R39C mutation is itself resistant to the effects of aldicarb in comparison to wild-type unc-18, it is possible that R39C is completely dominant to lof rab-3 for aldicarb sensitivity. It is most likely that this mutation overcomes the loss of functional rab-3 in exocytosis via changes to vesicle recruitment. Null unc-18 worms have a reduction in docked vesicles [30] that is dependent on syntaxin binding [39], and lof rab-3 alleles also reduce both the total number of synaptic vesicles and their trafficking [19]. Indeed, the role of Munc18 in docking is downstream of Rab3 in adrenal chromaffin cells [40]. The data here support the notion that inhibiting the closed-conformation syntaxin interaction, and hence supporting binding of Munc18/UNC-18 to open syntaxin, can compensate for the loss of functional rab-3 in vesicle recruitment. The Munc18 E466K mutation acts to increase Rab3 binding and the number of fusion events from bovine adrenal chromaffin cells [21]. Therefore, the lack of effect of the orthologous mutation (unc-18 E465K) in the lof rab-3 genetic background could be relatively easy to rationalise. Indeed, the rab-3 (y250) allele produces no detectable RAB-3 protein [19]. The phenotypic effect of the R39C mutation, however, is blocked in the R39C/E465K double mutant expressed in the lof rab-3 genetic background, suggestive of an additional functional role of the E465K mutation. At present, no other biochemical effects of the E465K mutation are known [21,22]. Nonetheless, in contrast to ethanol sensitivity, the aldicarb data indicate that the R39C mutation acts downstream and independently of rab-3, which is itself potentially downstream of E465K. The phenotypic effects presented here are likely to be consistent with phenotypic effects in mammals. Indeed, the pleiotropic action of alcohol in mammals is conserved for many phenotypes in nematodes [41]. Mutations that affect ethanol sensitivity in nematodes have been consistently demonstrated to alter more complex alcohol phenotypes in mice, including those in Munc18 and Rab3 [8-11,42,43]. In fact, various GTPases have been linked with addiction in general [44][45][46][47].
Whether Munc18/UNC-18 itself is acting as an effector of Rab3 is a potential hypothesis requiring further investigation. For the exocytosis phenotypes, key insights have been derived from C. elegans, as the vast majority of exocytotic proteins have orthologues in nematodes [48]. The interactions between Munc18/UNC-18 and Rab3 have thus far been investigated only with respect to exocytosis [22,40,49], and this study furthers this knowledge by showing that the unc-18 R39C mutation can overcome the secretory defects associated with lof rab-3. In addition, analysis of the genetic interactions between unc-18 and rab-3 in alcohol sensitivity determined that, for this phenotype, the unc-18 E465K mutation eliminated the requirement for rab-3. Most surprisingly, we demonstrate that the epistatic interactions between mutants of unc-18 and rab-3 are distinct depending on the phenotypic context, such that the R39C mutation acts downstream of Rab3 in exocytosis whereas it acts upstream of Rab3 in ethanol sensitivity. Finally, our data emphasise that simple modulation of synaptic strength is unrelated to sensitivity to ethanol and that the functional actions of alcohol involve a complex cellular mechanism engaging a large spectrum of neuronal proteins. Molecular biology All point mutations of the unc-18 rescuing construct were introduced by site-directed mutagenesis using either the GeneTailor (Invitrogen) or QuikChange (Stratagene) methods as described previously [10,23]. Nematode culture, strains and microinjection C. elegans strains were grown and maintained on nematode growth medium (NGM) plates at 20°C with Escherichia coli OP50 as a food source, as previously described [10,23]. Strains used in this study were: Bristol N2 (wild-type reference), unc-18 (e81) and rab-3 (y250). Transgenic worms were generated by germline injection as previously described [10,23]. Transgenic expression constructs carried unc-18 cDNA, either wild-type or with the indicated point mutations, under the control of its own genomic flanking regions. Successful transgenic expression was verified by co-injection with a sur-5::GFP marker (pTG96) (kind gift of Prof. A. Fire, Stanford, CA). The concentration of injected DNA was made up to 100 ng/µl with empty pBlueScript SK+ vector for all injections. For each transgenic construct, 3-5 individual, independently derived lines were generated and analysed. Results presented here were consistent for all generated lines. Behavioural assays and analysis All behavioural assays were performed in a temperature-controlled room at 20°C using young adult hermaphrodite animals from sparsely populated plates. Locomotion rate was quantified by measuring thrashing in 200 µl of Dent's solution (140 mM NaCl, 6 mM KCl, 1 mM CaCl2, 1 mM MgCl2 and 5 mM HEPES; pH 7.4; with bovine serum albumin at 0.1 mg/ml) over a 1-minute period as described previously [10,23]. A thrash was defined as one complete movement from maximum to minimum amplitude and back again. For ethanol experiments, measurements of locomotion were made after 10 minutes of exposure and are expressed as a percentage of the mean locomotion rate in 0 mM ethanol measured each day (at least 10 control animals per transgenic line). Animals were assessed at both a low ethanol concentration that stimulates locomotion (21 mM) and a high ethanol concentration that depresses locomotion (400 mM) [8,10,25]. All data are expressed as mean ± S.E.
Significance was tested by one-way analysis of variance (ANOVA) and post-hoc comparison of means using either the Student-Newman-Keuls test or Dunn's test (where sample sizes were unequal). Aldicarb sensitivity was determined by measuring time to paralysis following acute exposure. For each experiment, 20-25 worms were moved to NGM plates containing aldicarb (1 mM; Sigma Chemical) and assessed for paralysis every 10 or 30 minutes after drug exposure by mechanical stimulation of the worms with a thin tungsten wire. Significance was tested by two-way ANOVA and post-hoc comparison of means using the Student-Newman-Keuls test. Experiments were performed three times.
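The normalization and significance-testing workflow described above can be illustrated with a short, self-contained script. This is a minimal sketch rather than the authors' analysis code: the thrash counts and group sizes are invented, and Tukey's HSD from statsmodels stands in for the Student-Newman-Keuls and Dunn's tests, which are not part of the standard scientific Python stack.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical thrashes/min for three strains at 400 mM ethanol.
groups = {
    "wt_unc-18": rng.normal(60, 8, 12),
    "R39C": rng.normal(75, 8, 12),
    "E465K": rng.normal(74, 8, 12),
}
control_mean = 110.0  # same-day mean thrashes/min in 0 mM ethanol

# Express each animal as a percentage of the 0 mM control mean.
pct = {k: 100.0 * v / control_mean for k, v in groups.items()}

# One-way ANOVA across strains.
f_stat, p_val = stats.f_oneway(*pct.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Post-hoc pairwise comparisons (Tukey HSD as a stand-in for SNK).
values = np.concatenate(list(pct.values()))
labels = np.repeat(list(pct.keys()), [len(v) for v in pct.values()])
print(pairwise_tukeyhsd(values, labels))

Normalizing to a same-day control before testing, as in the assay above, removes day-to-day variation in baseline locomotion from the between-strain comparison.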
2016-05-12T22:15:10.714Z
2013-11-14T00:00:00.000
{ "year": 2013, "sha1": "d53b0342668353d312a5cd1b2c447441c3dc4bf9", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0081117&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d53b0342668353d312a5cd1b2c447441c3dc4bf9", "s2fieldsofstudy": [ "Biology", "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
56485413
pes2o/s2orc
v3-fos-license
Proteomic Characterization of Bradyrhizobium diazoefficiens Bacteroids Reveals a Post-Symbiotic, Hemibiotrophic-Like Lifestyle of the Bacteria within Senescing Soybean Nodules The form and physiology of Bradyrhizobium diazoefficiens after the decline of symbiotic nitrogen fixation have been characterized. Proteomic analyses showed that post-symbiotic B. diazoefficiens underwent metabolic remodeling as well-defined groups of proteins declined, increased or remained unchanged from 56 to 119 days after planting, suggesting a transition to a hemibiotrophic-like lifestyle. Enzymatic analysis showed distinct patterns in both the cytoplasm and the periplasm. Similar to the bacteroid, the post-symbiotic bacteria rely on a non-citric acid cycle supply of succinate and, although viable, they did not demonstrate the ability to grow within the senescent nodule. Introduction In 1879, the German mycologist Heinrich Anton de Bary defined symbiosis as "the living together of unlike organisms" [1]. Nitrogen-fixing symbioses between rhizobia and legumes have been studied since 1888 [2], with the vast majority of investigations describing the infection events and the mature nitrogen-fixing nodule. During nodule formation, the rhizobia transform into a non-growing form capable of reducing atmospheric dinitrogen, called bacteroids. The plant receives reduced nitrogen compounds in exchange for photosynthetically derived substrates transported to the bacteroids to provide the energy for the nitrogen-fixing reactions. In determinate nodules, such as those formed between Bradyrhizobium diazoefficiens and soybean, the nitrogen fixation activity of the nodule increases in parallel with nodule development and then declines as the plant portion of the nodule senesces. Bacteroids obtained from senescing, determinate nodules are able to de-differentiate into free-living bacteria and thus remain viable [3][4][5][6][7]. Bacteroids within the decaying nodule could take advantage of the abundant supply of metabolites from the decaying plant nodule, in effect becoming hemibiotrophs. A hemibiotroph is an organism that is a saprophyte or parasite in living tissue while the plant is alive, and which upon plant death consumes the decaying tissue [8,9]. According to the original definition of Anton de Bary [1], the senescing nodule is no longer a symbiosis, since the unlike organisms are no longer living together, but rather one is surviving on the remains of the other. This post-symbiotic, hemibiotrophic-like lifestyle of the bradyrhizobia has received scant attention, but has significant ecological relevance, as it may be the primary mechanism by which the bacteria are perpetuated in the rhizosphere and soil. The rhizosphere supports a far greater number of bacteria than the bulk soil [10] because up to 20% of the entire carbon fixed photosynthetically by the plant may be excreted from the roots [11]. Unlike the symbiotic state, in which the symbiotic bacteroids are provided with a defined diet of substrates dictated by the plant, the post-symbiotic bacteria are presented with a diverse milieu of metabolites derived from the catabolism of the entire cellular content of plant nodule cells. In contrast to the rhizosphere, where bacteria must compete for excreted materials, the bradyrhizobia are embedded within a rich metabolic matrix, for which they do not need to compete.
Elucidating the genes and molecular events for the survival and perpetuation of applied strains beyond symbiosis in the senescent nodule, and their eventual release into the soil, would be an agricultural and financial benefit to farmers in third-world countries, who lack the resources for annual fertilizer applications. Proteomic and transcriptomic analyses of Bradyrhizobium diazoefficiens bacteroids have been undertaken to better understand the symbiosis between B. diazoefficiens and its legume host soybean (Glycine max) and to improve crop production [12,13]. However, the majority of this work has focused only on the stages from early infection to the peak of symbiotic nitrogen fixation. Though much is known about the process of nodule senescence with regard to the plant, little is known about the determinate bacteroid and its process of post-symbiotic re-differentiation [6,[13][14][15][16]. Only one published proteomics report examines bacteroids past the peak of nitrogen fixation, utilizing soybean root nodules grown under field conditions [12]. This leaves a glaring omission at a critical stage in the natural cycle, where the bradyrhizobia return to the soil. This study was undertaken to provide a global proteomic analysis of the post-symbiotic form of B. diazoefficiens. Purified bacteroids were fractionated into their periplasmic and cytoplasmic compartments and marker enzymes were followed over a period of 9 weeks. The fractionated proteins were prepared for analysis via LC-MS/MS and three general patterns were identified: proteins decreasing in abundance, constitutive proteins, and proteins increasing in abundance. The results of this study should help in understanding how B. diazoefficiens persists after symbiosis and provide greater insight into how the association could be better exploited to increase crop production. Nodule Mass and Leghemoglobin Content Soybean root nodules were measured for mass per nodule and leghemoglobin content over the 9-week (56-119 days after planting) post-symbiotic period. The maximal nitrogen fixation activity was observed on day 43, but by day 55 it had declined to 25% and was negligible by day 95 (data not shown). Nodule mass fluctuated over time, but the leghemoglobin content was consistently between 8 and 9 mg of leghemoglobin per g fresh weight of nodules until day 112, when the leghemoglobin concentration started to decline, reaching a final concentration of 3 mg per g nodule by day 119 (Figure 1). Figure 1. Soybean nodule mass and leghemoglobin content from soybean nodules at various days after planting. The values are the mean ± standard deviation of three replicates. Bacteroid Protein and Poly-β-hydroxybutyrate (PHB) Content and Enzyme Activities in the Post-Symbiotic Period Total bacteroid protein fluctuated over the time course with a pattern similar to, but not identical with, that of nodule mass (Figure 2). Isolated bacteroids were fractionated into periplasmic and cytoplasmic fractions. The periplasm is at the interface between the bacteria and the plant and thus would be expected to respond to changes caused by the post-symbiotic environment. β-Hydroxybutyrate dehydrogenase, a cytoplasmic enzyme marker necessary for the production of polyhydroxybutyrate (PHB), a bacteroid carbon storage polymer associated with effective symbiosis, displayed relatively constant cytoplasmic activity, while its periplasmic activity increased to day 91 and remained relatively constant until it declined at days 112 and 119 (Figure 3).
The PHB content remained relatively unchanged until days 104-112, when it increased nearly 3-fold (Figure 2). The periplasmic marker enzyme cyclic phosphodiesterase displayed a bimodal pattern, while the periplasmic activity increased from day 55 to 91 and then remained constant (Figure 4). Isocitrate dehydrogenase, another cytoplasmic marker enzyme, has been previously shown to decline over the first five weeks of symbiosis [17,18], and Figure 5 shows that it continued to decline and became undetectable at days 112 and 119. Cytoplasmic malate dehydrogenase activity showed a bimodal trend similar to cyclic phosphodiesterase activity, and the periplasmic malate dehydrogenase activity showed a gradual increase through 78 days, a more pronounced increase to 91 days, and a decrease at days 112 and 119 (Figure 6). Protocatechuate 3,4-dioxygenase activity in both fractions showed a bimodal activity profile (Figure 7). Proteomics Time Course LC-MS/MS analysis was performed on the proteins of the cytosolic and periplasmic fractions of bacteroids isolated from soybean plants over the nine-week time course. Periplasmic protein samples covered the entirety of the time course, while cytoplasmic analysis covered the seven time points of days 63, 70, and 91-119. For the cytosolic fraction, 1869 unique peptides were identified, with 706 proteins identified via SEPro. For the periplasmic fraction, 2849 peptides were identified, with 1417 proteins identified via SEPro. TrendQuest from PatternLab for Proteomics identified three unambiguous progressions of peptide frequencies: proteins that declined following symbiosis, proteins that increased following symbiosis, and constitutive proteins (Figures 8 and 9). Other patterns displayed significant fluctuations at various sampling times that are difficult to interpret; these proteins may be more responsive to climatic or soil conditions than those of the three unambiguous patterns. The sampling time points include the development of nitrogen fixation activity, and proteins known to be involved in this process were identified. Proteins known to participate in nodule initiation were absent and had likely been degraded, having served their purpose, by the first sampling point of functional nodules actively reducing atmospheric dinitrogen. Proteins associated with symbiotic nitrogen fixation were identified: the nitrogenase metallo-cluster biosynthetic protein (blr1756), nitrogenase molybdenum-cofactor synthesis protein (blr1746), nitrogenase stabilizing protein (blr1771), glutathione synthetase (bll0668), alanine dehydrogenase (blr3179), alanine racemase (bll4070), serine hydroxymethyltransferase (bll5033), L-asparaginase (bll4950), aspartate-semialdehyde dehydrogenase (bll0501), and aspartate aminotransferase (bll7416). All of these proteins declined markedly during senescence.
Proteins That Declined Following Symbiosis The rate of protein synthesis and protein turnover have been shown to decline during nodule development due to the diversion of cellular energy to nitrogen fixation [19] and, as expected, in the post-symbiotic period the proteins directly associated with nitrogen fixation, the two component proteins of nitrogenase (blr1743, blr1744) and fixC, a flavoprotein dehydrogenase (blr1774), were found to decrease over the nine-week time course (Table 1). All three proteins are regulated by RegR under microoxic conditions [20,21]. The ability to assimilate fixed nitrogen into transferable amino acids decreased over time, as the aminotransferase proteins (blr1686, blr4134), glutamate synthase (blr7743), glutamine synthetase I (blr4949), and two enzymes for branched-chain amino acid production, 3-isopropylmalate dehydrogenase (bll0504) and 3-isopropylmalate isomerase (blr0488), all decreased over the time course. Succinate-semialdehyde dehydrogenase (blr0807), which is necessary for the breakdown of glutamate and phenylalanine to succinate [22], also declined. Proteins of glycolysis and gluconeogenesis were well represented in the decreasing data set: pyruvate dehydrogenase (bll4782), phosphoenolpyruvate carboxykinase (bll8141), fructose bisphosphate aldolase (bll1520), and enolase (bll4794). Pyruvate dehydrogenase (bll4782) provides a link between glycolysis and branched-chain amino acid biosynthesis. The citric acid cycle enzymes succinyl-CoA synthetase (bll0455) and succinate dehydrogenase (blr0514) were found to decrease over time as well, consistent with the declining cellular energy needs for nitrogen fixation and the reduced need for carbon backbones for the production of amino acids. A large number of proteins associated with the ribosome were found to decline. The symbiosis-specific GroEL/S3 proteins (blr2059, blr2060) were notable as they serve as a marker of the decline of the symbiotic state of the bacteroid, since GroEL/S3 are induced during the symbiotic state and are regulated by NifA [23,24]. The decline of several proteases, LA protease (bll4942), a serine transmembrane protease (bll6508), and a zinc protease (blr7485), may suggest a physiological adaptation following symbiosis. The 30S ribosomal proteins S1, S4, S7, and S18 and the 50S ribosomal protein L14 all decreased beyond 91 days. Proteins That Increased Following Symbiosis The number of proteins found to increase over the time course (Table 2) was much lower than that of the proteins in decline (Table 1). Half of the proteins associated with this pattern were unknown or hypothetical proteins. Annotated proteins in this pattern include the fatty acid metabolism proteins enoyl-CoA hydratase (blr1160), acetyl-CoA carboxylase (blr0191), acyl-CoA thiolase (blr1159), and enoyl-CoA hydratase (bll7821). CheY (bll7795), a two-component transcriptional regulator found to be expressed during times of desiccation stress [25], increased over the time course, as did a carboxy-terminal protease (blr0434) and a peptidyl cis-trans isomerase (bll4690), which is required for proper protein folding. Among the proteins without annotation, bll2012 and blr1830 were found to be induced by soybean seed extracts [26]. Discussion Bacteroid is the term that refers to the symbiotic, nitrogen-fixing form of rhizobia. Franck et al. have demonstrated that post-symbiotic bacteroids are transcriptionally active up to 95 days after planting [33].
The data collected over the period from 56 to 119 days after planting clearly demonstrate the metabolic activity of the bacteria that reside within the decaying plant nodule. The bacteria, although possessing enzyme activity, do not possess nitrogenase activity, the central metabolic activity of the symbiosis; furthermore, the symbiosis no longer exists as per the definition of Anton de Bary [1], who defined symbiosis as "the living together of unlike organisms". Thus, the post-symbiotic form should not be called "bacteroids", as they no longer possess two of the key features of the symbiosis, nitrogenase and a living host partner. However, like bacteroids, the post-symbiotic form(s) of the bacteria do not display any of the proteins or processes consistent with cellular growth and division, but they can be extracted from senescing nodules and grown on artificial medium [3-7,34,35]. Studies during the developmental time course of B. diazoefficiens bacteroids through symbiosis have followed several enzymes, including nitrogenase, citric acid cycle enzymes, and the carbon storage compound poly-β-hydroxybutyrate [21,36]. These enzymes represent the fixing of atmospheric dinitrogen, the energy metabolism for nitrogen fixation, and the storage of carbon metabolites in the determinate nodule system. Other studies have looked at the effects of mutations in hydrogenase systems on nitrogen fixation, leghemoglobin content, and nodule physiology up to 71 days after emergence [37]. Beyond these studies, there is at present no knowledge about the changes that the B. diazoefficiens bacteroid experiences during its re-differentiation to a free-living bacterium in the post-symbiotic state [3-7,36,37]. The enzymatic and proteomic analyses reported here and the transcriptomic analysis [33] provide insight into the physiological nature of the post-symbiotic form of B. diazoefficiens. The retention of metabolic and transcriptional [33] activity by the bacteria as the plant cells die fits the definition of hemibiotrophy [8,9]. A hemibiotroph is defined as an organism that is saprophytic or parasitic in living tissue while the plant is alive, and which upon plant death consumes the dead tissue [8,9]. Although a symbiont and not a parasite, B. diazoefficiens survives on plant-supplied metabolites during symbiosis and remains viable by consuming decaying plant compounds. B. diazoefficiens should be considered a highly specialized hemibiotroph, as it is restricted to limited plant hosts and a single, specialized plant organ, the nodule formed via symbiosis. The specificity of the infection process, and the sequestration of the symbiont within the senescing nodule, have apparently limited the expression of the hemibiotrophic lifestyle of B. diazoefficiens, as it is not known to be a necrotroph on other plants. The senescing nodule would be a metabolite-rich environment, with active proteases from the plant cells providing amino acids and peptides as metabolites [38]. A number of enzymatic and transport activities were identified among the constitutive and up-regulated proteins, suggesting that the post-symbiotic form of B. diazoefficiens was accumulating and hydrolyzing peptides from the decaying plant nodule cells (Tables 2 and 3). A previous study of 28-day-old, greenhouse-grown B. diazoefficiens bacteroids, at the period of maximal nitrogenase activity, indicated no defined fatty acid metabolism [12].
Fatty acid metabolism was markedly increased in the post-symbiotic period (Table 2). The symbiosome membrane amounts to approximately 30 times more membrane than the plasma membrane [39]. The turnover of membrane lipids derived from the senescing plant cell (both symbiosome and plasma membrane) could provide a rich source of energy for the post-symbiotic, hemibiotrophic-like B. diazoefficiens. The bacteroids of winged bean appear to be protected from degradation via a 21 kDa nodulin that is homologous to a plant Kunitz trypsin inhibitor [40]. This raises the issue that the senescing nodule is not only a source of nutrients, but also a source of potentially harmful hydrolases from which the post-symbiotic bacteria need protection. The pyrroloquinoline quinone (PQQ)-dependent alcohol dehydrogenase (Table 3) further supports a role in bacterial protection, as it was previously found to be one of three PQQ-dependent dehydrogenases induced during osmotic stress [25]. The presence of CheY suggests that the post-symbiotic bacteria are able to respond to the changing conditions within the senescing nodule (Table 2). The presence of enzymes of reactive oxygen metabolism suggests a protective mechanism against these species, which are generated during plant nodule senescence (Table 3). The senescence of the plant nodule cells leads to the loss of functional symbiosome and plant plasma membranes; thus, the selectivity of metabolites transported to the bacteria is no longer restricted, and the bacterial periplasm must adapt to the diversity of metabolites produced from plant cell degradation. For example, during symbiosis the bacteroids receive malate from the plant. Post-symbiotically, the sources of malate and of dicarboxylates change. Curiously, the protocatechuate 3,4-dioxygenase and malate dehydrogenase activities display inverse patterns between 56 and 119 days in both the periplasmic and cytoplasmic fractions (Figures 6 and 7). The two metabolic sources of dicarboxylates, represented by protocatechuate dioxygenase and malate dehydrogenase, appear to be inversely regulated to maintain a constant supply of dicarboxylates as the nodule environment changes. The combined activity profiles of the enzymes, and the transcripts [33], of the post-symbiotic bacteria each demonstrate unique patterns, suggesting that the post-symbiotic bacteria are actively and purposely mounting metabolic responses to their changing environment. The presence of a heat shock protein (blr0678), a cold shock protein (bsl1386), and a peptidyl-prolyl cis-trans isomerase (bll5690) suggests that the bacteria have the means for the remodeling of the bacteroid as it transitions to the post-symbiotic form (Tables 2 and 3). However, the 30S ribosomal proteins S1, S4, S7, and S18 and the 50S ribosomal protein L14 showed a decrease beyond 91 days (Table 1). Franck et al. [33] showed that the transcripts for the ribosomal proteins remain fairly constant up to 78 days, but some ribosomal proteins, particularly the 30S ribosomal proteins S3, S21, S10, and S17 and the 50S ribosomal proteins L16 and L30, declined at 95 days after planting. This indicates a loss of overall translational activity despite the increase of select proteins at the last time point (data not shown). Levels of poly-β-hydroxybutyrate (PHB), the major storage compound in determinate nodules, but not indeterminate nodules [41], were stable over the majority of the time course and then surprisingly increased nearly 3-fold between days 104-112 (Figure 2), without a corresponding change in β-hydroxybutyrate dehydrogenase activity (Figure 3).
The reduction in translational activity combined with the increase in PHB suggests that the bacteria may have reached a state of nutrient exhaustion and/or a build-up of waste products, and will enter a quiescent state until they can be released from the lignified nodule exterior and returned to the soil. Three soybean proteins, histone H4, histone H3, and a glu/leu/phe/val dehydrogenase (Glyma02g38920.1, Glyma06g32880.1, Glyma16g04560.1), were present in the periplasm throughout the time course (Table 3). Previously, it was demonstrated that histone H2A and lipoxygenase were localized to the bacteroid surface [18]. The presence of these soybean proteins in the periplasmic fractions of the post-symbiotic form of B. diazoefficiens suggests a role for these proteins and implies a continuation, albeit limited to a few specific intermolecular interactions, of the symbiosis between the two former symbionts beyond the period of nitrogen fixation. In summary, the post-symbiotic form of B. diazoefficiens remains transcriptionally, translationally, and metabolically active late into senescence. During senescence, B. diazoefficiens transitions to a hemibiotrophic-like lifestyle that may still benefit from soybean-derived proteins and membranes. Source of Nodules and Bacteroid Preparations Soybean plants were obtained from the Bradford Research and Extension Center of the University of Missouri over a nine-week period (56-119 days after planting). B. diazoefficiens strains were residual and the seeds were not inoculated prior to planting. Soybean plants were harvested from the same, un-irrigated field between 8 and 9 a.m. Approximately 100 plants were harvested at each sampling. Intact roots and nodules attached to the tap root were placed in ice water and then harvested at 4 °C. Bacteroids were isolated as described previously [17] and enzyme assays were performed on the same day. Cytoplasmic and periplasmic fractions were prepared as described previously [42]. Briefly, 30 g of bacteroids from each biological replicate were aliquoted into 10 g amounts and resuspended in 10 mL of 25 mM citrate buffer, pH 4.0. After incubation at room temperature for 15 min, they were centrifuged at 12,000× g for 15 min and the pellet was gently brought up into isolation buffer (50 mM Tris-HCl, pH 7.5, with 2 mM EDTA and 20% (w/v) sucrose) using a #4 tapered-tip artist paint brush. The suspension was treated with Ready-Lyse lysozyme solution (25,000 U; Epicentre, Madison, WI, USA) and protease inhibitor cocktail (10 µL; Calbiochem, Rockland, MA, USA), mixed gently, and then incubated for 30 min at room temperature. The periplasm was obtained by centrifugation at 12,000× g for 10 min at 4 °C. The pelleted bacteroid spheroplasts were gently suspended in 15 mL of MEP buffer (5 mM MgCl2, 1 mM EDTA, 50 mM K-phosphate buffer, pH 7.0), to which 10 µL of protease inhibitor cocktail was added. The spheroplasts were then ruptured in a French press [17]. Enzymatic, Leghemoglobin, and Poly-β-hydroxybutyrate (PHB) Analysis To ascertain the level of purity of the bacteroid periplasmic fraction, several enzymes known to be cytoplasmic were measured in both fractions, as well as cyclic phosphodiesterase, a known periplasmic marker, to assay the extent of periplasmic release, as previously outlined for rhizobia and bradyrhizobia bacteroids [43]. Also measured for activity was the possible periplasmic enzyme protocatechuate 3,4-dioxygenase.
Procedures for the measurement of enzymes were described previously [17], except for protocatechuate 3,4-dioxygenase, which was measured by adding 50 µL of enzyme extract to 900 µL of 50 mM CHES, pH 9.3, and 50 µL of 40 mM protocatechuate and recording the absorbance at 290 nm (molar absorptivity 3.8 mM−1 cm−1; a worked example of the conversion is sketched below). Leghemoglobin concentration was measured using Drabkin's reagent [44]. Poly-β-hydroxybutyrate (PHB) was measured as described by Karr et al. [45]. The large fluctuations in the data can be attributed to weather over particular sampling periods, but no effort was made to adjust the data accordingly. Protein Isolation and Identification The periplasmic and cytoplasmic fractions were each precipitated using equal volumes of phenol. The fractions were mixed at room temperature for one hour. Phases were then separated by centrifugation at 4000× g for 10 min at 4 °C. The phenol phase was collected and four volumes of 100% methanol containing 0.1 M ammonium acetate and 10 mM dithiothreitol were added. Protein was precipitated overnight at −20 °C. The protein precipitate was collected by centrifugation at 4000× g for 10 min at 4 °C. The protein pellet was washed once with the methanol/ammonium acetate/dithiothreitol (DTT) solution. The protein pellet was then washed three times with 90% ethanol containing 10 mM DTT and stored in 90% (v/v) ethanol/DTT at −80 °C. Precipitated protein in 90% (v/v) ethanol/DTT was collected by centrifugation at 4000× g and 4 °C. Reconstitution buffer (30 mM Tris-HCl, 7 M urea, 2 M thiourea, 4% (w/v) CHAPS, pH 8.8) was added to the pellet, followed by gentle vortexing for one hour. A 20 µg portion of protein from each sample, quantified by the Bradford method, was removed and diluted to 1 µg/µL with reconstitution buffer. Bovine serum albumin was added as an internal standard to give a protein ratio of 1% (w/w). Disulfide bonds were reduced with 10 mM DTT (100 mM stock in 50 mM ammonium bicarbonate) at 25 °C for 1 h, then alkylated with 40 mM iodoacetamide (200 mM stock in 50 mM ammonium bicarbonate) at 25 °C in the dark for 1 h, and finally quenched with additional DTT to 30 mM (100 mM stock in 50 mM ammonium bicarbonate) and incubated at 25 °C for 30 min. Urea was brought to 1 M by dilution with 50 mM ammonium bicarbonate. Trypsin (Sequencing Grade Modified; Promega, Madison, WI, USA) was reconstituted to 0.02 µg/µL and activated as per the manufacturer's instructions in the provided resuspension buffer (1:200 w/w, trypsin:sample). Samples were incubated at 37 °C for 16 h. Digests were then lyophilized to dryness. Mass Spectrometry Analysis Lyophilized protein samples were reconstituted in 100 µL of 18 MΩ water with 0.1% (v/v) formic acid and 5.0% (v/v) acetonitrile by pipetting and mild vortexing. Samples were spun at 13,000 rpm at 4 °C for 10 min in a tabletop centrifuge to remove insoluble debris. Twenty-µL portions from each sample were placed in polypropylene 96 v-well plates and covered with adhesive film. The plates were centrifuged to collect samples at the bottom of the wells and then placed in the precooled tray of the LC autosampler. Ten 10-µL injections were analyzed on an LTQ ProteomeX linear ion trap LC-MS/MS instrument (Thermo Fisher, San Jose, CA, USA).
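The dioxygenase assay described above reports activity as an absorbance change at 290 nm, so conversion to a reaction rate is a direct Beer-Lambert calculation (A = ε·c·l). The following sketch shows that arithmetic; the absorbance readings and the 1-cm path length are assumptions made for illustration, while the molar absorptivity and total assay volume come from the assay description.

# Convert an A290 time course from the protocatechuate 3,4-dioxygenase
# assay into an activity, using the Beer-Lambert law A = epsilon * c * l.
EPSILON = 3.8         # mM^-1 cm^-1 at 290 nm (from the assay description)
PATH_LENGTH_CM = 1.0  # assumed cuvette path length
VOLUME_ML = 1.0       # 50 + 900 + 50 µL total assay volume

# Hypothetical readings taken at 1-min intervals.
a290 = [0.950, 0.912, 0.874, 0.836, 0.798]

# Rate of absorbance decrease as substrate is consumed.
delta_a_per_min = (a290[0] - a290[-1]) / (len(a290) - 1)

# Beer-Lambert: concentration change (mM/min) = dA / (epsilon * l).
delta_mM_per_min = delta_a_per_min / (EPSILON * PATH_LENGTH_CM)

# mM (µmol/mL) times mL gives µmol; x1000 gives nmol of substrate/min.
nmol_per_min = delta_mM_per_min * VOLUME_ML * 1000.0

print(f"dA290/min = {delta_a_per_min:.4f}")
print(f"activity  = {nmol_per_min:.1f} nmol substrate consumed/min")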
C8 captraps (Michrom Bioresources, Auburn, CA, USA) were used to concentrate and desalt peptides before final separation by C18 column chromatography (acetonitrile gradient of 0-90% solvent B (100% acetonitrile with 0.1% (v/v) formic acid) in solvent A (deionized 18 MΩ water with 0.1% (v/v) formic acid) over a duration of 110 min). The peptide trap and C18 column were re-equilibrated with 100% solvent A for 25 min before applying the next sample. LC separation was performed using fused silica nanospray needles (26 cm length, 360 µm outer diameter, 150 µm inner diameter; Polymicro Technologies, Phoenix, AZ, USA), packed with "Magic C18" (200 Å, 5 µm particles; Michrom Bioresources) in 100% methanol. Columns were equilibrated for 3-4 h at 200 nL/min (at the column tip) with a 60:40 mix of solvent B to solvent A prior to sample application. Sample analysis was performed in the data-dependent positive acquisition mode on the LC-MS/MS instrument, with a normal scan rate for precursor ion analysis and dynamic exclusion enabled (1 repeat count, 30 s repeat duration, 30 s exclusion, list size of 50). After each full scan (400-2000 m/z), a data-dependent triggered MS/MS scan was obtained for the three most intense parent ions. The nanospray column was maintained at an ion spray voltage of 2.0 kV. Peptide Match Filtering In order to filter the SEQUEST matches for nonrandom hits, the files were converted to ".SQT" file format. Filtering of the ".SQT" files was performed using SEPro (Warrendale, PA, USA) [22] with the following settings: spectral FDR, 5%; peptide FDR, 3%; and final filtering of protein hits at 1%. All filtered data were saved as SEPro file outputs (.spr). Protein Expression Trends Spectral count data associated with the protein IDs provided by SEPro were used for trend analysis via the proteomic analysis software PatternLab [46]. As per the software workflows, PatternLab input files (SparseMatrix and index files) were created using the Regrouper software (Pittsburgh, PA, USA). Folders for each time point in the time course were created, and the selected SEPro files for each time point were placed into the folders. Regrouper was pointed to these folders, and the SparseMatrix and index files were created. These files were provided to PatternLab in the TrendQuest module. Trends were created using an assigned minimum average signal of 2 per 6 replicates (3 biological, 2 technical), with a minimum of 2 data points, a minimum of 3 items per cluster, and a health of 0.800. Funding: There was no external funding for this research.
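TrendQuest's grouping of spectral-count profiles can be approximated, for intuition, by asking whether each protein's counts vary monotonically with time. The sketch below is an illustrative analog only, not PatternLab's algorithm: the sampling days and spectral counts are invented, and a Spearman correlation replaces the module's cluster-health criterion.

import numpy as np
from scipy.stats import spearmanr

# Approximate days after planting for the cytoplasmic series (illustrative).
days = np.array([63, 70, 91, 98, 104, 112, 119])

# Hypothetical mean spectral counts per protein across the time course.
profiles = {
    "blr1743_declining":    np.array([40, 35, 20, 14, 9, 5, 3]),
    "blr1160_increasing":   np.array([2, 3, 5, 8, 10, 13, 15]),
    "bll0000_constitutive": np.array([7, 8, 7, 6, 8, 7, 7]),
}

for name, counts in profiles.items():
    rho, p = spearmanr(days, counts)
    if p < 0.05 and rho > 0:
        trend = "increased following symbiosis"
    elif p < 0.05 and rho < 0:
        trend = "declined following symbiosis"
    else:
        trend = "constitutive / ambiguous"
    print(f"{name}: rho = {rho:+.2f}, p = {p:.3f} -> {trend}")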
2018-12-15T14:02:35.083Z
2018-12-01T00:00:00.000
{ "year": 2018, "sha1": "6eb30ed0c3c9817ba0da10bbc633ab0513a5426a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/19/12/3947/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6eb30ed0c3c9817ba0da10bbc633ab0513a5426a", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
1866245
pes2o/s2orc
v3-fos-license
Characterization of the major core structures of the alpha2-->8-linked polysialic acid-containing glycan chains present in neural cell adhesion molecule in embryonic chick brains. To gain more insight into the possible functional significance of the core glycan chain(s) on which polysialylation takes place in polysialic acid (poly-Sia)-containing glycoproteins, the structures of the core glycans in the embryonic form of chick brain neural cell adhesion molecule (N-CAM) were examined using chemical and instrumental techniques. The following new structural features, which had not been reported by the early pioneering study by Finne (Finne, J. (1982) J. Biol. Chem. 257, 11966-11970), were revealed (Structure I). (i) Two distinct types of multiantennary N-linked glycans, i.e. tri- and tetra-antennary structures, are present; (ii) an α1→6-linked fucosyl residue is attached to the proximal GlcNAc residue of the di-N-acetylchitobiosyl unit; (iii) that the action of GlcNAc-transferase V, which catalyzes the attachment of the β-(1→6)-linked GlcNAc residue on the (1→6)-α-linked mannose (Man) arm, appears to be essential for polysialylation to occur on the core glycan chain is suggested by the fact that the Man residue α1→6-linked to the β-linked Man residue is invariably 2,6-di-O-substituted by the GlcNAc residue; (iv) both type 1 (Galβ1→3GlcNAc) and type 2 (Galβ1→4GlcNAc) sequences are present in the peripheral portion of the core glycan structure. An extended form of the type 2 chain, i.e. Galβ1→4GlcNAcβ1→3Galβ1→4GlcNAc, is also expressed on the (1→3)- and (1→6)-α-linked Man arms; (v) on average about 1.4 mol of sulfate is attached to the type 2 N-acetyllactosamine chain(s), where in the extended form the sulfate group is probably substituted at the O-3 position of the outermost GlcNAc residue, i.e. Galβ1→4(HSO3→3)GlcNAcβ1→3Galβ1→4GlcNAcβ1→Man. Structure I. The overall structure of α2→8-linked polySia chain-containing tri- and tetraantennary glycans present in the chick embryonic brain N-CAM molecule. It is possible that the unusual structural features identified in this study might play a role in the initiation of polysialylation, and our data should facilitate future research regarding the signals that control polysialylation. Neural cell adhesion molecule (N-CAM) is a widely distributed cell-surface glycoprotein, which mediates and regulates various cell-cell interactions (1). Multiple molecular forms of N-CAM are now known to be expressed from a single-copy gene, depending on the spatiotemporal stages of the cells (1). The molecular diversity is produced by various modifications. One is alternative splicing, which gives rise to three major forms of N-CAM with different membrane-anchoring modes, whose extracellular domains contain a tandem alignment of immunoglobulin-like (Ig) and fibronectin-type (III) domains (1). Modification by glycosylation, sulfation, and phosphorylation of the protein core also results in great molecular diversity and consequent functional differences (1)(2)(3)(4). Addition of polysialic acid (polySia), which is a unique homopolymer of α2→8-linked sialic acid (Sia), is the most important modification of N-CAM. Expression of polySia chains on N-CAM is developmentally regulated and negatively affects the adhesive properties of the cells (5,6). Polysialylation occurs on N-glycan chains in the fifth Ig domain, where two of the three N-glycosylation sites have been shown to be polysialylated (7).
Interestingly, in embryonic vertebrate brain, N-CAM is the major carrier protein of polySia chains (8). The question of why N-CAM is selectively polysialylated in embryonic brains remains unelucidated. It is possible that a particular protein sequence may play a role in determining the expression of polysialyl units on certain glycoproteins, as is the case with mannose-6-phosphate-bearing lysosomal enzymes (9) and 4-O-sulfated GalNAc-terminated glycoprotein hormones (10). In a recent report on polysialylation of N-CAM, the importance of the domain organization of the protein was suggested, although the involvement of a specific protein sequence determinant was not ruled out (7). Alternatively, it is also conceivable that the core glycan structure codes a signal for initiation of polysialylation. For example, the sulfotransferase responsible for 4-O-sulfation of terminal GalNAc is known not to recognize the peptide sequence of the acceptor glycoprotein hormones (11). Thus, a cryptic signal, which triggers initiation of polysialylation, could be present in the core glycan chain. We have recently demonstrated that biosynthesis of polysialic acid chains on O-glycans of fish egg polysialoglycoproteins involves at least three distinct sialyltransferase (ST)-catalyzed reactions and a KDN capping reaction (12,13): (i) α2→6-ST-catalyzed reaction: →4Galβ1→3GalNAcα1→Thr (or Ser) + CMP-Sia → Siaα2→6(→4Galβ1→3)GalNAcα1→Thr (or Ser) + CMP; (ii) α2→8-ST- or initiase-catalyzed reaction: Siaα2→6(→4Galβ1→3)GalNAcα1→Thr (or Ser) + CMP-Sia → Siaα2→8Siaα2→6(→4Galβ1→3)GalNAcα1→Thr (or Ser) + CMP; (iii) α2→8-polyST- or polymerase-catalyzed reaction: Siaα2→8Siaα2→6(→4Galβ1→3)GalNAcα1→Thr (or Ser) + nCMP-Sia → Siaα2→8[Siaα2→8]nSiaα2→6(→4Galβ1→3)GalNAcα1→Thr (or Ser) + nCMP; (iv) α2→8-KDN-transferase- or terminase-catalyzed reaction: Siaα2→8[Siaα2→8]nSiaα2→6(→4Galβ1→3)GalNAcα1→Thr (or Ser) + CMP-KDN → KDNα2→8Siaα2→8[Siaα2→8]nSiaα2→6(→4Galβ1→3)GalNAcα1→Thr (or Ser) + CMP. Likewise, the biosynthetic pathway for polysialic acid chain formation on N-glycans is presumably similarly complicated, and a series of transferases are probably involved besides those needed for core glycosylation. Much effort has been devoted to clarifying the biosynthetic mechanism of polySia chain formation in embryonic brains using partially purified enzyme preparations, and it is now known that the developmentally dependent up- and down-regulation of expression of enzyme activity parallels the polySia epitope expression (14,15). Recently, several reports on expression cloning of α2→8-polySTs from different animal origins have appeared, permitting elucidation of regulatory mechanisms of polySia expression using molecular biological approaches (16-21). At least two distinct types of α2→8-polySTs were shown to exist in rat brain (20). However, it is still unclear how critical these enzymes are in the in vivo biosynthesis of polySia chains, and nothing is known about their substrate specificities or whether such enzymes are regulated by particular protein sequences or glycan moieties. To address these important problems, we are determining detailed structures of the glycan chains present in N-CAM molecules. In this paper we report the results of our studies on the structural determination of the polysialylated glycan chain(s) present in the embryonic form of chicken brain N-CAM.
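The four reactions above can be read as a pipeline in which each enzyme extends the product of the previous one. Purely as bookkeeping, the sketch below strings the steps together so that the order of the ST, initiase, polymerase, and terminase activities is explicit; the simplified core notation and the chain length n are arbitrary choices for the demonstration and imply nothing about enzyme mechanism.

# Bookkeeping model of reactions (i)-(iv): each enzyme prepends a residue
# (or a run of residues) onto the growing glycan, written as a string.
CORE = "(Galb1-3)GalNAca1-Thr"  # simplified O-glycan core

def alpha2_6_st(acceptor):
    # (i) initial alpha2->6 sialylation of the GalNAc core
    return "Siaa2-6" + acceptor

def alpha2_8_initiase(acceptor):
    # (ii) first alpha2->8 linkage onto the alpha2->6-linked Sia
    return "Siaa2-8" + acceptor

def alpha2_8_polymerase(acceptor, n):
    # (iii) processive addition of n further alpha2->8-linked Sia
    return "[Siaa2-8]" * n + acceptor

def kdn_terminase(acceptor):
    # (iv) capping of the chain with a single alpha2->8-linked KDN
    return "KDNa2-8" + acceptor

glycan = kdn_terminase(alpha2_8_polymerase(alpha2_8_initiase(alpha2_6_st(CORE)), n=3))
print(glycan)
# -> KDNa2-8[Siaa2-8][Siaa2-8][Siaa2-8]Siaa2-8Siaa2-6(Galb1-3)GalNAca1-Thr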
In 1982, Finne (22) reported evidence suggesting the presence of both tri- and tetraantennary structures in the polysialylated glycan chains of N-CAM from fetal rat brains, although no further details were described. The present study has revealed several new structural features, not reported by the early pioneering work of Finne (22), which include some possible candidates for polysialylation signals. Preparation of Polysialyl Glycopeptide Fraction (PSgp) In one experiment, 100 brains (40 g) were homogenized in 120 ml of 10 mM Tris-HCl, pH 8.0, and mixed with 320 ml of methanol and 160 ml of chloroform by stirring for 30 min. After centrifugation at 14,000 rpm at 4°C for 20 min, the pellet was suspended in 300 ml of 10 mM Tris-HCl (pH 8.0)/methanol/chloroform (3:8:4, v/v), stirred for 30 min, and centrifuged. The pellet was suspended in 100 ml of ethanol and filtered to remove chloroform and methanol. The residue (about 15 g) was suspended in 380 ml of 0.1 M Tris-HCl, pH 8.0, containing 10 mM CaCl2, and incubated under toluene with 0.2 g of Actinase E (Kaken Co. Ltd., Tokyo, Japan) at 37°C. After 24 and 48 h, 0.2 g portions of Actinase E were added. After a 72-h incubation, the digest was centrifuged at 12,000 rpm at 4°C. The supernatant prepared from 500 brains was applied to a column of DEAE-Sephadex A-25 (equilibrated with 10 mM Tris-HCl, pH 7.0) and eluted stepwise with 1.0 liter each of 0, 0.2, 0.6, and 1.0 M NaCl in the equilibration buffer. The 0.6 M NaCl fraction was diluted with 2.0 liters of 0.1 M Tris-HCl, pH 7.0, and applied to a DEAE-Sephadex A-25 column, eluted first with 0.2 M NaCl in the same buffer, and next with 900 ml of a linear gradient of 0.2-0.7 M NaCl in the same buffer. The elution profile was monitored for sialic acid by the TBA method (25,26). The fractions were also tested for polySia by measuring the rate of acid hydrolysis as described below. The pooled fraction positive for polySia was concentrated and applied to a Sephacryl S-200 column. The molecular weight markers used were dextran (Sigma; Mr 487,000, 72,200, 39,000, and 9,400) and galactose. The polySia-positive fraction was dialyzed and subjected to DEAE-Sephadex A-25 chromatography. The column was eluted first with the equilibration buffer, and then with 900 ml of a linear gradient of 0-0.7 M NaCl in the same buffer. The pooled polySia-positive fraction was desalted. Preparation of Asialo-PSgp and Endosialidase-treated PSgp, Endo-PSgp PSgp (3.6 mg as Neu5Ac), prepared from 450 brains, was digested with 600 milliunits of Arthrobacter ureafaciens exosialidase (Nacalai Co., Kyoto, Japan) at 37°C for 72 h (27). Released Sia was removed by Sephadex G-25 gel filtration. The flow-through glycopeptide fraction was treated with 0.1 M NaOH for 30 min, neutralized with 0.1 M HCl, and redigested with 75 milliunits of the sialidase at 37°C for 30 h. The digest was chromatographed on a Sephadex G-50 column (eluted with 50 mM NH4HCO3) and the asialo-glycopeptide fraction, asialo-PSgp, was desalted. Asialo-PSgp was applied to a DEAE-Sephadex A-25 column and eluted first with 2 ml of 10 mM Tris-HCl, pH 7.0, and next with 2 ml of 0.3 M NaCl in the same buffer. The 0.3 M NaCl fractions were pooled, rechromatographed on a Sephadex G-50 column, and desalted. PSgp (4 mg as Neu5Ac) from 500 brains was digested with a total of 750 milliunits of Endo-N at 37°C for 72 h in 20 mM Tris-HCl, pH 7.4.
During the incubation, an aliquot was examined every 12 h for the absence of polymer by TLC (see below), and additional enzyme (125 milliunits each time) was added to the reaction mixture. Digestion was continued until no further change in the chain length of the released oligoSia was observed after prolonged incubation with additional enzyme. Endo-PSgp was separated from oligoSia on a Sephadex G-50 column and desalted. Identification and Quantitation of O-Acetyl Neu5Ac in the PolySia Chain of PSgp The oligoSia fraction obtained by Endo-N treatment of PSgp was incubated in 0.01 M trifluoroacetic acid at 70°C for 30 min and applied to a Sephadex A-25 column (equilibrated with 10 mM Tris-HCl, pH 7.0). The column was eluted with 5 ml each of 0, 0.05, and 0.2 M NaCl in the same buffer. The 0.05 M NaCl fraction that contained Sia was desalted and subjected to preparative TLC. The Sia fraction was spotted on a 0.2-mm-thick silica gel plate (Kieselgel 60, Merck) and developed in 1-butanol/1-propanol/water (5:10:3, v/v/v) (28). A slit of the plate was visualized by the resorcinol method (29), and the band corresponding to O-acetylated Neu5Ac was extracted with 10% ethanol for FAB-MS measurement. For quantitating O-Ac Neu5Ac, the oligoSia fraction was hydrolyzed as described above, and the O-Ac Neu5Ac and Neu5Ac obtained by preparative TLC were analyzed by the TBA method (25,26). Identification and quantitation of O-Ac Neu5Ac in PSgp were also made by fluorometric HPLC, essentially according to the method of Hara et al. (30). A glycoprotein fraction isolated from carp eggs was used as a source of Neu5,7Ac2, Neu5,8Ac2, Neu5,9Ac2, and Neu5,7(8),9Ac3. A sialooligosaccharide isolated from the eggs of Tribolodon hakonensis (a dace) was used as a source of Neu4,5Ac2 (31). Determination of Sia and PolySia Sia was determined by mild acid hydrolysis followed by mild methanolysis/GLC (32). The presence of polySia was analyzed by the following methods. Acid Hydrolysis Rate Measurement-Each sample (1.6 µg as Neu5Ac) was hydrolyzed in 0.05 M trifluoroacetic acid at 80°C for 0 to 3 h and the released Sia was determined by the TBA method. The rate of hydrolysis of α2→8-linked polySia has been shown to be significantly slower than that of α2→3- and α2→6-linked Sia. Incubation periods necessary for completion of hydrolysis were 3 h for α2→8-linked polySia and 30-45 min for α2→3- and α2→6-sialosides (33). Mild Acid Hydrolysis-TLC-A sample (10 µg as Neu5Ac) was partially hydrolyzed in 0.05 M trifluoroacetic acid at 80°C for 15 min. The hydrolysate was analyzed for the formation of oligoSia by TLC (see below). Chemical Analysis Sia was quantitated by the TBA method and the resorcinol method (25,26,29). Carbohydrate composition and amino acid analyses were carried out as previously reported (35,36). Methylation analysis of glycopeptides and oligosaccharides was carried out according to Anumula and Taylor (37). Partially methylated alditol and hexosaminitol acetates were quantitated by GLC analysis (38). Sulfate ion was determined by HPLC analysis of acid hydrolysates of the samples.
Sulfate ion was determined by HPLC analysis of the acid hydrolysates of samples. Five to 10 nmol of each sample were hydrolyzed in vacuo in 6 M HCl at 110°C for 24 h and applied to a TSK gel IC-Anion PW column, which was equilibrated with 0.5 mM sodium phthalate in a solution containing 0.036% boric acid, 0.05% sodium tetraborate, 0.2% (w/v) sodium gluconate, 12% acetonitrile, and 3% (v/v) 1-butanol, and eluted with the same solution. Elution was monitored and quantitated by measuring the absorbance at 265 nm, using a sodium sulfate solution as the standard.

Desulfation by Mild Methanolysis

Asialo-PSgp and the Smith degradation products (see below) were desulfated by methanolysis. Samples were dried in vacuo in a desiccator and incubated in 0.5 M methanolic HCl at 25°C for 5 h. After the solvent was removed, the residue was dissolved in 200 μl of 10 mM Tris-HCl, pH 7.0, and loaded on a DEAE-Sephadex A-25 column. After the column was washed with the same buffer, the flow-through fraction was desalted.

Periodate Oxidation

Asialo-PSgp and Endo-PSgp (1.5–2.5 nmol) were dissolved in 19 μl of 26 mM sodium periodate, 0.53 M sodium acetate, pH 4.5, and kept in the dark at room temperature. After 6 h, the reaction was stopped by adding 10 μl of 3% ethylene glycol and leaving the mixture to stand for 30 min. The samples thus obtained were subjected to carbohydrate analysis. Asialofetuin GP-I (40) and A-1 (41) were used as controls.

Smith Degradation

Asialo-PSgp (7.8 nmol) was incubated at 4°C in 12 μl of 30 mM sodium periodate, 50 mM sodium acetate, pH 4.5, in the dark. After 24 h, 12 μl of the same solution was added. After another 24-h incubation, 7.2 μl of 50 mM sodium periodate were added. The reaction was stopped by adding 18.8 μl of 3% ethylene glycol. After a 30-min incubation at room temperature in the dark, a 10-μl aliquot was analyzed for carbohydrate composition, and the remaining 40-μl portion was reduced by adding 40 μl of 0.5 M sodium borohydride in 0.5 M sodium borate buffer, pH 8.0, and incubated at 4°C for 14 h. After the pH was adjusted to neutral with 1 M acetic acid, the solution was desalted and concentrated. The desalted sample was treated with 0.1 ml of 0.05 M HCl or 0.05 M methanolic HCl at 80°C for 1 h. The Smith degradation products thus obtained were applied to a DEAE-Sephadex A-25 column to separate the neutral products from the sulfated products. The neutral fraction was desalted for carbohydrate analysis. The acidic fraction was subjected to a Sephadex G-25 column and analyzed for carbohydrate composition. Endo-PSgp (23 nmol) was also subjected to Smith degradation in the same way. The Smith degradation products were separated into neutral and acidic fractions. The neutral fraction and the acidic fraction were desulfated before carbohydrate composition and methylation analyses.

FAB-MS Spectrometry

For O-Ac Neu5Ac, a perdeuterioacetylated sample was prepared for FAB-MS (42). The perdeuterioacetylated sample was dissolved in 10 μl of methanol, and a 1-μl aliquot was added to the monothioglycerol matrix. The FAB mass spectrum was recorded using a VG Analytical ZAB-2SE FPD mass spectrometer fitted with a cesium ion gun operated at 20–25 kV. Data acquisition and processing were performed using the VG Analytical Opus software. Permethylated derivatives of the Smith degradation products of Endo-PSgp were similarly analyzed by FAB-MS.
RESULTS

Preparation of Polysialylated Glycopeptides, PSgp

A fraction exhibiting the slow rate of hydrolysis of interketosidic linkages typical of α2→8-linked polySia chains was eluted as a single sharp peak at about 0.45 M NaCl on DEAE-Sephadex A-25 chromatography of the Actinase E digest of the delipidated embryonic chick brain homogenate. This fraction was eluted from a Sephacryl S-200 column in the molecular weight region from 6,500 to 20,000. The pooled fraction was subjected to DEAE-Sephadex A-25 rechromatography and named PSgp. The isolation of PSgp was confirmed by its binding activity with the H.46 antibody and its susceptibility to Endo-N digestion (data not shown). Endo-PSgp contained Asx and Ser in a molar ratio of 1.0:0.61 relative to 3 mol of Man (Table I), and the N-terminal amino acids of Endo-PSgp were determined to be Asx and Ser in a molar ratio of 1.0:0.85. These results suggested that Endo-PSgp consisted of a 1.0:0.85 mixture of glycoasparagine and glycopeptide having a (±Ser)-Ser-Asn sequence, in which the glycan chain was attached to the Asn residue. Polysialylated glycan chains have been shown to be linked to one or more glycosylated Asn residues present in the fifth immunoglobulin-like domain of N-CAM (7), and the Ser-Ser-Asn sequence is located at the second site of glycosylation. Thus, at least 46% of the polySia-glycan chains are attached to the second site. No information on the possible polysialylation of the other two glycosylation sites was provided by the above data, but our results are not inconsistent with the recent report of mutant N-CAM expression experiments on transfected cells, where it is proposed that the second and third sites are heavily polysialylated (7).

Preparation of Asialo-PSgp

Extensive removal of the Sia residues by digestion of PSgp was attained only after mild alkaline treatment, because the presence of O-Ac Neu5Ac residues in the polySia chains prevented complete digestion (see below). Asialo-PSgp, completely devoid of Neu5Ac, was obtained at Kav = 0.40 on Sephadex G-50 chromatography and applied on a DEAE-Sephadex A-25 column. No carbohydrate component was detected in the flow-through fraction, and the retarded fraction contained asialo-PSgp, indicating that asialo-PSgp carried some anionic residues on its glycan chains, because it was otherwise composed only of neutral sugars, Ser, and Asn (Table I). Inorganic anion analysis showed the presence of 1.6 mol of sulfate ion/3.0 mol of Man residues in asialo-PSgp (Table I) and showed that no phosphate ion was present. Sulfate was also detected in Endo-PSgp.

Methylation Analysis of Endo-PSgp, Asialo-PSgp, and Desulfated Asialo-PSgp

The results of methylation analysis of Endo-PSgp, asialo-PSgp, and desulfated asialo-PSgp are summarized in Table II. One residue each of 2,6-Man and 3,6-Man and, in total, one residue of 2-Man and 2,4-Man were detected in all three samples. These results, together with the 1H NMR data shown below, are consistent with a proposed structure consisting of an almost equimolar mixture of the following two structures (Structure II).
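As an aid to reading Table II, the shorthand of methylation analysis can be decoded mechanically: after permethylation and hydrolysis, every position that emerges acetylated rather than methylated marks a point of glycosidic substitution, and "t-" denotes a terminal (unsubstituted) residue. A toy helper is sketched below; the labels and the parsing convention are ours, for illustration only.

```python
# Minimal decoder for methylation-analysis shorthand such as "2,6-Man":
# the listed positions are the glycosidically substituted hydroxyls.
def linkage(label: str) -> str:
    prefix, residue = label.split("-", 1)
    if prefix == "t":
        return f"terminal (non-reducing) {residue}"
    return f"{residue} substituted at O-" + " and O-".join(prefix.split(","))

for lbl in ["t-Gal", "3-Gal", "2-Man", "2,4-Man", "2,6-Man", "3,6-Man", "4,6-GlcNAc", "3,4-GlcNAc"]:
    print(f"{lbl:10s} -> {linkage(lbl)}")
```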
1.4 mol of t-Gal and 4.0 mol of 3-Gal were present in Endo-PSgp. Based on the known substrate specificity of Endo-N (23), the presence of t-Gal in Endo-PSgp indicated that these Gal residues must have been non-sialylated in the intact glycan. Desialylation of Endo-PSgp resulted in a decrease of about 2 mol of 3-Gal with a concomitant increase of t-Gal, indicating that Neu5Ac residues are linked to the O-3 position of the Gal residues. About 1 mol of 3-Gal persisted on desialylation and subsequent desulfation of Endo-PSgp, indicating the presence of a non-sialylated and non-sulfated internal Gal residue. As shown below, this can be attributed to an extended form of the type 2 N-acetyllactosamine structure. In Endo-PSgp and asialo-PSgp, t-Fuc was detected in the same amount as 4,6-GlcNAc, suggesting the presence of the Fuc1→6(GlcNAc1→4)GlcNAc1→ sequence, which was also supported by the 1H NMR spectral analysis (see below). On desulfation of asialo-PSgp, 0.23 mol (or 1.4 mol as a corrected value; see Footnote 3) of 3,4-GlcNAc and 0.4 mol of 4,6-GlcNAc disappeared with a concomitant increase of 1.7 mol of 4-GlcNAc. No change occurred in 3-GlcNAc or in the substitutions on the Gal and Man residues. The mild methanolysis used for desulfation also resulted in defucosylation, and the disappearance of 0.4 mol of 4,6-GlcNAc was accompanied by a corresponding 0.3-mol loss of t-Fuc, which was again compatible with the presence of the Fuc1→6(→4)GlcNAc1→ sequence in asialo-PSgp. Thus, the loss of 3,4-GlcNAc and the major increase in 4-GlcNAc were attributable to desulfation, indicating that the sulfate groups reside at position O-3 of GlcNAc residues. Notably, 1–2 mol of 3-GlcNAc were detected in all three samples and were assigned to the type 1 sequence, Gal1→3GlcNAc1→, as shown below.

Periodate Oxidation and Smith Degradation of Asialo-PSgp

To obtain information on the internal carbohydrate chain sequence, periodate oxidation/Smith degradation experiments were carried out. Periodate oxidation of asialo-PSgp resulted in complete destruction of Fuc and a decrease of about 3 mol of Gal (Table III), suggesting that Fuc and about 3 mol of Gal were located at the nonreducing termini, consistent with the methylation analysis (see above). These results are also consistent with the linkage analysis that indicated some internal 3-Gal residues, because about 2 mol of Gal remained unoxidized. These Gal residues survived Smith degradation and the subsequent periodate oxidation (see IO4−-treated sm-asialo-PSgp in Table III), whereas GlcNAc residues, on the other hand, decreased (Table III), suggesting that Gal→GlcNAc→Gal→, but not Gal→Gal→, occurs in asialo-PSgp as the terminal sequence. To identify the sulfated residue(s), asialo-PSgp was subjected to Smith degradation, and the sulfated fragments, which were obtained by DEAE-Sephadex A-25 chromatography, were separated on a Sephadex G-25 column into flow-through and retarded fractions. The carbohydrate composition of the flow-through fraction was Man:Gal:GlcNAc = 2.0:2.2:4.8 (mol/mol), indicating that peripheral GlcNAc residues on the antennae were sulfated. Sulfate groups were unlikely to be attached to the di-N-acetylchitobiose structure, because Man→GlcNAc→GlcNAc→peptide (or Asn) was found in the retarded fraction of the neutral Smith degradation products (data not shown), but not in the corresponding acidic fraction. This conclusion was also supported by the 1H NMR data (Table IV). The sulfated flow-through fraction could conceivably contain GlcNAc1→2(GlcNAc1→4)Man1→3Man1→4GlcNAc1→4GlcNAc1→peptide (or Asn) having a sulfated GlcNAc residue on the unoxidizable branched Man residue. This type of fragment was also detected in the Smith degradation products of Endo-PSgp (see "Periodate Oxidation and Smith Degradation of Endo-PSgp").
From the recovery of GlcNAc in both the acidic flow-through and retarded fractions (49 and 11%, respectively), the sulfated GlcNAc was calculated to represent at least 23% of the total GlcNAc present in asialo-PSgp. Thus, about 1.6 GlcNAc residues/glycan chain were sulfated, consistent with the compositional analysis (Table I).

Periodate Oxidation and Smith Degradation of Endo-PSgp

Periodate oxidation of Endo-PSgp resulted in a decrease of about 1 mol of both Gal and Man and the complete disappearance of Fuc (Table IV). The decrease of Man and Fuc was the same as for asialo-PSgp (see above). The presence of 1.1 mol of periodate-oxidizable Gal confirmed the existence of unsubstituted Gal residues in Endo-PSgp, which was also indicated by the methylation analysis (Table II). The periodate oxidation also gave information on the chain length of the oligoSia attached in Endo-PSgp. Of the 4 Neu5Ac residues, on average, present in Endo-PSgp, 2.4 mol of Neu5Ac were oxidized and 1.5 mol remained unchanged, indicating that the ratio of distal Neu5Ac to internal or proximal Neu5Ac is 2.4:1.5 (mol/mol). Therefore, the terminal sequences of the sialylated antennae can be considered to consist of 0.9 mol of Neu5Acα2→6Gal and 1.5 mol of Neu5Acα2→8Neu5Acα2→3Gal, although we cannot exclude the possible occurrence of a minute amount of trisialylated Gal. The presence of the Neu5Acα2→3Gal sequence was confirmed by Salmonella typhimurium sialidase digestion, as shown below. Assuming that the core glycan has on average 3.5 antennae (see Structure II), the nonreducing termini of the antennae of Endo-PSgp were estimated to contain 1.1 mol of unsubstituted Gal and 2.4 mol of sialylated Gal (see Structure III), which is consistent with the 1H NMR data of Endo-PSgp (see below).
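The arithmetic behind these estimates is easy to verify. The short check below reproduces the mono-/disialyl partition of the termini from the oxidized/unoxidized Neu5Ac ratio, and the roughly 1.6 sulfated GlcNAc per chain from the 23% figure; the value of about 7 GlcNAc residues per chain is our assumption, inferred from Structure II (2 core residues plus about 3.5 antenna residues plus extended-repeat residues), not a number stated in the text.

```python
# Consistency checks for the periodate-oxidation and sulfation figures.
neu5ac_distal, neu5ac_internal = 2.4, 1.5   # oxidized vs unoxidized Neu5Ac (mol)

# Each disialyl terminus Neu5Ac(a2-8)Neu5Ac(a2-3)Gal contributes one distal
# and one internal Neu5Ac; each monosialyl terminus Neu5Ac(a2-6)Gal one distal.
disialyl = neu5ac_internal
monosialyl = neu5ac_distal - disialyl
print(f"monosialyl termini ~ {monosialyl:.1f} mol, disialyl termini ~ {disialyl:.1f} mol")
print(f"total Neu5Ac check: {monosialyl + 2 * disialyl:.1f} (reported average: 4)")

glcnac_per_chain = 7.0                       # assumed, see lead-in above
print(f"sulfated GlcNAc per chain ~ {0.23 * glcnac_per_chain:.1f}")
```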
[Table III footnotes: Under the conditions used for the periodate oxidation experiment, nonreducing terminal Gal was completely oxidized, but oxidation of the Man residues in the Man core was partial for asialo-PSgp as well as for the control asialo biantennary N-glycan (41). The sum of the t-, 4-, and 3-GlcNAc amounts was smaller than that from the GLC analysis, possibly for the reason described in Footnote 5.]

[TABLE IV. Carbohydrate composition and methylation analyses of IO4−-treated Endo-PSgp and of the Smith degradation products of Endo-PSgp.]

To obtain more information on the core glycan structure, the fragment oligosaccharides obtained by Smith degradation of Endo-PSgp were characterized by composition and methylation analyses as well as by FAB-MS spectrometry. Endo-PSgp was subjected to periodate oxidation, BH4− reduction, and acid hydrolysis, and the products were separated into neutral and acidic fractions on a DEAE-Sephadex A-25 column. On acid hydrolysis, all Sia residues were cleaved off, and the sulfate group was the only acidic moiety in the products. Forty-three percent of the GlcNAc residues were recovered in the neutral fraction and 57% in the acidic (sulfated) fraction. Positive FAB-MS of the permethylated neutral fraction gave two pairs of molecular and A-type ions, at m/z 380 for (HexNAc-R + H)+, where R = glycerol, and m/z 260 for HexNAc+, respectively, and at m/z 584 and 464 for (Hex-HexNAc-R + H)+ and Hex-HexNAc+, respectively (Table V). Yields of 4-GlcNAc (0.75 mol), 3-GlcNAc (0.38 mol), and t-Gal (1.1 mol) (Table IV) indicated the presence of Gal1→4GlcNAc1→ and Gal1→3GlcNAc1→ in a molar ratio of 2:1. t-GlcNAc was suggested to come from the unsubstituted terminal Gal→GlcNAc→ sequence, and the formation of 0.75 mol of t-GlcNAc from 1 mol of the glycan chain was consistent with the data based on the methylation analysis of Endo-PSgp (Table II) and on the periodate oxidation of Endo-PSgp (1.1 mol of Gal were oxidized), both showing the presence of unsubstituted Gal. The linkage between the unsubstituted Gal and GlcNAc was not determined here, but it was presumed to be exclusively 1→3, considering the total amount of 3-GlcNAc in Endo-PSgp (1.3 mol) and the very low yield of 3-GlcNAc (0.1 mol) in the acidic fraction (see the following paragraph). Combining all of these data, it was concluded that Endo-PSgp contains the unsulfated carbohydrate sequences shown in Structure IV.

[STRUCTURE IV legend: Values in brackets represent the amounts of the respective structures obtained from 1 mol of core glycan chain, based on the methylation analysis, taking into consideration the overall yields of the products during Smith degradation and the subsequent preparation procedures.]

The acidic fraction was analyzed after desulfation. On positive FAB-MS of the permethylated sample, a prominent A-type ion was observed at m/z 913, whereas this peak was weak in the neutral fraction, indicating that the Hex2HexNAc2 group is more prevalent in the sulfated fraction than in the neutral fraction (Table V). Thus, the appearance of this ion suggested the presence of an extended form of the N-acetyllactosamine sequence, Gal→GlcNAc→Gal→GlcNAc. The presence of this twice-repeated form of the N-acetyllactosamine sequence was also suggested by the results of the periodate oxidation/Smith degradation of asialo-PSgp (see above). Interestingly, 3-Gal was found on methylation analysis of the desulfated acidic fraction, but not in the neutral fraction, which, combined with all other data, also supported the presence of the Gal1→4GlcNAc1→3Gal1→4GlcNAc sequence in the sulfated species. The desulfated acidic components were shown to contain t-GlcNAc/4-GlcNAc/3-GlcNAc (0.54:0.53:0.11, mol/mol) and t-Gal/3-Gal (1.1:0.65, mol/mol). 2,4-Man and 3-Man were also detected (Footnote 5). The proportion of 3-GlcNAc was only 7% of the total nonreducing terminal residues (t-GlcNAc and t-Gal) in the acidic fraction, suggesting the preferential occurrence of the Gal1→3GlcNAc sequence in the unsulfated antenna(e). This is reasonable, because transfer of a sulfate group to the O-3 position of the GlcNAc residue competes with galactosylation at the same site. Taking all of the data into account, the acidic components were considered to contain the sequences GlcNAc1→, Gal1→4GlcNAc1→, and Gal1→4GlcNAc1→3Gal1→4GlcNAc1→, in the molar ratio of 0.54:0.47:0.65. Thus, the sulfated carbohydrate sequences present in Endo-PSgp were deduced as shown in Structure V.

[STRUCTURE V legend: Values in brackets represent the relative amounts of these structures in 1 mol of core glycan chain, as in Structure IV.]

Salmonella typhimurium Exosialidase Digestion of Endo-PSgp and PSgp

PSgp and Endo-PSgp were digested with the Neu5Acα2→3Gal-specific sialidase, and the release of free Neu5Ac was observed from Endo-PSgp, but not from PSgp. These results confirmed the presence of the Neu5Acα2→3Gal1→ sequence in Endo-PSgp and its absence in PSgp.

1H NMR of Asialo-PSgp

One-dimensional 1H NMR and two-dimensional TOCSY spectra of asialo-PSgp are shown in Fig. 1 (A and B).
The proton resonances were assigned based on the two-dimensional TOCSY spectra acquired with two different mixing times and on previously reported data (e.g., Refs. 43 and 44), and the assignments are summarized in Table VI. The residue numbering is shown in Structure VI. Based on the chemical shifts and coupling constants of the H-1 protons in asialo-PSgp, the anomeric configurations of all component sugar residues were determined as described below.

Man-3, -4, and -4′: The resonance signals at 5.12 and 4.20 ppm were assigned to H-1 and H-2, respectively, of the α-Man-4 residue. The chemical shift values of these protons were reported to be invariant irrespective of the branch structure, i.e., 2-Man or 2,4-Man (43). The resonance signal at 4.05 ppm was assigned to H-3 of the 2,4-substituted Man-4 (44). The area intensity of this signal was three-fifths of that of the H-1 signal of Man-4, supporting the quantitative data of the methylation analysis of asialo-PSgp (Table II). The H-3 signal of the 2-substituted Man was assigned to 3.91 ppm. H-1, H-2, H-3, and H-4 of the α-Man-4′ residue and H-1 of the β-Man-3 residue were assigned as indicated in Table VI.

GlcNAc-1, -2, and Fuc: Two signals assignable to H-1 of β-GlcNAc-1 were observed at 5.03 and 5.05 ppm, the difference depending on whether the GlcNAc-1 residue resided in the glycoasparagine or the glycopeptide. Both signals gave a cross-peak with GlcNAc-1 H-2 at 3.85 ppm (Fig. 1B). The resonances at 4.601 and 4.67 ppm (J = 7.0 Hz for both) were assignable to the H-1 signals of unfucosylated and 6-O-fucosylated β-GlcNAc-2, respectively, consistent with the report that the H-1 signal of GlcNAc-2 is shifted downfield by 0.07 ppm on 6-O-fucosylation of GlcNAc-1 (4.60 versus 4.67 ppm). The area intensity of the signal at 4.67 ppm is about half of that of the Man-4 H-1 signal at 5.13 ppm, consistent with the carbohydrate composition (Fuc:Man = 0.51:3.0; Table I). The cross-peaks at 3.77, 3.74, and 3.58 ppm originating from the signal at 4.67 ppm were also assigned as shown in Table VI. The signals resonating at 4.87 and 3.79 ppm were assigned to H-1 and H-2, respectively, of the Fuc residue, which is attached α1→6 to GlcNAc-1. The resonance signal at 4.13 ppm was assigned to H-5 of the Fuc through its cross-peak with the methyl signal at 1.19 ppm. The signal area intensities again agree with the carbohydrate composition (Fuc:Man = 0.51:3.0).

Peripheral GlcNAc: A series of resonance signals connectable with each other on the two-dimensional TOCSY spectrum (Fig. 1B), at 4.75, 4.32, 3.78, 3.70, and 3.58 ppm, most likely arose from a GlcNAc in the β-configuration, based on their spin-spin coupling systems (Fig. 1, A and B). Other resonance signals assignable to the H-1 of peripheral β-GlcNAc residues were observed at 4.54, 4.58, and 4.60 ppm (Fig. 1A). The GlcNAc H-1 signal at 4.54 ppm corresponded to that in the Galβ1→4GlcNAcβ1→Man sequence and had cross-peaks centered at 3.72 ppm (H-2, H-3, and H-4), 3.59 ppm (H-5), and 3.92 ppm (one of the H-6 signals). The signal at 4.58 ppm was assignable to H-1 of the GlcNAc residue in the Galβ1→3GlcNAcβ1→Man sequence and gave several cross-peaks at 3.78–3.85, 3.55, 3.47, and 3.97 ppm. We were not able to assign all of these signals, but the data were closely consistent with the reported data for this sequence: H-1, 4.59–4.62 ppm; H-2, 3.83–3.87 ppm; H-3, 3.80–3.84 ppm; H-4, 3.55–3.58 ppm; H-5, 3.48–3.50 ppm; H-6, 3.78–3.79 ppm; H-6′, 3.92–3.94 ppm.
A signal at 4.60 ppm was assigned to H-1 of the GlcNAc residue in the sequence GlcNAcβ1→3Galβ1→4GlcNAcβ1→Man. This GlcNAc H-1 is known to resonate at lower field than that of the GlcNAc residue in the sequence Galβ1→4GlcNAcβ1→Man (4.60 versus 4.54 ppm; reported in Ref. 44). No signal was observed at 4.70 ppm, where the GlcNAc H-1 in the Galβ1→4GlcNAcβ1→3Galβ1→ sequence is reported to resonate (44). Considering the presence of the Galβ1→4GlcNAcβ1→3Galβ1→ sequence in a sulfated form in asialo-PSgp, the GlcNAc residue in this sequence is suggested to be invariably sulfated. From the area intensity of the H-1 of each peripheral GlcNAc residue in asialo-PSgp, the proportions of 4-GlcNAc, 3-GlcNAc, and 3-O-sulfated GlcNAc were estimated to be 1.8:1.5:1.5 (mol/mol), consistent with the results of the methylation analysis of asialo-PSgp (Table II).

Gal: A cluster of resonance signals centered at 4.45 ppm was assigned to H-1 of unsubstituted Gal residues, which were observed to a lesser extent on the spectrum of Endo-PSgp. These signals gave cross-peaks with three groups of signals at 3.92, 3.65, and 3.54 ppm, consistent with previously reported data for this type of Gal residue. The cross-peak connecting the 4.53 and 4.18 ppm signals suggested that these were assignable to H-1 and H-4, respectively, of 3-O-substituted Gal, as previously reported.

1H NMR of Endo-PSgp

The one-dimensional 1H NMR and two-dimensional TOCSY spectra are shown in Fig. 2, A and B, respectively. The spectra of Endo-PSgp are closely similar to those of asialo-PSgp, and the following points are noteworthy. The cluster of signal peaks observed at about 4.45 ppm in the asialo-PSgp spectrum was diminished and shifted to about 4.54 ppm (Fig. 1A versus Fig. 2A), indicating that the peaks at 4.54 ppm were assignable to H-1 of the Gal residue of the Neu5Acα2→6Galβ1→ sequence. The resonance signals observed at 3.55, 4.11, and 3.96 ppm were assignable to H-2, H-3, and H-4, respectively, of the sialylated Gal residues. The small resonance peaks remaining at 4.45 ppm on the spectrum of Endo-PSgp were assignable to H-1 of unsubstituted Gal residues, and the groups of cross-peaks at 3.92, 3.65, and 3.53 ppm also substantiated this assignment, thus confirming the presence of the unsubstituted terminal Gal residues in Endo-PSgp suggested by the methylation analysis (see above).

[STRUCTURE V. Two different structures of the sulfated peripheral portion of the glycan chain in Endo-PSgp.]

Three pairs of H-3 proton chemical shifts of Neu5Ac were observed and assigned, based on previous data (46), as (H-3eq, H-3ax) = (2.75, 1.79 ppm) for the terminal Neu5Ac residue in the Neu5Acα2→6Gal1→ structure, (2.77, 1.75 ppm) for the distal Neu5Ac residue in oligoSia, and (2.66, 1.73 ppm) for the internal Neu5Ac residues, including the proximal Neu5Ac residue, in oligoSia (see also Structure III). The H-3 proton area intensity of the terminal and distal Neu5Ac residues was almost twice that of the proximal Neu5Ac residue, in good agreement with the results of the periodate oxidation experiment on Endo-PSgp (see Structure II).

Detection and Identification of O-Acetylated Neu5Ac Residues in the PolySia Chain of PSgp

On exosialidase digestion of PSgp, 68% of the Neu5Ac residues were released after a 26-h incubation at 37°C, whereas almost all Neu5Ac residues were released from PSgp pretreated with mild alkali (data not shown), suggesting the presence of an alkali-sensitive modification on the Neu5Ac residues of PSgp.
TLC of the mild acid hydrolysate of the oligoSia fraction obtained by Endo-N treatment of PSgp revealed a band corresponding to O-acetylated Neu5Ac (Fig. 3).

DISCUSSION

In this study, we have examined the structure of the polysialylated glycopeptide (PSgp) derived from 14-day embryonic chick brain, where polySia chains are shown to occur exclusively on N-CAM molecules (8). The distinctive feature of the core carbohydrate is the presence of a hitherto unreported structure, which is significantly different from that previously described (22). The composite structure of the polysialylated N-glycan chains of N-CAM depicted in Structure I is consistent with all of the experimental data given under "Results." The detailed structural features of the polysialylated core glycan chains revealed by this study, which differ from those proposed by Finne (22), can be summarized as follows. (a) Two distinct types of multiantennary structures, i.e., tri- and tetraantennary, are present, as shown in Structure II, and on average 3.5 antennae are attached to the α-Man residues. (b) An α1→6-linked fucosyl residue is attached to the proximal GlcNAc residue of the di-N-acetylchitobiosyl unit. (c) Notably, the mannose residue α-(1→6)-linked to the β-linked Man residue of the core is invariably 2,6-di-O-substituted by GlcNAc residues, indicating the importance of GlcNAc transferase V, which catalyzes the attachment of the β-(1→6)-linked GlcNAc residue on the (1→6)-α-linked Man arm, for polysialylation to occur on the core glycan chain. (d) The peripheral portion of the core glycan structure was found to be more complicated than recognized previously (22) and was revealed to contain both type 1 Galβ1→3GlcNAc and type 2 Galβ1→4GlcNAc sequences. Furthermore, an extended form of the type 2 chain, Galβ1→4GlcNAcβ1→3Galβ1→4GlcNAc, was also shown to occur. These three structures are attached on the (1→3)- and (1→6)-α-linked Man arms in the proportion of 1.1:1.8:0.65. The presence of type 1 and the extended form of type 2 chains are unusual structural features in the core glycan of the embryonic N-CAM, and these unique features may possibly be relevant to initiation signal(s) for polysialylation. (e) Most interestingly, the core oligosaccharide unit contains, on average, about 1.4 mol of sulfate attached to the O-3 position of the peripheral GlcNAc residues. Sulfate groups reside exclusively in the type 2 N-acetyllactosamine chain(s) and the extended form of the N-acetyllactosamine chain, where the sulfate is probably located on the outermost GlcNAc residue, i.e., Galβ1→4(HSO3→3)GlcNAcβ1→3Galβ1→4GlcNAcβ1→Man, as evidenced by the 1H NMR data. No sulfation was shown to occur on the type 1 chain, as expected when one considers that the sulfotransferase and the β1→3-galactosyltransferase compete for the same site, namely the O-3 position of the GlcNAc residues of the acceptor glycan core.

[TABLE VI. Summary of proton chemical shifts for each carbohydrate residue in asialo-PSgp.]

The type 1 terminal sequence Galβ1→3GlcNAcβ1→ accounts for 32% of the antennae, as shown in Structures III and IV. Interestingly, at least one terminal residue of the antennae was found not to be sialylated, indicating that polysialylation occurs asymmetrically on the antennae. However, the biological significance, if any, and the biosynthetic mechanism of such asymmetric polysialylation remain to be elucidated. Nevertheless, the inner carbohydrate structural elements of embryonic chick brain N-CAM may possibly contribute to the regulation of the elongation of the polySia chain(s).
It should be noted that our results showed that, after Endo-N digestion, only mono-, di-, and trisialyl groups were left on the core glycan chain. Particular attention will thus need to be given to the elucidation of the polysialoglycan structures suggested by previous experiments using anti-polySia antibodies. Recently, we have examined the antigenic specificities of various anti-polySia antibodies, some of which appear to recognize oligoSia chains as short as di- or trisialyl sequences (46). Sulfation is acknowledged as a biologically important modification of the carbohydrate residues of glycoconjugates (45, 47-51), and sulfated glycan chains are known to participate in the regulation of the lifetime of serum glycoprotein hormones (47), to constitute ligand structures for certain receptors (48, 49), and to be involved in the mediation of cell-cell adhesion (45, 50, 51). In vertebrate glycoproteins, 6-O-sulfated GlcNAc and 3- or 6-O-sulfated Gal residues are frequently found (52-54); less prevalent but still important are 4-O-sulfated GalNAc residues (10, 11, 47). The present finding of a 3-O-sulfated GlcNAc residue in the glycan core chain of polysialylated N-CAM from fetal chick brain is novel and unusual among vertebrate glycoproteins, although the presence of such unusual structural elements has been documented in vertebrate proteoglycans and in invertebrate glycans, both of which are of biological importance. Thus, a pentasaccharide sequence containing a 3-O-sulfated, N-sulfated GlcNAc residue in heparin is the structural unit required for binding to antithrombin III (49), and HSO3→3GlcNAcβ1→3Fuc is part of the carbohydrate unit present in the sponge cell surface polysaccharide necessary for cell aggregation (45). Vertebrate N-CAM molecules are known to have sulfated carbohydrate units (2, 4, 51), mostly in the HSO3→3GlcAβ1→3Gal→ sequence as part of the HNK-1 antigen (4, 51). We have evidence of the presence of 3-O-sulfated GlcNAc residues in pig brain N-CAM (Footnote 6). A sulfotransferase acting on N-linked glycan chains has also recently been identified (11). Much remains to be learned about the mode and importance of sulfation on the GlcNAc residues of the core glycan chain, and experiments testing the effect of chlorate-induced deprivation of the sulfate donor on the expression of the polySia chain are under way in our laboratory. Finally, the polysialyl chain of N-CAM is partially (5-10%) O-acetylated at either O-7 or O-9, or at both O-7 and O-9, of interchain and nonreducing terminal Neu5Ac residues. A part, if not all, of the 8-O-acetyl derivative revealed in the HPLC analysis can be accounted for by migration of the O-Ac group during hydrolysis and/or the derivatization prior to HPLC, although its natural occurrence cannot be ruled out. This is the first report of the presence of O-Ac groups on the N-CAM polySia chain, although polysialylated capsular polysaccharides from certain strains of Escherichia coli K1 are known to contain O-Ac groups at O-7 or O-9 of Neu5Ac residues, and the responsible O-acetyltransferase, which transfers the O-acetyl group from acetyl-CoA to polySia chains with DP greater than 14, has been identified (55). The partially O-acetylated polySia chain of N-CAM was sensitive to Endo-N, giving rise to free oligoSia chains of DP as large as about 9, which are longer than those from unsubstituted polySia chains. Thus, the polySia chain of N-CAM is considered to be sparsely O-acetylated. Endo-N recognizes homooligomers of sialic acid with a minimum DP of 5 (23) and can access the intervening stretches of unsubstituted Neu5Ac residues.
Alternatively, some clusters of O-acetylation might occur on a distal region of the polySia chain of DP > 55 (56), leaving the proximal region susceptible to Endo-N. A plausible function of O-acetylation is as a termination signal for polySia chain elongation: O-acetylation would prevent the polySia chain from serving as an acceptor substrate for the α2→8-polysialyltransferase. Recently, we proposed a concept of termination of polySia elongation based on our biosynthetic studies of polySia formation, in which KDN capping of the polySia chain on the O-linked glycan chains of trout polysialoglycoprotein was demonstrated to be a stop signal for the elongation reaction (13). More recently, 9-O-sulfation and 8-O-sulfation were proposed as the most likely termination signals of polySia elongation in the sea urchin egg cell surface polySia-glycoprotein (57) and in sperm surface oligo/polysialoglycosphingolipids (58), respectively.
2018-04-03T03:15:03.707Z
1996-12-20T00:00:00.000
{ "year": 1996, "sha1": "bbb0ff161a1056dce7bf8e7eeedab99c6ecd79b8", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/271/51/32667.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "753f0ed2bfb1b2fc85873af5b5dd38b0b3f13ff6", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
230554494
pes2o/s2orc
v3-fos-license
Supply Chain Management Practices: Competitive Advantage and Organizational Performance in the Sri Lankan Construction Industry

In the construction industry, supply chain management (SCM) is a vital tool for controlling business processes in a defined and systematic way to improve quality, manage time, and increase profit. Effective supply chain management has become a potentially valuable method of securing and improving competitive advantage and organizational performance, since competition is no longer between individual organizations but among the supply chains of global organizations. This paper aims to investigate the impact of supply chain management practices on competitive advantage and organizational performance in the construction industry in Sri Lanka, given the limited application of supply chain management practices in determining organizational performance in a competitive environment. Further, this study focuses on five SCM practices (strategic supplier partnership, customer relationship, level of information sharing, quality of information sharing, and postponement) to investigate what supply chain management is, how it works to increase competitive advantage, and what its dynamics are. Six hypotheses were developed based on a conceptual framework derived from the supply chain management literature. The data were collected by survey, randomly administering structured questionnaires to 198 respondents from construction management teams and different sub-contractors. First, multiple regression analysis was performed to explore the impact of the five supply chain management practices on competitive advantage and organizational performance in the construction industry; factor analysis was then carried out to explore the significance of the supply chain management dimensions. The results of the regression analysis indicated that all SCM variables have a positive impact on the competitive advantage and organizational performance of the construction industry in Sri Lanka. Moreover, they suggested that strategic supplier partnership was the most significant SCM variable in determining competitive advantage, and that level of information sharing was the least significant variable for competitive advantage. The results of this study provide new insights for construction companies to better understand the significant role that SCM variables play with respect to competitive advantage and organizational performance in Sri Lanka.

T.S.L.W. Gunawardana, Senior Lecturer, University of Ruhuna, Sri Lanka. gunawardana@badm.ruh.ac.lk
D.H. Wedage, M & E Engineer, International Construction Consortium (Pvt) Ltd., Sri Lanka. dhanushkahw@gmail.com

The study was outlined to examine the five SCM practices in relation to competitive advantage and organizational performance in the Sri Lankan context. Therefore, this study attempts to fill the above research gaps by studying the impact of supply chain management practices on competitive advantage and organizational performance in the Sri Lankan construction industry.

Supply Chain Management Practices: SCM practices are defined as a set of practices undertaken by an organization to promote effective supply chain management (Tan, 2001). Many studies have examined SCM practices from different aspects. Li et al. (2006) reviewed the SCM practices literature and identified five distinctive elements.
These are strategic supplier partnership, customer relationship, level of information sharing, quality of information sharing, and postponement. Afande et al. (2015) studied these elements; according to them, the five constructs cover the upstream (strategic supplier partnership) and downstream (customer relationship) sides of the supply chain, information sharing across a supply chain (level of information sharing and quality of information sharing), and the internal supply chain process (postponement). The literature above clarifies that there are five distinctive dimensions.

Strategic Supplier Partnership: Strategic partnership with suppliers increases efficiency and productivity, since suppliers are willing to share in the success of the products (Li et al., 2006). Suppliers participating at the early stages of the product design process can offer more cost-effective designs, help to select the best technologies and components, and help in design assessment (Tan, Lymann & Wisner, 2002). Strategically aligned organizations can work closely together and eliminate wasteful time and effort (Balsmeier & Voisin, 1996). An effective supplier partnership can be a critical component of a leading-edge supply chain (Noble, 1997).

Level of Information Sharing: Level of information sharing has been defined by the Global Logistics Research Team as the willingness to share strategic and tactical data with other members of the supply chain (Mentzer et al., 2001). Information sharing refers to the ability of enterprises to share knowledge and information with supply chain partners in an effective and efficient manner. Companies share demand-related information with their upstream and downstream partners with the purpose of improving the planning and coordination of logistics and production-related activities (Glenn, Chen, Fawcett & Adams, 2009; Cooper et al., 1997). Together, supply chain partners can understand the needs of the end customer better and hence respond to market change more quickly (Stein & Sweat, 1998). The effective use of relevant and timely information by all functional elements within a supply chain can be considered a key competitive and distinguishing factor (Tompkins & Ang, 1999). Much research in the field focuses on the effect of information sharing on supply chain members (Huang & Wang, 2017). Simplified material flow, including streamlining and making highly visible all information flow throughout the chain, is the key to an integrated and effective supply chain (Childhouse & Towill, 2003).

Quality of Information Sharing: Quality of information sharing has been defined as the accuracy, timeliness, adequacy, and credibility of the information shared (Moberg, Cutler, Gross & Speh, 2002; Monczka, Peterson, Handfield & Ragatz, 1998). Besides the level of information sharing, the quality of information sharing is also very important: a high level of information sharing of low quality among partners in the supply chain will limit the positive effect of information sharing in general. Marinagi, Trivellas and Reklitis (2015) implied that information sharing among partners along the supply chain facilitates higher overall performance, as enforced SCM practices elevate information reliability and quality. Efficient and user-friendly information technology applications will improve information sharing (Yang & Maxwell, 2011). However, the main barriers discouraging high-quality information sharing are the cost and complexity of technological solutions (Brau, Fawcett & Morgan, 2007).
Customer Relationship: Customer relationship is defined as the entire array of practices employed for the purpose of managing customer complaints, building long-term relationships with customers, and improving customer satisfaction (Li et al., 2006). Improving customer relationships can enhance benefits by reducing coordination frictions and helping sellers learn about related buyers' utility (Shi, 2016). Having understood the importance of customer relationships for long-term survival, organizations are moving towards customized products and personalized services (Moberg et al., 2002). Success in the marketplace demands going beyond satisfactory exchanges with customers; therefore, firms should build close relationships with their customers.

Postponement: Postponement is defined as the practice of moving one or more operations or activities (making, sourcing, and delivering) to a much later point in the supply chain (Beamon, 1998; Van Hoek, 1998). The two primary steps in developing a postponement strategy are determining how many steps to postpone and determining which steps to postpone (Beamon, 1998). Postponement allows an organization to be flexible in developing different versions of a product to meet changing customer needs and to differentiate a product or modify a demand function (Waller, Dabholkar & Gentry, 2000). Keeping materials undifferentiated for as long as possible increases an organization's flexibility in responding to changes in customer demand. Besides, an organization can reduce supply chain cost by keeping inventories undifferentiated (Lee & Billington, 1995; Van Hoek, Voss & Commandeur, 1999).

Competitive Advantage: Competitive advantage (CA) has been defined as the extent to which an organization can create a defensible position over its competitors (Mcginnis & Vallopra, 1999; Porter, 1985) and includes the features that allow an organization to distinguish itself from its competitors (Li, Ragu-Nathan, Ragu-Nathan & Rao, 2006). CA is related to unique resources and competencies that other competitors do not have, which leads to better performance over those competitors (Sadri & Lees, 2001). CA has also been linked to innovation speed (Kessler & Chakrabarthi, 1996). According to Li et al. (2006), competitive advantage is based on the following capabilities: competitive pricing, premium pricing, value to customer, quality, dependable delivery, and product innovation.

Organizational Performance: Organizational performance refers to how well an organization achieves its market-oriented goals as well as its financial goals (Yamin & Gunasekruan, 1999). Organizational performance is difficult to measure, and there is no universally accepted definition. Many prior studies have measured organizational performance using both financial and market criteria, including return on investment (ROI), market share in the industry, profit margins on sales, the growth of ROI, the growth of sales, the growth of market share, and overall competitive position in the industry (Vickery, Calantone & Droge, 1999). The short-term objectives of SCM are primarily to increase productivity and reduce inventory and cycle time, while the long-term objectives are to increase market share and profits for all members of the supply chain (Tan, Kannan & Handfield, 1998). Any organization's initiative in using SCM practices and other management techniques is aimed at improving organizational performance.
As per the above literature, the dimensions of market share in the industry, return on investment, profit margins, growth of sales, and competitive position in the industry are considered in measuring organizational performance.

CONCEPTUAL FRAMEWORK AND HYPOTHESES: In this section, the approach taken to develop an initial research model and the hypotheses deduced from the research question is described. Much care has been exercised in order to satisfy the criterion of replicability (Kerlinger, 1986). The result is a set of fairly detailed measurement and data collection sections, making it possible for others to reproduce the research, to reanalyze the data, and to judge the adequacy of the methods and the data collection. The conceptual framework of the present study, developed on the basis of the previous literature, measures the positive or negative impact on the dependent variable (i.e., organizational performance) of the independent variable (i.e., supply chain management practices) and the mediating variable (i.e., competitive advantage). As mentioned, observational data collected prior to this study were important, as they led to the research idea.

Competitive Advantage and Organizational Performance: By having a CA, an organization can possess one or more capabilities such as lower prices, higher quality, higher dependability, and shorter delivery time compared with its competitors. These capabilities will in turn enhance the organization's overall performance (Mentzer, Min & Zacharia, 2000). CA can lead to high levels of economic performance, customer satisfaction, and loyalty. Brands with higher consumer loyalty face less competitive switching in their target segments, thereby increasing sales and profitability (Moran, 1981). An organization that supplies high-quality products can charge a premium or higher price for that quality, which increases its profitability and return on investment. An organization with a short time to market and rapid product innovation can lead the market with a higher market share and sales volume. As per the above, the following hypothesis can be created.

The Sample, Study Variables, Questionnaire Design and Data Collection: The sampling frame was designed on the register list of the Construction Industry Development Authority (CIDA). The members' list offered useful information such as the names, addresses, and telephone and fax numbers of the construction companies, and a simple random sampling method was applied in order to select the respondents from the population. The research was conducted at a 95% confidence level. In conformity with this precedent, the level of analysis of the present study is the supply chains in the construction industry, while the unit of analysis is a staff member who is responsible for construction site performance and is mostly involved in the construction supply chain. This study therefore uses PLS to process the data, because the sample size is only somewhat sufficient. Demographic data analysis was done through SPSS: 130 respondents (65.7%) were of executive or higher grades, and 68 (34.3%) were of non-executive grades. The experience of the respondents was as follows: 134 respondents (67.68%) had 5 years or less of experience in their construction organization, and 64 respondents (32.32%) had more than 5 years of experience. Organizational performance was measured under market share, return on investment, profit margin, competitive position, and growth of sales.
Using these dimensions, five items measuring organizational performance were developed from Zhang (2001) and from the constructs of Vickery, Calantone and Droge (1999). The questions used a five-point scale ranging from "1 = strongly disagree" to "5 = strongly agree".

DATA ANALYSIS AND RESULTS: The discriminant validity of the latent variables was tested using the Fornell & Larcker (1981) approach. Table 1 shows the squared correlations (R²) between constructs. No off-diagonal entry exceeds the AVE of the corresponding construct. There are two statistical methodologies for estimating SEM with latent variables: the covariance-based (CBSEM) approach and variance-based partial least squares path modelling (PLS). CBSEM is the method of choice for theory testing, while PLS is appropriate for prediction-oriented applications (Wold, 1982). PLS not only assesses the relationships between dependent and independent variables, as in regression analysis (Segars & Grover, 1993), but also provides a comprehensive means to assess and modify theoretical models (Karahanna & Straub, 1999).
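For readers unfamiliar with the Fornell & Larcker (1981) test used above, the sketch below makes the criterion explicit: each construct's AVE must exceed its squared correlation with every other construct. The AVEs and correlations here are made-up placeholders, not the study's actual Table 1 values.

```python
# Hedged sketch of the Fornell-Larcker discriminant-validity check.
import numpy as np

constructs = ["SCM practices", "Competitive advantage", "Org. performance"]
ave = np.array([0.62, 0.58, 0.66])              # hypothetical AVEs
corr = np.array([[1.00, 0.55, 0.48],            # hypothetical latent correlations
                 [0.55, 1.00, 0.60],
                 [0.48, 0.60, 1.00]])

r2 = corr ** 2                                  # squared correlations, as in Table 1
for i, name in enumerate(constructs):
    max_r2 = np.delete(r2[i], i).max()          # largest off-diagonal R^2 in row i
    verdict = "supported" if ave[i] > max_r2 else "violated"
    print(f"{name}: AVE = {ave[i]:.2f}, max off-diagonal R^2 = {max_r2:.2f} -> {verdict}")
```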
The results reveal that SCM practices have a positive and statistically significant relationship with CA. Each construct in the measurement model was measured using multiple items. Each manifest variable in a given measurement model is assumed to be generated as a linear function of its latent variables and the residual. The results are summarized in Table 4. Based on the results of the study, construction firms should give priority to completing construction projects within proper time limits, which will improve the performance of the sites. Targets and penalties should be introduced for employees to complete their products within the shortest time possible and bring them to market, in order to obtain better performance within the organization.

Regression Analysis: Based on the results, policymakers can improve strategic supplier partnerships by introducing new regulations and guidelines on the supplier side as well. They can also foster innovation by providing guidelines making research and development facilities a compulsory requirement in each organization. Policymakers also need to address the completion of projects and products within given targets, or in less time, by adding more restrictions on delays. They can also encourage more studies of SCM practices, CA, and organizational performance to gain more knowledge of their impact in the construction industry. Therefore, the application of new policies and guidelines based on the findings of this study will help to improve performance in construction firms, and local firms will be able to compete with multinational organizations by improving these factors.

FUTURE RESEARCH DIRECTIONS

The findings of the study and their application are limited to construction industry organizations in Sri Lanka. They may not apply to construction organizations operating outside the country. Therefore, it is important to note that they can be used only for comparative purposes and not for direct application in another country. To overcome this limitation, future studies need to be conducted involving construction firms outside the country. Some multinational construction firms are operating in Sri Lanka. Another major limitation of the study was that it was carried out using twelve construction sector organizations, whereas, as per the CIDA records, there are more than two thousand registered construction companies in Sri Lanka. If the sample and population had been expanded beyond this, more rigorous results could have been obtained, which could have been generalized in a much broader manner. Therefore, future studies should involve more construction companies registered with the CIDA to overcome this limitation. In this study, the researchers used only five dimensions for each of the SCM practices, CA, and organizational performance variables. As per the literature, there are more dimensions for measuring each of these variables that were not used in this study. Therefore, there is a limitation on the dimensions of the study variables. Future research can expand on the domain of SCM practices by incorporating these additional dimensions.
2020-12-10T09:06:36.887Z
2020-12-08T00:00:00.000
{ "year": 2020, "sha1": "3acee333a37bc88d093a1c06a8dcd6442bc77465", "oa_license": "CCBY", "oa_url": "http://sljmuok.sljol.info/articles/10.4038/sljmuok.v6i2.42/galley/48/download/", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f2b8bb71cc1849766a73d94aee8790d1a05f8ab2", "s2fieldsofstudy": [ "Business", "Engineering" ], "extfieldsofstudy": [ "Business" ] }
207833524
pes2o/s2orc
v3-fos-license
Significance of cerebrospinal fluid inflammatory markers for diagnosing external ventricular drain-associated ventriculitis in patients with severe traumatic brain injury. Objective The aim of this study was to investigate the diagnostic potential of the inflammatory markers interleukin-6 (IL-6), total leukocyte count (TLC), and protein in the CSF and IL-6, C-reactive protein, and white blood cell count in the serum for the early diagnosis of ventriculitis in patients with traumatic brain injury (TBI) and an external ventricular drain, compared with patients without ventriculitis. Methods Retrospective data from 40 consecutive patients with TBI and an external ventricular drain treated in the authors' intensive care unit between 2013 and 2017 were analyzed. For all markers, arithmetical means and standard deviations, area under the curve (AUC), cutoff values, sensitivity, specificity, positive likelihood ratio (LR), and negative LR were calculated and correlated with the presence or absence of ventriculitis. Results There were 35 patients without ventriculitis and 5 patients with ventriculitis. The mean ± SD IL-6 concentration in CSF was significantly increased, at 6519 ± 4268 pg/mL at onset of ventriculitis, compared with 1065 ± 1705 pg/mL in patients without ventriculitis (p = 0.04). Regarding inflammatory markers in CSF, IL-6 showed the highest diagnostic potential for differentiating between the presence and absence of ventriculitis (AUC 0.938, cutoff 4064 pg/mL, sensitivity 100%, specificity 92.3%, positive LR 13, and negative LR 0), followed by TLC (AUC 0.900, cutoff 64.5/µL, sensitivity 100%, specificity 80%, positive LR 5.0, and negative LR 0) and protein (AUC 0.876, cutoff 31.5 mg/dL, sensitivity 100%, specificity 62.5%, positive LR 2.7, and negative LR 0). Conclusions The level of IL-6 in CSF has the highest diagnostic value of all investigated inflammatory markers for detecting ventriculitis in TBI patients at an early stage. In particular, CSF IL-6 levels higher than the threshold of 4064 pg/mL were significantly associated with the probability of ventriculitis.

Although inflammatory markers have been investigated for the detection of ventriculitis after SAH, 9 the diagnostic potential of inflammatory reactions in serum and CSF for the detection of ventriculitis after TBI has not yet been objectively assessed. Based on our experience with patients with SAH, 14,15 we aimed to investigate whether the interleukin-6 (IL-6) level in CSF in particular may be useful for the early detection of bacterial ventriculitis after TBI and placement of an EVD.

Methods

This retrospective, single-center study includes all consecutive adult patients (> 18 years) with TBI and initial EVD implantation who were treated in our neurosurgical ICU between January 2013 and December 2017. The existence and severity of TBI were diagnosed on admission using Glasgow Coma Scale (GCS) scores and CT findings. 29 The ethics review committee of the Ludwig Maximilian University of Munich approved this study. Indications for an EVD were invasive ICP monitoring in sedated and intubated patients or acute hydrocephalus observed on CT scanning. 27 EVD implantation was performed under sterile conditions and antibiotic prophylaxis (1.5 g cefuroxime intravenously) in the emergency department under CT control or in the operating room. After the hair was shaved in the area of Kocher's point (2.5 cm from the midline and approximately 12 cm posterior to the nasion but anterior to the coronal suture), the skin was disinfected with a povidone-iodine solution. 15
The skin incision was made at Kocher's point, followed by a drill hole with a gimlet and catheter implantation into the lateral ventricle. A control scan confirmed correct positioning of the catheter and excluded procedure-related hemorrhage. Subcutaneous tunneling was not routinely performed. A purse-string stitch sutured the wound and secured the EVD position. The wound and the exit point of the EVD were covered with sterile dressings, and the EVD was connected to a closed CSF collection system. CSF was aspirated sterilely by a physician via a proximal 3-way stopcock. According to our standard operating procedures in TBI patients, the serum markers white blood cell count (WBCC), IL-6, and CRP and the CSF markers IL-6, total leukocyte count (TLC), and protein were determined daily from insertion until removal of the EVD. Serum and CSF markers were measured in the Department of Laboratory Medicine at our hospital. Measurements of biomarker levels were performed according to the manufacturers' instructions, and quality control was ensured. Blood and CSF specimens were not obtained solely for the purpose of this study. EVD-associated ventriculitis in patients with TBI was defined in this study as culture-verified ventriculitis with a positive microbiological CSF culture, a positive CSF Gram stain, or a positive microbiological culture of the EVD tip. The criteria for diagnosing meningitis and EVD-associated ventriculitis are based on the criteria for nosocomial infections from the Centers for Disease Control and Prevention (CDC): 1) proof of an organism in the CSF by a microbiological testing method performed for purposes of clinical diagnosis and not for surveillance sampling; and 2) clinical worsening or new neurological symptoms together with altered laboratory CSF parameters (protein, glucose, and TLC). 6,11,34 Most patients with severe TBI have a GCS score of 3 due to anesthesia and have altered CSF parameters due to the trauma. Thus, neurological and laboratory deterioration is difficult to detect and interpret and is, according to the current CDC criteria, unsuitable for diagnosing ventriculitis. Microbiological testing was performed in TBI patients with suspected ventriculitis. Suspected ventriculitis was diagnosed concordantly by 2 experienced consultants in the field of neurointensive care (a neurosurgeon and a neurointensive care physician). Physicians must be aware that contamination could lead to false-positive results. We tried to minimize the risk of contamination by strictly sterile aspiration of the CSF by a physician via the proximal 3-way stopcock. According to our standard operating procedures, 2-4 CSF tubes were collected for bacterial culture, Gram stain, and molecular testing. Furthermore, we prioritized multiple tests on small-volume samples (< 1 mL). The gold standard for diagnosing any type of infection is the proof of a pathogen by a microbiological testing method. As the current CDC criteria for ventriculitis and meningitis are inadequate for TBI patients, we investigated only patients with culture-verified ventriculitis. We used the inflammatory marker levels at the time of first diagnosis of ventriculitis due to TBI (the inflammatory marker level was measured on the same day the specimen was obtained for microbiological testing) and compared them with the levels in patients with TBI and an EVD but without ventriculitis on the 12th day, as the mean time to infection was 11.6 ± 2.7 days. The normal distribution of the data was investigated using the Kolmogorov-Smirnov test.
We used receiver operating characteristic (ROC) curves with the corresponding area under the curve (AUC) to determine the diagnostic potential of inflammatory markers for predicting ventriculitis. The arithmetical means ± SDs of biomarker levels in the two groups were compared using the Student t-test. Mean values were considered to differ statistically significantly when p < 0.05. Outcome parameters of this study were sensitivity, specificity, positive likelihood ratio (LR), and negative LR. Optimal thresholds were calculated using Youden's J statistic, i.e., by maximizing the sum of sensitivity and specificity. Univariate analysis of risk factors was performed using the chi-square test, and calculations were performed using SPSS (version 17, SPSS Inc.) and IBM SPSS (version 23.0, IBM Corp.) for Windows.

Results Basic characteristics, including the injury pattern after head trauma, of the 40 patients are summarized in Table 1. In 17 patients (43%), initial cranial CT revealed an open TBI (i.e., perforation of the scalp, fracture of the skull with or without rupture of the hard meninges, or CT scan with air inclusions situated in the extradural, subdural, or subarachnoid spaces or in the brain parenchyma 3,5,31 ) with an indication for prophylactic antibiotic treatment with ceftriaxone. Thirty-five patients sustained a TBI and did not experience ventriculitis, and 5 patients developed ventriculitis. The detected pathogens are listed in Table 2. The mean ± SD time to infection was 11.6 ± 2.7 days after trauma. Inflammatory marker levels were normally distributed. The mean CSF IL-6 levels were significantly increased in patients at onset of ventriculitis (6519 ± 4268 pg/mL in patients with ventriculitis vs 1065 ± 1705 pg/mL in those without [p = 0.04]). The AUC for the IL-6 level in CSF for predicting ventriculitis was 0.938 (Fig. 1). The optimal threshold was 4064 pg/mL, with a sensitivity of 100%, specificity of 92.3%, positive LR of 13.0, and negative LR of 0. The TLC level in CSF was significantly higher in patients with ventriculitis (883 ± 845/µL vs 13.5 ± 16.0/µL in patients without ventriculitis). The corresponding AUC for the TLC level in CSF was 0.900 (cutoff 64.5/µL, sensitivity 100%, specificity 80%, positive LR 5.0, and negative LR 0). Results for serum WBCC, serum IL-6, serum CRP, and CSF protein are given in Table 3. The diagnostic significance of each of these parameters was lower than that of the IL-6 level in CSF. The respective ROC curves are depicted in Fig. 1. The predictive potentials of the IL-6 level in CSF at 24 hours and 48 hours before the positive microbiological culture are depicted in Fig. 2. Scatterplots of each biomarker are provided in Fig. 3. A univariate analysis of risk factors for predicting EVD-associated ventriculitis is given in Table 4. No risk factor reached statistical significance. Figure 4 depicts the number of patients with an EVD per day.

Discussion Nearly 2.5 million people per year are affected by TBI in the United States. 13 Estimates predict that TBI will become the third most common cause of death and disability within the general population by 2020. 12,13 Today, ICP monitoring is a safe and reliable method for ICP measurement with reported low complication rates (especially low infection rates). 4 ICP monitoring is possible using an EVD or an ICP catheter. The indication for ICP monitoring is the inability to neurologically assess a patient (Glasgow Coma Scale score 3-8) due to a TBI, or drainage of acute hydrocephalus through an EVD. 4
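To make the threshold-selection procedure described in the statistical analysis concrete, the following is a minimal illustrative sketch in Python (not code from this study; the labels and marker values below are made up) of computing an ROC curve, its AUC, and the Youden-optimal cutoff together with the associated likelihood ratios:

```python
# Hypothetical example: 1 = culture-verified ventriculitis, 0 = no ventriculitis
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

ventriculitis = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])
csf_il6_pg_ml = np.array([210, 450, 800, 950, 1200, 1500, 3900, 4200, 6100, 9800])

auc = roc_auc_score(ventriculitis, csf_il6_pg_ml)
fpr, tpr, thresholds = roc_curve(ventriculitis, csf_il6_pg_ml)

# Youden's J = sensitivity + specificity - 1 = TPR - FPR;
# the optimal threshold maximizes J over all candidate cutoffs
j = tpr - fpr
best = int(np.argmax(j))
cutoff = thresholds[best]
sensitivity, specificity = tpr[best], 1.0 - fpr[best]

# Likelihood ratios at the chosen cutoff (guarding against division by zero)
pos_lr = sensitivity / (1.0 - specificity) if specificity < 1.0 else float("inf")
neg_lr = (1.0 - sensitivity) / specificity if specificity > 0.0 else float("inf")

print(f"AUC={auc:.3f}, cutoff={cutoff:.0f} pg/mL, sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}, +LR={pos_lr:.1f}, -LR={neg_lr:.2f}")
```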
Since ICP monitoring has not been proven superior to imaging and clinical examination, the avoidance and early detection of device-associated infections is of central importance. 4 This could prevent permanent device-associated secondary neurological deficits. In line with previous TBI studies, the patients included in our study represent a typical adult patient cohort with severe TBI. Similar mechanisms of injury, types of injury, ventriculitis rates, and rates of patients with polytrauma have been reported previously. 4,13,19,21 Nevertheless, based on the inclusion criteria of this study, our patients tended to have sustained more-severe TBI than patients in the STITCH[Trauma] trial; 19 the severity of TBI in our patients was similar to that of those in the BEST:TRIP trial. 4 Approximately 60% of patients with severe TBI need an EVD and may develop EVD-associated ventriculitis. EVD-associated ventriculitis contributes significantly to the high morbidity and poor outcome of ICU patients. 14 Therefore, it is essential to detect and treat ventriculitis at an early stage. 15 This may avoid ventriculitis-associated sequelae, shorten hospital stay, and save costs for the healthcare system. Currently, CSF culture remains the gold standard for diagnosing bacterial meningitis and is positive in 70%-85% of cases prior to antibiotic administration. 2,17 However, patients with TBI and an EVD frequently receive periprocedural antibiotic prophylaxis, thereby reducing the diagnostic value of microbiological CSF samples. 14,17 Moreover, in the early phase after severe TBI, patients are severely ill and need antibiotic treatment for various indications (e.g., infections of the lung, urinary tract infections, sepsis), which may further reduce the diagnostic value of microbiological CSF samples. 15 In addition, it takes at least 48 hours for routine CSF cultures to yield results, limiting their acute clinical use. 17 Here, we sought to identify the diagnostic potential of routine inflammatory markers in CSF and serum for early detection of ventriculitis after TBI and EVD implantation. We presented ROC curves and thresholds of common serum and CSF biomarkers on the day of onset of EVD-associated ventriculitis compared with noninfectious controls, which can support clinical decision-making. We identified IL-6 in the CSF to be a diagnostic marker with high diagnostic potential and a cutoff value of 4064 pg/mL for diagnosing EVD-associated ventriculitis in TBI patients. The respective values for IL-6 in serum did not reach similar diagnostic power. Moreover, IL-6 in CSF was a useful early predictive marker 24 hours before EVD-associated ventriculitis became manifest. To the best of our knowledge, the diagnostic power of IL-6 for early diagnosis of EVD-associated ventriculitis after TBI has not been defined for clinical routine use so far. 14 However, its diagnostic significance can be derived from other neurological diseases. One study investigated the role of IL-6 in CSF for predicting postoperative EVD-associated ventriculitis after neurosurgical procedures. 17 IL-6 in CSF was significantly increased in patients with ventriculitis, and it had a moderate diagnostic potential for diagnosing ventriculitis on the day of fever rise. The calculated optimal cutoff value was similar to ours. 17 Moreover, the usefulness of IL-6 in CSF for diagnosing ventriculitis has been shown in several studies of patients with aneurysmal SAH. 10,14
Again, the diagnostic potential of IL-6 in CSF and the respective cutoff values were in line with our results. 4 It has been concluded that IL-6 in CSF after SAH could be an early marker for predicting ventriculitis. 10 However, conflicting results have been shown regarding the usefulness of IL-6 in CSF for diagnosing bacterial meningitis in children 25 and adults. 24,32 In addition to its diagnostic potential for early diagnosis of ventriculitis, IL-6 in CSF has also been investigated regarding its association with injury severity and neurological outcome. Initially, elevated IL-6 concentrations in CSF after TBI were assumed to be neuroprotective and associated with improved clinical outcome. 30 Two more recent studies show, however, that persistently elevated IL-6 concentrations in CSF correlate with injury severity and increase the odds of an unfavorable global outcome. 13,21 In particular, IL-6 concentrations higher than 2000 pg/mL in CSF have a direct prognostic significance for predicting worse neurological outcome after TBI. 13,21 This threshold is about half as high as the cutoff value for predicting ventriculitis after TBI. Vasospasm can also occur in the context of TBI, and vasospasm has been shown to increase CSF IL-6 levels in SAH patients. 14 This must also be considered in the assessment of CSF IL-6 increases in TBI patients. The role of routinely determined biomarkers in the CSF such as TLC, percentage of neutrophils, glucose, and protein for diagnosing ventriculitis after TBI is unclear. 15 While many biomarker studies did not include patients with TBI, 10,14,15,23 our literature search identified 3 main studies that included subpopulations of patients with TBI and examined CSF inflammatory markers for diagnosing ventriculitis. 17,18,34 In our study, TLC in CSF was significantly increased in patients with ventriculitis and had a very good diagnostic potential for predicting EVD-associated ventriculitis (cutoff value 64.5/µL). This finding is in line with those of previous reports. 7,15 Two studies confirmed significantly increased TLC in patients with ventriculitis, but results regarding glucose, protein, and percentage of neutrophils in CSF or the serum markers WBCC and CRP were not conclusive. 17,34 Another recently published study showed that the cell index (the ratio of leukocytes to erythrocytes in CSF divided by the ratio of leukocytes to erythrocytes in the peripheral blood) had a good diagnostic potential for predicting ventriculitis. 18,23 Since the TLC in CSF already showed good diagnostic potential for diagnosing ventriculitis in this and other studies of TBI patients, it remains unclear whether the determination of the cell index could provide further diagnostic information. Two studies about the diagnostic potential of biomarkers for diagnosing ventriculitis in patients with SAH reported a good diagnostic potential of TLC in CSF 7,15 (cutoff value 635/µL), while one study with a consecutive cohort of patients with EVD detected no significant difference in the mean concentrations of TLC in CSF. 28 The clinical value of protein in CSF for predicting ventriculitis is also unclear. One study reported moderate diagnostic potential for protein in CSF, 14 and another reported no significant difference in mean protein concentrations between patients with and without ventriculitis. 28 The percentage of neutrophils in CSF was a useful marker for predicting ventriculitis in 2 previous studies. 7,14 Insufficient diagnostic potential has been described for serum IL-6 and WBCC. 7,14,15
For serum CRP, the evidence is mixed: both good and insufficient diagnostic potential have been described so far. 7,14,15,34 It is concluded that cutoff values of TLC in CSF for diagnosing ventriculitis differ widely in TBI and SAH patients, but the role of protein in CSF, percentage of neutrophils in CSF, serum CRP, and serum WBCC remains unclear in patients with TBI or SAH and deserves further investigation. The strengths of this study are the strict criteria for ventriculitis after TBI and the homogeneous study population of patients with TBI. Other studies have been limited by heterogeneous study populations of patients who experienced numerous types of brain injury (e.g., SAH, intracranial bleeding, intraventricular bleeding, craniotomy, EVD only). 7,17,18,23,28,34 Limitations include the retrospective design of this study; therefore, data acquisition may not have been as accurate as in prospective clinical studies. A high percentage of TBI patients had an acute traumatic pneumocephalus. These patients received prophylactic antibiotic treatment with ceftriaxone. Furthermore, all patients underwent perioperative antibiotic prophylaxis with cefuroxime prior to EVD implantation. Antibiotic treatment lowers the sensitivity of microbiological testing, 16 especially in patients with ventriculitis. 14 Patients with culture-negative ventriculitis may have been misclassified as not having bacterial ventriculitis in this study. This is a common problem of inflammatory marker studies. 15 In addition, the contamination of microbiological samples is a common problem in clinical routine. Despite all the precautions described in Methods, contamination can never be safely excluded. 20 However, we think that we have captured the inflammatory marker levels in a representative and exactly defined study population that was treated according to a strictly standardized operating procedure.

Conclusions Diagnosing EVD-associated ventriculitis in patients with TBI at an early stage is challenging. Daily supervision of clinical symptoms in sedated patients and daily determination of biomarker levels in the CSF may be essential. IL-6 in CSF is significantly increased after TBI in patients with ventriculitis. Patients with a CSF IL-6 level greater than 4064 pg/mL have a drastically increased posttest probability of ventriculitis. Future prospective studies will show whether additional inflammatory markers in the CSF can further increase the diagnostic accuracy of the current inflammatory markers.
Playing FPS Games with Deep Reinforcement Learning

Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present the first architecture to tackle 3D environments in first-person shooter games, which involve partially observable states. Typically, deep reinforcement learning methods only utilize visual input for training. We present a method to augment these models to exploit game-feature information, such as the presence of enemies or items, during the training phase. Our model is trained to simultaneously learn these features along with minimizing a Q-learning objective, which is shown to dramatically improve the training speed and performance of our agent. Our architecture is also modularized to allow different models to be independently trained for different phases of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as humans in deathmatch scenarios.

Introduction Deep reinforcement learning has proved to be very successful in mastering human-level control policies in a wide variety of tasks such as object recognition with visual attention (Ba, Mnih, and Kavukcuoglu 2014), high-dimensional robot control (Levine et al. 2016), and solving physics-based control problems (Heess et al. 2015). In particular, Deep Q-Networks (DQN) have been shown to be effective in playing Atari 2600 games (Mnih et al. 2013) and, more recently, in defeating world-class Go players. However, all of the above applications share the limiting assumption of full knowledge of the current state of the environment, which is usually not true in real-world scenarios. In the case of partially observable states, the learning agent needs to remember previous states in order to select optimal actions. Recently, there have been attempts to handle partially observable states in deep reinforcement learning by introducing recurrency in Deep Q-Networks. For example, Hausknecht and Stone (2015) use a deep recurrent neural network, particularly a Long Short-Term Memory (LSTM) network, to learn the Q-function to play Atari 2600 games. Foerster et al. (2016) consider a multi-agent scenario where they use deep distributed recurrent neural networks to communicate between different agents in order to solve riddles. The use of recurrent neural networks is effective in scenarios with partially observable states due to their ability to remember information for an arbitrarily long amount of time. Previous methods have usually been applied to 2D environments that hardly resemble the real world. In this paper, we tackle the task of playing a first-person shooter (FPS) game in a 3D environment. This task is much more challenging than playing most Atari games as it involves a wide variety of skills, such as navigating through a map, collecting items, and recognizing and fighting enemies. Furthermore, states are partially observable, and the agent navigates a 3D environment in a first-person perspective, which makes the task more suitable for real-world robotics applications. In this paper, we present an AI agent for playing deathmatches 1 in FPS games using only the pixels on the screen.
Our agent divides the problem into two phases: navigation (exploring the map to collect items and find enemies) and action (fighting enemies when they are observed), and uses separate networks for each phase of the game. Furthermore, the agent infers high-level game information, such as the presence of enemies on the screen, to decide its current phase and to improve its performance. We evaluate our model on two different tasks adapted from the Visual Doom AI Competition (ViZDoom) 2 using the API developed by Kempka et al. (2016) (Figure 1 shows a screenshot of Doom). The API gives direct access to the Doom game engine and allows us to synchronously send commands to the game agent and receive inputs describing the current state of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as humans in deathmatch scenarios, and we demonstrate the importance of each component of our architecture.

Background Below we give a brief summary of the DQN and DRQN models.

Deep Q-Networks Reinforcement learning deals with learning a policy for an agent interacting in an unknown environment. At each step, an agent observes the current state $s_t$ of the environment, decides on an action $a_t$ according to a policy $\pi$, and observes a reward signal $r_t$. The goal of the agent is to find a policy that maximizes the expected sum of discounted rewards $R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$, where $T$ is the time at which the game terminates, and $\gamma \in [0, 1]$ is a discount factor that determines the importance of future rewards. The Q-function of a given policy $\pi$ is defined as the expected return from executing an action $a$ in a state $s$: $Q^{\pi}(s, a) = \mathbb{E}\left[R_t \mid s_t = s, a_t = a\right]$. It is common to use a function approximator to estimate the action-value function $Q$. In particular, DQN uses a neural network parametrized by $\theta$, and the idea is to obtain an estimate of the Q-function of the current policy which is close to the optimal Q-function $Q^*$, defined as the highest return we can expect to achieve by following any strategy: $Q^*(s, a) = \max_{\pi} Q^{\pi}(s, a)$. In other words, the goal is to find $\theta$ such that $Q_{\theta}(s, a) \approx Q^*(s, a)$. The optimal Q-function verifies the Bellman optimality equation $Q^*(s, a) = \mathbb{E}_{s'}\left[r + \gamma \max_{a'} Q^*(s', a') \mid s, a\right]$. If $Q_{\theta} \approx Q^*$, it is natural to think that $Q_{\theta}$ should be close to also verifying the Bellman equation. This leads to the following loss function: $L_t(\theta_t) = \mathbb{E}_{s,a,r,s'}\left[\left(y_t - Q_{\theta_t}(s, a)\right)^2\right]$, where $t$ is the current time step, and $y_t = r + \gamma \max_{a'} Q_{\theta_t}(s', a')$. The value of $y_t$ is held fixed, which leads (up to a constant factor) to the following gradient: $\nabla_{\theta_t} L_t(\theta_t) = \mathbb{E}_{s,a,r,s'}\left[\left(y_t - Q_{\theta_t}(s, a)\right) \nabla_{\theta_t} Q_{\theta_t}(s, a)\right]$. Instead of using an accurate estimate of the above gradient, we compute it using the following single-sample approximation: $\nabla_{\theta_t} L_t(\theta_t) \approx \left(y_t - Q_{\theta_t}(s, a)\right) \nabla_{\theta_t} Q_{\theta_t}(s, a)$. Although this is a very rough approximation, these updates have been shown to be stable and to perform well in practice. Instead of performing the Q-learning updates in an online fashion, it is popular to use experience replay (Lin 1993) to break the correlation between successive samples. At each time step, agent experiences $(s_t, a_t, r_t, s_{t+1})$ are stored in a replay memory, and the Q-learning updates are done on batches of experiences randomly sampled from the memory. At every training step, the next action is generated using an ε-greedy strategy: with probability ε the next action is selected randomly, and with probability 1 − ε according to the network's best action. In practice, it is common to start with ε = 1 and to progressively decay ε.
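The two mechanisms just described, uniform experience replay and an ε-greedy exploration schedule with linear decay, can be sketched in a few lines of Python. This is a hedged illustration under our own naming (the argument q_values stands in for a forward pass of the Q-network), not the authors' code:

```python
import random
from collections import deque

class ReplayBuffer:
    """Uniform experience replay: breaks correlation between successive samples."""
    def __init__(self, capacity=1_000_000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences drop out first

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, batch_size)

def epsilon_by_step(step, eps_start=1.0, eps_end=0.1, decay_steps=1_000_000):
    # Linear decay from eps_start to eps_end over decay_steps, then held constant
    frac = min(step / decay_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

def select_action(q_values, n_actions, step):
    # With probability epsilon act randomly; otherwise take the greedy action
    if random.random() < epsilon_by_step(step):
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q_values[a])
```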
Deep Recurrent Q-Networks The above model assumes that at each step the agent receives a full observation $s_t$ of the environment. As opposed to games like Go, Atari games actually rarely return a full observation, since they still contain hidden variables, but the current screen buffer is usually enough to infer a very good sequence of actions. In partially observable environments, however, the agent only receives an observation $o_t$ of the environment, which is usually not enough to infer the full state of the system. An FPS game like Doom, where the agent's field of view is limited to 90° centered around its position, obviously falls into this category. To deal with such environments, Hausknecht and Stone (2015) introduced the Deep Recurrent Q-Network (DRQN), which does not estimate $Q(s_t, a_t)$ but $Q(o_t, h_{t-1}, a_t)$, where $h_{t-1}$ is an extra input returned by the network at the previous step that represents the hidden state of the agent. A recurrent neural network like an LSTM can be implemented on top of the normal DQN model to do that. In that case, $h_t = \mathrm{LSTM}(h_{t-1}, o_t)$, and we estimate $Q(h_t, a_t)$. Our model is built on top of the DRQN architecture.

Model Our first approach to solving the problem was to use a baseline DRQN model. Although this model achieved good performance in relatively simple scenarios (where the only available actions were to turn or attack), it did not perform well on deathmatch tasks. The resulting agents were firing at will, hoping for an enemy to come under their lines of fire. Giving a penalty for using ammo did not help: with a small penalty, agents would keep firing, and with a big one they would just never fire.

Figure 2: An illustration of the architecture of our model. The input image is given to two convolutional layers. The output of the convolutional layers is split into two streams. The first one (bottom) flattens the output (layer 3') and feeds it to an LSTM, as in the DRQN model. The second one (top) projects it to an extra hidden layer (layer 4), then to a final layer representing each game feature. During training, the game features and the Q-learning objectives are trained jointly.

Game feature augmentation We reason that the agents were not able to accurately detect enemies. The ViZDoom environment gives access to internal variables generated by the game engine. We modified the game engine so that it returns, with every frame, information about the visible entities. Therefore, at each step, the network receives a frame, as well as a Boolean value for each entity, indicating whether this entity appears in the frame or not (an entity can be an enemy, a health pack, a weapon, ammo, etc.). Although this internal information is not available at test time, it can be exploited during training. We modified the DRQN architecture to incorporate this information and to make it sensitive to game features. In the initial model, the output of the convolutional neural network (CNN) is given to an LSTM that predicts a score for each action based on the current frame and its hidden state. We added two fully connected layers of size 512 and k connected to the output of the CNN, where k is the number of game features we want to detect. At training time, the cost of the network is a combination of the normal DRQN cost and the cross-entropy loss. An illustration of the architecture is presented in Figure 2. Although a lot of game information was available, we only used an indicator of the presence of enemies on the current frame.
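To make the two-stream design more concrete, here is a hedged PyTorch sketch of a DRQN jointly trained with a game-feature head. The shared convolutional layers, the 512-unit feature layer, and the joint DRQN-plus-cross-entropy objective follow the description above, while the exact convolutional layer sizes and input resolution are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FeatureAugmentedDRQN(nn.Module):
    def __init__(self, n_actions, n_features=1, hidden=512):
        super().__init__()
        # Convolutional layers shared by the Q-learning and game-feature streams
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        conv_out = self._conv_out_dim((3, 60, 108))  # assumed input resolution
        # Stream 1 (bottom): recurrent Q-value head
        self.lstm = nn.LSTM(conv_out, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)
        # Stream 2 (top): per-frame game-feature detector (e.g., "enemy visible")
        self.feature_head = nn.Sequential(
            nn.Linear(conv_out, 512), nn.ReLU(), nn.Linear(512, n_features),
        )

    def _conv_out_dim(self, shape):
        with torch.no_grad():
            return self.conv(torch.zeros(1, *shape)).shape[1]

    def forward(self, frames, hidden_state=None):
        # frames: (batch, seq_len, channels, height, width)
        b, t = frames.shape[:2]
        feats = self.conv(frames.flatten(0, 1))               # (b*t, conv_out)
        q_seq, hidden_state = self.lstm(feats.view(b, t, -1), hidden_state)
        q_values = self.q_head(q_seq)                          # (b, t, n_actions)
        feature_logits = self.feature_head(feats).view(b, t, -1)
        return q_values, feature_logits, hidden_state

# Joint training objective, as described above: the usual DRQN temporal-difference
# loss plus a cross-entropy term on the game features, e.g.
#   loss = td_loss + nn.BCEWithLogitsLoss()(feature_logits, visible_entities)
```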
Adding this game feature dramatically improved the performance of the model on every scenario we tried. Figure 4 shows the performance of the DRQN with and without the game features. We explored other architectures to incorporate game features, such as using a separate network to make predictions and reinjecting the predicted features into the LSTM, but this did not achieve results better than the initial baseline, suggesting that sharing the convolutional layers is decisive in the performance of the model. Jointly training the DRQN model and the game feature detection allows the kernels of the convolutional layers to capture the relevant information about the game. In our experiments, it only takes a few hours for the model to reach an optimal enemy detection accuracy of 90%. After that, the LSTM is given features that often contain information about the presence of enemies and their positions, resulting in accelerated training. Augmenting a DRQN model with game features is straightforward. However, the above method cannot be applied easily to a DQN model. Indeed, the important aspect of the model is the sharing of the convolution filters between predicting game features and the Q-learning objective. The DRQN is perfectly adapted to this setting since the network takes a single frame as input and has to predict what is visible in this specific frame. However, in a DQN model, the network receives k frames at each time step and has to predict whether some features appear in the last frame only, independently of the content of the k − 1 previous frames. Convolutional layers do not perform well in this setting, and even with dropout we never obtained an enemy detection accuracy above 70% using that model.

Divide and conquer The deathmatch task is typically divided into two phases: one involves exploring the map to collect items and find enemies, and the other consists of fighting enemies (McPartland and Gallagher 2008; Tastan and Sukthankar 2011). We call these phases the navigation and action phases. Having two networks work together, each trained to act in a specific phase of the game, should naturally lead to a better overall performance. Current DQN models do not allow for the combination of different networks optimized on different tasks. However, the current phase of the game can be determined by predicting whether an enemy is visible in the current frame (action phase) or not (navigation phase), which can be inferred directly from the game features present in the proposed model architecture.

Figure 3: DQN updates in the LSTM. Only the scores of the actions taken in states 5, 6, and 7 will be updated. The first four states provide a more accurate hidden state to the LSTM, while the last state provides a target for state 7.

There are various advantages of splitting the task into two phases and training a different network for each phase. First, this makes the architecture modular and allows different models to be trained and tested independently for each phase. Both networks can be trained in parallel, which makes the training much faster compared to training a single network for the whole task. Furthermore, the navigation phase only requires three actions (move forward, turn left, and turn right), which dramatically reduces the number of state-action pairs required to learn the Q-function and makes the training much faster (Gaskett, Wettergreen, and Zelinsky 1999). More importantly, using two networks also mitigates "camper" behavior, i.e.,
a tendency to stay in one area of the map and wait for enemies, which was exhibited by the agent when we tried to train a single DQN or DRQN for the deathmatch task. We trained two different networks for our agent. We used a DRQN augmented with game features for the action network, and a simple DQN for the navigation network. During the evaluation, the action network is called at each step. If no enemies are detected in the current frame, or if the agent does not have any ammo left, the navigation network is called to decide the next action. Otherwise, the decision is given to the action network. Results in Table 2 demonstrate the effectiveness of the navigation network in improving the performance of our agent.

Reward shaping The score in the deathmatch scenario is defined as the number of frags, i.e., the number of kills minus the number of suicides. If the reward is only based on the score, the replay table is extremely sparse w.r.t. state-action pairs having non-zero rewards, which makes it very difficult for the agent to learn favorable actions. Moreover, rewards are extremely delayed and are usually not the result of a specific action: getting a positive reward requires the agent to explore the map to find an enemy and accurately aim and shoot it with a slow projectile (rocket). The delay in reward makes it difficult for the agent to learn which set of actions is responsible for what reward. To tackle the problems of a sparse replay table and delayed rewards, we introduce reward shaping, i.e., the modification of the reward function to include small intermediate rewards to speed up the learning process (Ng 2003). In addition to a positive reward for kills and negative rewards for suicides, we introduce the following intermediate rewards for shaping the reward function of the action network:
• positive reward for object pickup (health, weapons, and ammo)
• negative reward for losing health (attacked by enemies or walking on lava)
• negative reward for shooting, or losing ammo
We used different rewards for the navigation network. Since it evolves on a map without enemies and its goal is just to gather items, we simply give it a positive reward when it picks up an item, and a negative reward when it is walking on lava. We also found it very helpful to give the network a small positive reward proportional to the distance it travelled since the last step. That way, the agent explores the map faster and avoids turning in circles.

Frame skip Like most previous approaches, we used the frame-skip technique (Bellemare et al. 2012). In this approach, the agent only receives a screen input every k + 1 frames, where k is the number of frames skipped between each step. The action decided by the network is then repeated over all the skipped frames. A higher frame-skip rate accelerates the training, but can hurt the performance. Typically, aiming at an enemy sometimes requires rotating by a few degrees, which is impossible when the frame-skip rate is too high, even for human players, because the agent will repeat the rotate action many times and ultimately rotate more than it intended to. A frame skip of k = 4 turned out to be the best tradeoff.

Sequential updates To perform the DRQN updates, we use a different approach from the one presented by Hausknecht and Stone (2015). A sequence of n observations $o_1, o_2, \ldots, o_n$ is randomly sampled from the replay memory, but instead of updating all action-states in the sequence, we only consider the ones that are provided with enough history.
Indeed, the first states of the sequence will be estimated from an almost non-existent history (since $h_0$ is reinitialized at the beginning of the updates), and might be inaccurate. As a result, updating them might lead to imprecise updates. To prevent this problem, errors from states $o_1, \ldots, o_h$, where $h$ is the minimum history size for a state to be updated, are not backpropagated through the network. Errors from states $o_{h+1}, \ldots, o_{n-1}$ will be backpropagated, with $o_n$ only being used to create a target for the $o_{n-1}$ action-state. An illustration of the updating process is presented in Figure 3, where h = 4 and n = 8. In all our experiments, we set the minimum history size to 4, and we perform the updates on 5 states. Figure 4 shows the importance of selecting an appropriate number of updates. Increasing the number of updates leads to high correlation in the sampled frames, violating the DQN random sampling policy, while decreasing the number of updates makes it very difficult for the network to converge to a good policy.

Hyperparameters All networks were trained using the RMSProp algorithm and minibatches of size 32. Network weights were updated every 4 steps, so experiences are sampled on average 8 times during the training (Van Hasselt, Guez, and Silver 2015). The replay memory contained the one million most recent frames. The discount factor was set to γ = 0.99. We used an ε-greedy policy during the training, where ε was linearly decreased from 1 to 0.1 over the first million steps, and then fixed to 0.1. Different screen resolutions of the game can lead to a different field of view. In particular, a 4/3 resolution provides a 90-degree field of view, while a 16/9 resolution in Doom has a 108-degree field of view (as presented in Figure 1). In order to maximize the agent's game awareness, we used a 16/9 resolution of 440×225, which we resized to 108×60. Although training was faster with grayscale images, the model obtained lower performance, so we decided to use color images in all experiments.

Scenario We use the ViZDoom platform (Kempka et al. 2016) to conduct all our experiments and evaluate our methods on the deathmatch scenario. In this scenario, the agent plays against built-in Doom bots, and the final score is the number of frags, i.e., the number of bots killed by the agent minus the number of suicides committed. We consider two variations of this scenario, adapted from the ViZDoom AI Competition: Limited deathmatch on a known map. The agent is trained and evaluated on the same map, and the only available weapon is a rocket launcher. Agents can gather health packs and ammo. Full deathmatch on unknown maps. The agent is trained and tested on different maps. The agent starts with a pistol, but can pick up different weapons around the map, as well as gather health packs and ammo. We use 10 maps for training and 3 maps for testing. We further randomize the textures of the maps during training, as this improved the generalizability of the model. The limited deathmatch task is ideal for demonstrating the effectiveness of the model design and for choosing hyperparameters, as the training time is significantly lower than on the full deathmatch task. In order to demonstrate the generalizability of our model, we use the full deathmatch task to show that our model also works effectively on unknown maps.

Evaluation Metrics For evaluation in deathmatch scenarios, we use the kill-to-death (K/D) ratio as the scoring metric.
Since the K/D ratio is susceptible to "camper" behavior that minimizes deaths, we also report the number of kills to determine whether the agent is able to explore the map to find enemies. In addition to these, we also report the total number of objects gathered, the total number of deaths, and the total number of suicides (to analyze the effects of different design choices). Suicides are caused when the agent shoots too close to itself with a weapon that has a blast radius, such as the rocket launcher. Since suicides are counted in deaths, they provide a good way of penalizing the K/D score when the agent is shooting arbitrarily.

Results & Analysis Demo videos. Demonstrations of navigation and deathmatch on known and unknown maps are available here 3 .

Table 2: Performance of the agent with and without navigation. The agent was evaluated for 15 minutes on each map. The performance on the full deathmatch task was averaged over 10 train maps and 3 test maps.

Navigation network enhancement. Scores on both tasks with and without navigation are presented in Table 2. The agent was evaluated for 15 minutes on all the maps, and the results have been averaged over the full deathmatch maps. In both scenarios, the total number of objects picked up dramatically increases with navigation, as does the K/D ratio. In the full deathmatch, the agent starts with a pistol, with which it is relatively difficult to kill enemies. Therefore, picking up weapons and ammo is much more important in the full deathmatch, which explains why the improvement in K/D ratio is bigger in this scenario. The limited deathmatch map was relatively small, and since there were many bots, navigating was not crucial to finding other agents. As a result, the number of kills remained similar. However, the agent was able to pick up more than three times as many objects, such as health packs and ammo, with navigation. Being able to heal itself regularly, the agent decreased its number of deaths and improved its K/D ratio. Note that the scores across the two different tasks are not comparable due to differences in map sizes and in the number of objects between the different maps. The performance on the test maps is better than on the training maps, which is not necessarily surprising given that the maps all look very different. In particular, the test maps contain fewer stairs and elevation changes, which are usually difficult for the network to handle since we did not train it to look up and down.

Comparison to human players. Table 1 compares the agent to human players in single-player and multiplayer scenarios. In the single-player scenario, human players and the agent play separately against 10 bots on the limited deathmatch map for three minutes. In the multiplayer scenario, human players and the agent play against each other on the same map for five minutes. Human scores are averaged over 20 human players in both scenarios. As shown in the table, the proposed system outperforms human players in both scenarios by a substantial margin. Note that the suicide rate of humans is particularly high, indicating that it is difficult for humans to aim accurately within a limited reaction time.

Game features. Detecting enemies is critical to our agent's performance, but it is not a trivial task, as enemies can appear at various distances, from different angles, and in different environments. Including game features while training resulted in a significant improvement in the performance of the model, as shown in Figure 4.
After 65 hours of training, the best K/D score of the network without game features is less than 2.0, while the network with game features is able to achieve a maximum score over 4.0. Another advantage of using game features is that they give immediate feedback about the quality of the features produced by the convolutional network. If the enemy detection accuracy is very low, the LSTM will not receive relevant information about the presence of enemies in the frame, and the Q-learning network will struggle to learn a good policy. The enemy detection accuracy takes a few hours to converge, while training the whole model takes up to a week. Since the enemy detection accuracy correlates with the final model performance, our architecture allows us to quickly tune our hyperparameters without training the complete model. For instance, the enemy detection accuracy with and without dropout quickly converged to 90% and 70%, respectively, which allowed us to infer that dropout is crucial for the effective performance of the model. Figure 4 supports our inference that using a dropout layer significantly improves the performance of the action network on the limited deathmatch. The difference becomes even more significant in the full deathmatch, where the agent needs to generalize to unknown maps.

Conclusion In this paper, we have presented a complete architecture for playing deathmatch scenarios in FPS games. We introduced a method to augment a DRQN model with high-level game information, and modularized our architecture to incorporate independent networks responsible for different phases of the game. These methods lead to dramatic improvements over the standard DRQN model when applied to complicated tasks like a deathmatch. We showed that the proposed model is able to outperform built-in bots as well as human players, and we demonstrated the generalizability of our model to unknown maps. Moreover, our methods are complementary to recent improvements in DQN and could easily be combined with dueling architectures (Wang, de Freitas, and Lanctot 2015) and prioritized replay (Schaul et al. 2015).
Analysis of flexibility of the support and its influence on dynamics of the grab crane

A dynamic analysis of a model of a grab crane with a flexibly supported base is presented in the paper. The analyzed grab crane is a structure with an open-loop kinematic chain and rigid links. Joint coordinates and homogeneous transformations are used in order to describe the dynamic behavior of the system. The equations of motion of the system are derived using Lagrange's equations of the second order and integrated by Newmark's method with an iterative procedure. The commercial package MSC.ADAMS is used in order to verify the authors' own program. The analysis concerns the influence of different values of vertical stiffness coefficients in the supports, and of the means of fixing the load, on the motion of the load.

INTRODUCTION During the design process of lifting machines, the flexibility of the support should be taken into special consideration [1,2,3,4,5]. It mainly influences the stability of the system and thereby the efficiency of reloading work and the safety of machine operations. Therefore, mathematical models have been formulated which include the flexibility of the support already in the first phase of the design, and numerical simulations have been carried out which illustrate phenomena that are sometimes undesirable (loss of stability, uncontrolled swinging of the load, failure to realize the desired trajectory). The mathematical model of the grab crane with a flexibly supported base and numerical simulations are presented in the paper. The formulated model includes small and large motions of the flexible platform. The formalism of joint coordinates and homogeneous transformations is used in the description of the dynamics of the system. The equations of motion are derived using Lagrange's equations of the second order. The authors' own computer program has been developed, in which the equations are integrated using Newmark's method with an iterative procedure and a constant step size. The correctness of the formulated model has been verified using the commercial program MSC.ADAMS.

MATHEMATICAL MODEL OF THE GRAB CRANE The model of the analyzed grab crane is shown in Fig. 1. The system consists of seven rigid bodies constituting the structure of an open-loop kinematic chain. The motion of each link with respect to the previous one is described by the vectors of generalized coordinates $\tilde{\mathbf{q}}^{(2)}, \ldots, \tilde{\mathbf{q}}^{(7)}$ (Eqs. (1.2)-(1.7)). The motion of link p with respect to the reference system {O} is defined by the vector of generalized coordinates $\mathbf{q}^{(p)}$, with $\mathbf{q}^{(1)} = \tilde{\mathbf{q}}^{(1)}$. Therefore the motion of the analyzed system is described by the vector of generalized coordinates $\mathbf{q}$. Kinematic inputs are assumed as prescribed functions of time.

Kinetic energy and potential energy of gravity forces The kinetic energy and the potential energy of gravity forces can be written as functions of the generalized coordinates, where m is the mass of a link, g is the acceleration of gravity, r is the vector of coordinates of the center of mass in the local coordinate system of link p, and H is the pseudo-inertia matrix [6].
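As a reading aid for the integration scheme used in this work (Newmark's method combined with an iterative procedure, described further below), the following is a hedged Python sketch of a single Newmark-beta step with fixed-point iteration. The assumed form A(t, q) q̈ = f(t, q, q̇) and all names are simplifications for illustration, not the authors' code:

```python
import numpy as np

def newmark_step(A, f, t, q, v, a, h, beta=0.25, gamma=0.5, tol=1e-8, max_iter=50):
    """Advance (q, v, a) from t to t + h for equations of motion A(t, q) a = f(t, q, v).

    Because A and f depend on the (unknown) state at t + h, the acceleration is
    found by fixed-point iteration, mirroring the iterative procedure described below.
    """
    a_new = a.copy()  # start the iteration from the previous acceleration
    for _ in range(max_iter):
        # Newmark predictors for displacement and velocity
        q_new = q + h * v + h**2 * ((0.5 - beta) * a + beta * a_new)
        v_new = v + h * ((1.0 - gamma) * a + gamma * a_new)
        # Re-solve the equations of motion with the updated state
        a_next = np.linalg.solve(A(t + h, q_new), f(t + h, q_new, v_new))
        if np.linalg.norm(a_next - a_new) < tol:
            return q_new, v_new, a_next
        a_new = a_next
    return q_new, v_new, a_new  # last iterate if convergence was not reached
```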
Potential energy of spring deformation and function of dissipation of energy It is assumed that the analyzed crane is flexibly supported on four supports, which are modelled by means of spring-damping elements. The potential energy of spring deformation and the function of dissipation of energy can be expressed in quadratic forms in which c and b are the stiffness and damping coefficients of the spring-damping elements. The equations of motion are obtained from Lagrange's equations of the second order, $\frac{d}{dt}\Big(\frac{\partial E_k}{\partial \dot{q}_j}\Big) - \frac{\partial E_k}{\partial q_j} + \frac{\partial E_p}{\partial q_j} + \frac{\partial E_{sde}}{\partial q_j} + \frac{\partial D_{sde}}{\partial \dot{q}_j} = Q_j$, where $E_k$ is the kinetic energy of the links, $E_p$ is the potential energy of gravity forces of the links, $E_{sde}$ is the potential energy of spring deformation of the spring-damping elements, $D_{sde}$ is the function of dissipation of energy of the spring-damping elements, $Q_j$ are the non-potential generalized forces, and $q_j$, $\dot{q}_j$ are the generalized coordinates and velocities. The equations of motion of the system can be written in matrix form, where P is the vector of driving forces and moments and f is the vector of generalized forces without the driving forces and moments. Newmark's method with an iterative procedure [7] is used to integrate the equations of motion. This iterative procedure is necessary because the elements of the A and B matrices depend on the generalized coordinates and velocities.

NUMERICAL RESULTS The geometric parameters of the system are presented in Fig. 3. A comparison of the results obtained using the authors' own program with those obtained from MSC.ADAMS is presented in Table 3; the results were compared for the trajectory and the z coordinate of the selected points of the system in the cases of rigid and flexible support, assuming the load fixed at one point. The influence of different values of the stiffness coefficients of the spring-damping elements on the trajectory and the z coordinate of these points, and the results for different fixing points of the load (Table 4), were obtained using the authors' own program.

CONCLUSIONS In the paper, a model of the grab crane with a flexibly supported base has been presented. The model of the system has been obtained by means of joint coordinates and homogeneous transformations. The influence of the flexibility of the support on the dynamic behavior of the system has been analyzed. The good agreement between the results of the authors' own program and those obtained from MSC.ADAMS confirms the correctness of the formulated mathematical model of the grab crane, and likewise of any structure with an open-loop kinematic chain and a flexibly supported base.

Figure 2: Flexible connections of the crane. Figure 3: Parameters of the grab crane. Table 3: Trajectory and z coordinate of the compared points for rigid and flexible support. Table 4: Trajectory and z coordinate for different fixing points of the load.
StarGAN v2: Diverse Image Synthesis for Multiple Domains

A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address either of the issues, having limited diversity or multiple models for all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain differences. The code, pretrained models, and dataset can be found at https://github.com/clovaai/stargan-v2.

Introduction Image-to-image translation aims to learn a mapping between different visual domains [20]. Here, domain implies a set of images that can be grouped as a visually distinctive category, and each image has a unique appearance, which we call style. For example, we can set image domains based on the gender of a person, in which case the style includes makeup, beard, and hairstyle (top half of Figure 1). An ideal image-to-image translation method should be able to synthesize images considering the diverse styles in each domain. However, designing and learning such models becomes complicated, as there can be an arbitrarily large number of styles and domains in the dataset. To address the style diversity, much work on image-to-image translation has been developed [1,16,34,28,38,54]. These methods inject a low-dimensional latent code into the generator, which can be randomly sampled from the standard Gaussian distribution. Their domain-specific decoders interpret the latent codes as recipes for various styles when generating images. However, because these methods have only considered a mapping between two domains, they are not scalable to an increasing number of domains. For example, with K domains, these methods require training K(K−1) generators to handle translations between each and every domain, limiting their practical usage. To address the scalability, several studies have proposed a unified framework [2,7,17,30]. StarGAN [7] is one of the earliest models; it learns the mappings between all available domains using a single generator. The generator takes a domain label as an additional input and learns to transform an image into the corresponding domain. However, StarGAN still learns a deterministic mapping per domain, which does not capture the multi-modal nature of the data distribution. This limitation comes from the fact that each domain is indicated by a predetermined label. Note that the generator receives a fixed label (e.g., a one-hot vector) as input, and thus it inevitably produces the same output per domain given a source image. To get the best of both worlds, we propose StarGAN v2, a scalable approach that can generate diverse images across multiple domains. In particular, we start from StarGAN and replace its domain label with our proposed domain-specific style code that can represent diverse styles of a specific domain. To this end, we introduce two modules, a mapping network and a style encoder. The mapping network learns to transform random Gaussian noise into a style code, while the encoder learns to extract the style code from a given reference image.
Considering multiple domains, both modules have multiple output branches, each of which provides style codes for a specific domain. Finally, utilizing these style codes, our generator learns to successfully synthesize diverse images over multiple domains (Figure 1). We first investigate the effect of individual components of StarGAN v2 and show that our model indeed benefits from using the style code (Section 3.1). We empirically demonstrate that our proposed method is scalable to multiple domains and gives significantly better results in terms of visual quality and diversity compared to the leading methods (Section 3.2). Last but not least, we present a new dataset of animal faces (AFHQ) with high quality and wide variations (Appendix A) to better evaluate the performance of image-to-image translation models under large inter- and intra-domain differences. We make this dataset publicly available to the research community.

StarGAN v2 In this section, we describe our proposed framework and its training objective functions.

Proposed framework Let X and Y be the sets of images and possible domains, respectively. Given an image x ∈ X and an arbitrary domain y ∈ Y, our goal is to train a single generator G that can generate diverse images of each domain y that correspond to the image x. We generate domain-specific style vectors in the learned style space of each domain and train G to reflect the style vectors. Figure 2 illustrates an overview of our framework, which consists of the four modules described below. Generator (Figure 2a). Our generator G translates an input image x into an output image G(x, s) reflecting a domain-specific style code s, which is provided either by the mapping network F or by the style encoder E. We use adaptive instance normalization (AdaIN) [15,22] to inject s into G. We observe that s is designed to represent a style of a specific domain y, which removes the necessity of providing y to G and allows G to synthesize images of all domains. Mapping network (Figure 2b). Given a latent code z and a domain y, our mapping network F generates a style code s = F_y(z), where F_y(·) denotes an output of F corresponding to the domain y. F consists of an MLP with multiple output branches to provide style codes for all available domains. F can produce diverse style codes by randomly sampling the latent vector z ∈ Z and the domain y ∈ Y. Our multi-task architecture allows F to efficiently and effectively learn style representations of all domains. Style encoder (Figure 2c). Given an image x and its corresponding domain y, our encoder E extracts the style code s = E_y(x) of x. Here, E_y(·) denotes the output of E corresponding to the domain y. Similar to F, our style encoder E benefits from the multi-task learning setup. E can produce diverse style codes using different reference images. This allows G to synthesize an output image reflecting the style s of a reference image x. Discriminator (Figure 2d). Our discriminator D is a multi-task discriminator [30,35], which consists of multiple output branches. Each branch D_y learns a binary classification determining whether an image x is a real image of its domain y or a fake image G(x, s) produced by G.

Training objectives Given an image x ∈ X and its original domain y ∈ Y, we train our framework using the following objectives. Adversarial objective. During training, we sample a latent code z ∈ Z and a target domain ỹ ∈ Y randomly, and generate a target style code s̃ = F_ỹ(z).
The generator G takes an image x and s̃ as inputs and learns to generate an output image G(x, s̃) via an adversarial loss

$\mathcal{L}_{adv} = \mathbb{E}_{x,y}\left[\log D_y(x)\right] + \mathbb{E}_{x,\tilde{y},z}\left[\log\left(1 - D_{\tilde{y}}(G(x,\tilde{s}))\right)\right],$

where D_y(·) denotes the output of D corresponding to the domain y. The mapping network F learns to provide the style code s̃ that is likely in the target domain ỹ, and G learns to utilize s̃ and generate an image G(x, s̃) that is indistinguishable from real images of the domain ỹ. Style reconstruction. In order to enforce the generator G to utilize the style code s̃ when generating the image G(x, s̃), we employ a style reconstruction loss

$\mathcal{L}_{sty} = \mathbb{E}_{x,\tilde{y},z}\left[\left\lVert \tilde{s} - E_{\tilde{y}}(G(x,\tilde{s})) \right\rVert_1\right].$

This objective is similar to the previous approaches [16,54], which employ multiple encoders to learn a mapping from an image to its latent code. The notable difference is that we train a single encoder E to encourage diverse outputs for multiple domains. At test time, our learned encoder E allows G to transform an input image, reflecting the style of a reference image. Style diversification. To further enable the generator G to produce diverse images, we explicitly regularize G with the diversity sensitive loss [34,48]

$\mathcal{L}_{ds} = \mathbb{E}_{x,\tilde{y},z_1,z_2}\left[\left\lVert G(x,\tilde{s}_1) - G(x,\tilde{s}_2) \right\rVert_1\right],$

where the target style codes s̃_1 and s̃_2 are produced by F conditioned on two random latent codes z_1 and z_2 (i.e., s̃_i = F_ỹ(z_i) for i ∈ {1, 2}). Maximizing the regularization term forces G to explore the image space and discover meaningful style features to generate diverse images. Note that in the original form, a small difference ‖z_1 − z_2‖_1 in the denominator increases the loss significantly, which makes the training unstable due to large gradients. Thus, we remove the denominator part and devise a new equation for stable training but with the same intuition. Preserving source characteristics. To guarantee that the generated image G(x, s̃) properly preserves the domain-invariant characteristics (e.g., pose) of its input image x, we employ the cycle consistency loss [7,24,53]

$\mathcal{L}_{cyc} = \mathbb{E}_{x,y,\tilde{y},z}\left[\left\lVert x - G(G(x,\tilde{s}), \hat{s}) \right\rVert_1\right],$

where ŝ = E_y(x) is the estimated style code of the input image x, and y is the original domain of x. By encouraging the generator G to reconstruct the input image x with the estimated style code ŝ, G learns to preserve the original characteristics of x while changing its style faithfully. Full objective. Our full objective functions can be summarized as follows:

$\min_{G,F,E}\,\max_{D}\;\; \mathcal{L}_{adv} + \lambda_{sty}\,\mathcal{L}_{sty} - \lambda_{ds}\,\mathcal{L}_{ds} + \lambda_{cyc}\,\mathcal{L}_{cyc},$

where λ_sty, λ_ds, and λ_cyc are hyperparameters for each term. We also further train our model in the same manner as the above objective, using reference images instead of latent vectors when generating style codes. We provide the training details in Appendix B.

Table 1. Performance of various configurations on CelebA-HQ. Fréchet inception distance (FID) indicates the distance between two distributions of real and generated images (lower is better), while learned perceptual image patch similarity (LPIPS) measures the diversity of generated images (higher is better).

Experiments In this section, we describe evaluation setups and conduct a set of experiments. We analyze the individual components of StarGAN v2 (Section 3.1) and compare our model with three leading baselines on diverse image synthesis (Section 3.2). All experiments are conducted using unseen images during the training phase. Baselines. We use MUNIT [16], DRIT [28], and MSGAN [34] as our baselines, all of which learn multi-modal mappings between two domains. For multi-domain comparisons, we train these models multiple times for every pair of image domains. We also compare our method with StarGAN [7], which learns mappings among multiple domains using a single generator. All the baselines are trained using the implementations provided by the authors.
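As a reading aid for the objectives above, the following hedged PyTorch sketch shows how the four terms can combine into a single generator-side loss. The module interfaces, the use of a binary cross-entropy GAN loss, and all names are assumptions for illustration, not the authors' released implementation:

```python
import torch
import torch.nn.functional as F_nn

def generator_loss(G, D, E, F, x, y, y_trg, z1, z2,
                   lambda_sty=1.0, lambda_ds=1.0, lambda_cyc=1.0):
    # Two target style codes from the mapping network for the same target domain
    s1, s2 = F(z1, y_trg), F(z2, y_trg)
    fake1, fake2 = G(x, s1), G(x, s2)

    # Adversarial term: the generator tries to make D label its output as real
    logits = D(fake1, y_trg)
    adv = F_nn.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # Style reconstruction (Eq. 2): the style encoder should recover s1 from the output
    sty = torch.mean(torch.abs(s1 - E(fake1, y_trg)))

    # Diversity-sensitive term (Eq. 3): push the two stylized outputs apart (maximized)
    ds = torch.mean(torch.abs(fake1 - fake2))

    # Cycle consistency (Eq. 4): map back using the estimated source style code
    s_org = E(x, y)
    cyc = torch.mean(torch.abs(x - G(fake1, s_org)))

    # Full objective (Eq. 5), generator side: note the minus sign on the diversity term
    return adv + lambda_sty * sty - lambda_ds * ds + lambda_cyc * cyc
```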
Datasets. We evaluate StarGAN v2 on CelebA-HQ [21] and our new AFHQ dataset (Appendix A). We separate CelebA-HQ into two domains of male and female, and AFHQ into three domains of cat, dog, and wildlife. Other than the domain labels, we do not use any additional information (e.g., facial attributes of CelebA-HQ or breeds of AFHQ) and let the models learn such information as styles without supervision. For a fair comparison, all images are resized to 256 × 256 resolution for training, which is the highest resolution used in the baselines. Evaluation metrics. We evaluate both the visual quality and the diversity of generated images using the Fréchet inception distance (FID) [14] and learned perceptual image patch similarity (LPIPS) [52]. We compute FID and LPIPS for every pair of image domains within a dataset and report their average values. The details on evaluation metrics and protocols are further described in Appendix C.

Analysis of individual components We evaluate the individual components that are added to our baseline StarGAN using CelebA-HQ. Table 1 gives FID and LPIPS for several configurations, where each component is cumulatively added on top of StarGAN. An input image and the corresponding generated images of each configuration are shown in Figure 3. The baseline configuration (A) corresponds to the basic setup of StarGAN, which employs WGAN-GP [11], an ACGAN discriminator [39], and depth-wise concatenation [36] for providing the target domain information to the generator. As shown in Figure 3a, the original StarGAN produces only a local change by applying makeup on the input image. We first improve our baseline by replacing the ACGAN discriminator with a multi-task discriminator [35,30], allowing the generator to transform the global structure of an input image, as shown in configuration (B). Exploiting recent advances in GANs, we further enhance the training stability and construct a new baseline (C) by applying R_1 regularization [35] and switching the depth-wise concatenation to adaptive instance normalization (AdaIN) [9,15]. Note that we do not report LPIPS of these variations in Table 1, since they are not designed to produce multiple outputs for a given input image and a target domain. To induce diversity, one can think of feeding a latent code z directly into the generator G and imposing the latent reconstruction loss ‖z − E(G(x, z, y))‖_1 [16,54]. However, in a multi-domain scenario, we observe that this baseline (D) does not encourage the network to learn meaningful styles and fails to provide as much diversity as we expect. We conjecture that this is because latent codes have no capability of separating domains, and thus the latent reconstruction loss models domain-shared styles (e.g., color) rather than domain-specific ones (e.g., hairstyle). Note that the FID gap between baselines (C) and (D) is simply due to the difference in the number of output samples.

Figure 4. Reference-guided image synthesis results on CelebA-HQ (female and male reference images versus source images). The source and reference images in the first row and the first column are real images, while the rest are images generated by our proposed model, StarGAN v2. Our model learns to transform a source image reflecting the style of a given reference image. High-level semantics such as hairstyle, makeup, beard, and age are followed from the reference images, while the pose and identity of the source images are preserved.
To learn meaningful styles, instead of feeding a latent code into G directly, we transform a latent code z into a domain-specific style code s̃ through our proposed mapping network (Figure 2b) and inject the style code into the generator (E). Here, we also introduce the style reconstruction loss (Eq. (2)). Note that each output branch of our mapping network is responsible for a particular domain, so style codes have no ambiguity in separating domains. Unlike the latent reconstruction loss, the style reconstruction loss allows the generator to produce diverse images reflecting domain-specific styles. Finally, we further improve the network to produce diverse outputs by adopting the diversity regularization (Eq. (3)); this configuration (F) corresponds to our proposed method, StarGAN v2. Figure 4 shows that StarGAN v2 can synthesize images that reflect diverse styles of references, including hairstyle, makeup, and beard, without hurting the source characteristics.

3.2. Comparison on diverse image synthesis

In this section, we evaluate StarGAN v2 on diverse image synthesis from two perspectives: latent-guided synthesis and reference-guided synthesis.

Latent-guided synthesis. Figure 5 provides a qualitative comparison of the competing methods. Each method produces multiple outputs using random noise. For CelebA-HQ, we observe that our method synthesizes images with higher visual quality than the baseline models. In addition, our method is the only model that can successfully change the entire hairstyle of the source images, which requires non-trivial effort (e.g. generating ears). For AFHQ, which has relatively large variations, the performance of the baselines degrades considerably, while our method still produces images with high quality and diverse styles.

Table 2. Quantitative comparison on latent-guided synthesis. The FIDs of real images are computed between the training and test sets. Note that they may not be optimal values since the number of test images is insufficient, but we report them for reference.

As shown in Table 2, our method outperforms all the baselines by a large margin in terms of visual quality. For CelebA-HQ and AFHQ, our method achieves FIDs of 13.7 and 16.2, respectively, more than a twofold improvement over the previous leading method. Our LPIPS is also the highest on CelebA-HQ, which implies that our model produces the most diverse results for a given input. We conjecture that the high LPIPS values of the baseline models on AFHQ are due to their spurious artifacts.

Reference-guided synthesis. To obtain the style code from a reference image, we sample test images from a target domain and feed them to the encoder network of each method. For CelebA-HQ (Figure 6a), our method successfully renders distinctive styles (e.g. bangs, beard, makeup, and hairstyle), while the others mostly match the color distribution of the reference images. For the more challenging AFHQ (Figure 6b), the baseline models suffer from a large domain shift: they hardly reflect the style of each reference image and only match the domain. In contrast, our model renders the distinctive styles (e.g. breeds) of each reference image, as well as its fur pattern and eye color. Note that StarGAN v2 produces high-quality images across all domains, and these results come from a single generator. Since the other baselines are trained individually for each pair of domains, their output quality fluctuates across domains. For example, in AFHQ (Figure 6b), the baseline models work reasonably well in dog-to-wildlife (2nd row) while they fail in cat-to-dog (1st row).
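The two synthesis modes compared above differ only in where the style code comes from. A schematic usage is sketched below; the module handles F_map, E, and G and their call signatures are hypothetical.

```python
# Style codes come either from the mapping network (latent-guided)
# or from the style encoder (reference-guided); G is shared by both.
import torch

z = torch.randn(1, 16)            # latent code (dimension from Appendix E)
s_latent = F_map(z, y_trg)        # latent-guided: sample a style for domain y_trg
s_ref = E(x_ref, y_trg)           # reference-guided: extract the reference's style
out_latent = G(x_src, s_latent)   # diverse outputs from random noise
out_ref = G(x_src, s_ref)         # output reflecting the reference image
```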
Table 3. Quantitative comparison on reference-guided synthesis. We sample ten reference images to synthesize diverse images.

The LPIPS of StarGAN v2 is also the highest among the competitors, which implies that our model produces the most diverse results with respect to the styles of the reference images. Here, MUNIT and DRIT suffer from mode collapse in AFHQ, which results in lower LPIPS and higher FID than the other methods.

Human evaluation. We use Amazon Mechanical Turk (AMT) to compare the user preferences of our method with the baseline approaches. Given a pair of source and reference images, AMT workers are instructed to select one of four image candidates from the methods, whose order is randomly shuffled. We ask separately which model offers the best image quality and which model best stylizes the input image considering the reference image. For each comparison, we randomly generate 100 questions, and each question is answered by 10 workers. We also ask each worker a few simple questions to detect unworthy workers. The total number of valid workers is 76. As shown in Table 4, our method obtains the majority of votes in all instances, especially on the challenging AFHQ dataset and on the question about style reflection. These results show that StarGAN v2 better extracts and renders the styles onto the input image than the other baselines.

4. Discussion

We discuss several reasons why StarGAN v2 can successfully synthesize images of diverse styles over multiple domains. First, our style code is generated separately per domain by the multi-head mapping network and the style encoder. By doing so, our generator can focus solely on using the style code, whose domain-specific information is already handled by the mapping network (Section 3.1). Second, following the insight of StyleGAN [22], our style space is produced by learned transformations. This provides more flexibility to our model than the baselines [16,28,34], which assume that the style space is a fixed Gaussian distribution (Section 3.2). Last but not least, our modules benefit from fully exploiting the training data of multiple domains. By design, the shared part of each module should learn domain-invariant features, which induces a regularization effect and encourages better generalization to unseen samples. To show that our model generalizes to unseen images, we test a few samples from FFHQ [22] with our model trained on CelebA-HQ (Figure 7). Here, StarGAN v2 successfully captures the styles of the references and renders them correctly onto the source images.

5. Related work

Generative adversarial networks (GANs) [10] have shown impressive results in many computer vision tasks such as image synthesis [4,31,8], colorization [18,50], and super-resolution [27,47]. Along with improving the visual quality of generated images, their diversity has also been considered an important objective, tackled either by dedicated loss functions [34,35] or by architectural design [4,22]. StyleGAN [22] introduces a non-linear mapping function that embeds an input latent code into an intermediate style space to better represent the factors of variation.
However, this method requires non-trivial effort when transforming a real image, since its generator is not designed to take an image as input.

Early image-to-image translation methods [20,53,29] are well known to learn a deterministic mapping even with stochastic noise inputs. Several methods reinforce the connection between stochastic noise and the generated image for diversity, via marginal matching [1], latent regression [54,16], and diversity regularization [48,34]. Other approaches produce various outputs with the guidance of reference images [5,6,32,40]. However, all these methods consider only two domains, and their extension to multiple domains is non-trivial. Recently, FUNIT [30] tackled multi-domain image translation using a few reference images from a target domain, but it requires fine-grained class labels and cannot generate images from random noise. Our method provides both latent-guided and reference-guided synthesis and can be trained with a coarsely labeled dataset. In parallel work, Yu et al. [51] tackle the same issue, but they define the style as domain-shared characteristics rather than domain-specific ones, which limits the output diversity.

6. Conclusion

We proposed StarGAN v2, which addresses two major challenges in image-to-image translation: translating an image of one domain to diverse images of a target domain, and supporting multiple target domains. The experimental results showed that our model can generate images with rich styles across multiple domains, remarkably outperforming the previous leading methods [16,28,34]. We also released a new dataset of animal faces (AFHQ) for evaluating methods under large inter- and intra-domain variation.

A. The AFHQ dataset

We release a new dataset of animal faces, Animal Faces-HQ (AFHQ), consisting of 15,000 high-quality images at 512 × 512 resolution. Figure 8 shows example images from the AFHQ dataset. The dataset includes three domains of cat, dog, and wildlife, each providing 5,000 images. By having multiple (three) domains and diverse images of various breeds (at least eight per domain), AFHQ poses a more challenging image-to-image translation problem. For each domain, we select 500 images as a test set and provide all remaining images as a training set. We collected images with permissive licenses from the Flickr and Pixabay websites. All images are vertically and horizontally aligned to have the eyes at the center, and low-quality images were discarded by human annotators. We have made the dataset available at https://github.com/clovaai/stargan-v2.

B. Training details

For fast training, the batch size is set to eight and the model is trained for 100K iterations. Training takes about three days on a single Tesla V100 GPU with our PyTorch [41] implementation. We set λ_sty = 1, λ_ds = 1, and λ_cyc = 1 for CelebA-HQ, and λ_sty = 1, λ_ds = 2, and λ_cyc = 1 for AFHQ. To stabilize training, the weight λ_ds is linearly decayed to zero over the 100K iterations. We adopt the non-saturating adversarial loss [10] with R_1 regularization [35] using γ = 1. We use the Adam [25] optimizer with β_1 = 0 and β_2 = 0.99. The learning rates for G, D, and E are set to 10^{-4}, while that of F is set to 10^{-6}. For evaluation, we employ exponential moving averages over the parameters [21,49] of all modules except D. We initialize the weights of all modules using He initialization [12] and set all biases to zero, except for the biases associated with the scaling vectors of AdaIN, which are set to one.
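A sketch of this optimization setup is shown below. The variable names and the EMA decay rate (which the text does not state) are assumptions; the learning rates, betas, and λ_ds schedule follow the paragraph above.

```python
# Sketch of the training setup from Appendix B (hypothetical handles
# G, D, E, F_map for the four modules).
import copy
import torch

total_iters = 100_000
lambda_ds_init = 1.0  # 2.0 for AFHQ

opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.99))
opt_D = torch.optim.Adam(D.parameters(), lr=1e-4, betas=(0.0, 0.99))
opt_E = torch.optim.Adam(E.parameters(), lr=1e-4, betas=(0.0, 0.99))
opt_F = torch.optim.Adam(F_map.parameters(), lr=1e-6, betas=(0.0, 0.99))

G_ema = copy.deepcopy(G)  # moving average of G, used at evaluation

for it in range(total_iters):
    # Linearly decay the diversity-loss weight to zero over training.
    lambda_ds = lambda_ds_init * (1.0 - it / total_iters)
    ...  # forward/backward passes and optimizer steps as in Eq. (5)
    with torch.no_grad():
        for p_ema, p in zip(G_ema.parameters(), G.parameters()):
            p_ema.lerp_(p, 0.001)  # assumed EMA rate; not given in the paper
```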
C. Evaluation protocol

This section provides details of the evaluation metrics and protocols used in all experiments.

Fréchet inception distance (FID) [14] measures the discrepancy between two sets of images. We use the feature vectors from the last average pooling layer of the ImageNet-pretrained Inception-V3 [44]. For each test image from a source domain, we translate it into a target domain using 10 latent vectors randomly sampled from the standard Gaussian distribution. We then calculate the FID between the translated images and the training images of the target domain. We calculate FID values for every pair of image domains (e.g. female ↔ male for CelebA-HQ) and report the average value. Note that, for reference-guided synthesis, each source image is transformed using 10 reference images randomly sampled from the test set of the target domain.

Learned perceptual image patch similarity (LPIPS) [52] measures the diversity of generated images using the L_1 distance between features extracted from the ImageNet-pretrained AlexNet [26]. For each test image from a source domain, we generate 10 outputs of a target domain using 10 randomly sampled latent vectors. We then compute the average of the pairwise distances among all outputs generated from the same input (i.e. 45 pairs). Finally, we report the average LPIPS value over all test images. For reference-guided synthesis, each source image is transformed using 10 reference images to produce 10 outputs.
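The LPIPS diversity protocol above can be reproduced roughly as follows. The sketch assumes the publicly available lpips package (its LPIPS class with net='alex' matches the AlexNet backbone used here); it is an illustration, not the authors' evaluation code.

```python
# Mean pairwise LPIPS over the 10 outputs of one source image (45 pairs).
import itertools
import lpips

loss_fn = lpips.LPIPS(net='alex')  # AlexNet backbone, as in the paper

def diversity_score(outputs):
    """outputs: list of 10 generated images (1x3xHxW tensors in [-1, 1])
    for a single source image; returns the mean pairwise LPIPS."""
    dists = [loss_fn(a, b).item()
             for a, b in itertools.combinations(outputs, 2)]
    return sum(dists) / len(dists)
```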
D. Additional results

We provide additional reference-guided image synthesis results on both CelebA-HQ and AFHQ (Figures 9 and 10). In CelebA-HQ, StarGAN v2 synthesizes the source identity in diverse appearances, reflecting reference styles such as hairstyle and makeup. In AFHQ, the results follow the breed and hair of the reference images while preserving the pose of the source images. Interpolation results between styles can be found at https://youtu.be/0EVh5Ki4dIY.

Figure 9. Reference-guided image synthesis results on CelebA-HQ. The source and reference images in the first row and the first column are real images, while the rest are generated by our proposed model, StarGAN v2. Our model learns to transform a source image to reflect the style of a given reference image. High-level semantics such as hairstyle, makeup, beard, and age follow the reference images, while the pose and identity of the source images are preserved. Note that the images in each column share a single identity with different styles, and those in each row share a style with different identities.

Figure 10. Reference-guided image synthesis results on AFHQ. All images except the sources and references are generated by our proposed model, StarGAN v2. High-level semantics such as hair follow the references, while the pose of the sources is preserved.

E. Network architecture

In this section, we provide architectural details of StarGAN v2, which consists of the four modules described below.

Generator (Table 5). For AFHQ, our generator consists of four downsampling blocks, four intermediate blocks, and four upsampling blocks, all of which inherit pre-activation residual units [13]. We use instance normalization (IN) [45] and adaptive instance normalization (AdaIN) [15,22] for the downsampling and upsampling blocks, respectively. A style code is injected into all AdaIN layers, providing scaling and shifting vectors through learned affine transformations. For CelebA-HQ, we increase the number of downsampling and upsampling layers by one. We also remove all shortcuts in the upsampling residual blocks and add skip connections with the adaptive wing based heatmap [46].

Mapping network (Table 6). Our mapping network consists of an MLP with K output branches, where K indicates the number of domains. Four fully connected layers are shared among all domains, followed by four domain-specific fully connected layers for each domain. We set the dimensions of the latent code, the hidden layers, and the style code to 16, 512, and 64, respectively. We sample the latent code from the standard Gaussian distribution. We do not apply pixel normalization [22] to the latent code, as it was not observed to improve model performance in our tasks. We also tried feature normalizations [3,19], but these degraded performance.

Style encoder (Table 7). Our style encoder consists of a CNN with K output branches, where K is the number of domains. Six pre-activation residual blocks are shared among all domains, followed by one domain-specific fully connected layer for each domain. We do not use global average pooling [16], in order to extract fine style features of a given reference image. The output dimension "D" in Table 7 is set to 64, the dimension of the style code.

Discriminator (Table 7). Our discriminator is a multi-task discriminator [35], which contains multiple linear output branches. The discriminator contains six pre-activation residual blocks with leaky ReLU [33]. We use K fully connected layers for the real/fake classification of each domain, where K indicates the number of domains. The output dimension "D" is set to 1 for real/fake classification. We use neither feature normalization techniques [19,45] nor PatchGAN [20], as they have been observed not to improve output quality. We have observed that, in our settings, the multi-task discriminator provides better results than other types of conditional discriminators [36,37,39,42].

Table 7. Style encoder and discriminator architectures. D and K represent the output dimension and the number of domains, respectively.
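To make the multi-task discriminator's branch selection concrete, a minimal sketch is given below. The shared trunk is passed in as an argument, and the head layout (one linear layer producing K logits) is an assumption consistent with, but not identical to, Table 7.

```python
# Multi-task discriminator sketch: one real/fake logit per domain; only
# the branch of the image's domain label is used during training.
import torch
import torch.nn as nn

class MultiTaskDiscriminator(nn.Module):
    def __init__(self, trunk, feat_dim, num_domains):
        super().__init__()
        self.trunk = trunk                              # shared residual trunk
        self.heads = nn.Linear(feat_dim, num_domains)   # K linear branches

    def forward(self, x, y):
        h = self.trunk(x).flatten(1)          # (N, feat_dim)
        out = self.heads(h)                   # (N, K) logits, one per domain
        idx = torch.arange(x.size(0), device=x.device)
        return out[idx, y]                    # (N,) logit of each image's domain y
```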